Refactor test/dune and add a failing test #2

Merged
reynir merged 12 commits from batch-test into main 2024-10-29 11:21:16 +00:00
Owner

The failing test shows how writes during a change_and_push are not readable.

A dune runtest run produces the following output / test failure:

File "test/fold2.t", line 1, characters 0-0:
diff --git a/_build/.sandbox/728d444d432837b3609fcb9f66eb6ae8/default/test/fold2.t b/_build/.sandbox/728d444d432837b3609fcb9f66eb6ae8/default/test/fold2.t.corrected
index 5cd2548..37fb6a0 100644
--- a/_build/.sandbox/728d444d432837b3609fcb9f66eb6ae8/default/test/fold2.t
+++ b/_build/.sandbox/728d444d432837b3609fcb9f66eb6ae8/default/test/fold2.t.corrected
@@ -8,7 +8,7 @@ Reading during batch operation
   > get /bar
   > quit
   > quit
-  00000000: 4769 7420 726f 636b 7321                 Git rocks!
+  Cannot find the key /bar.
   $ cd simple
   $ git log main --pretty=oneline | wc -l
   1
reynir added 1 commit 2024-10-25 09:49:23 +00:00
The failing test shows how writes during a change_and_push are not
readable.
Owner

The question now is which behaviour we strive for... I guess the "read returns the newest (batched)" behaviour is reasonable from the perspective of caldav and refactorings. So, from the semantics I'd be keen to get that behaviour (since it relaxes my brain).

From my point of view, with caldav in mind: we have some changes to various files, and want to ensure that either all of the changes are committed or none of them.
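
A minimal sketch of that usage pattern -- using the Store module set up in the alcotest-lwt tests later in this thread; the keys and the ics_a/ics_b contents are made up:

Store.change_and_push t (fun t' ->
    (* intent: both writes become part of a single commit, or none of them *)
    Store.set t' (Mirage_kv.Key.v "/calendars/alice/a.ics") ics_a >>= fun _ ->
    Store.set t' (Mirage_kv.Key.v "/calendars/alice/b.ics") ics_b)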

Author
Owner

I tried the following, which I hoped would work as I thought: that is, if you're doing reads inside a change_and_push (that is, inside f in change_and_push fs f), then you get the "staged" changes; otherwise you get the old changes. But it doesn't seem to do what I thought.

$ git diff -w
diff --git a/src/git_kv.ml b/src/git_kv.ml
index 3e72f27..71c1a7f 100644
--- a/src/git_kv.ml
+++ b/src/git_kv.ml
@@ -410,15 +410,18 @@ module Make (Pclock : Mirage_clock.PCLOCK) = struct
 
   let get t key =
     let open Lwt.Infix in
-    match t.head with
-    | None -> Lwt.return (Error (`Not_found key))
-    | Some head ->
+    match t.in_closure, t.committed, t.head with
+    | true, Some (head, _), _
+    | _, _, Some head ->
+      begin
         Search.find t.store head (`Commit (`Path (Mirage_kv.Key.segments key))) >>= function
         | None -> Lwt.return (Error (`Not_found key))
         | Some blob ->
           Store.read_exn t.store blob >|= function
           | Blob b -> Ok (Git.Blob.to_string b)
           | _ -> Error (`Value_expected key)
+      end
+    | _, _, None -> Lwt.return (Error (`Not_found key))
 
   let get_partial t key ~offset ~length =
     let open Lwt_result.Infix in
Owner

But it doesn't seem to do what I though.

Yes, you must update the set function and add a t.head <- new_head:

      unroll_tree t ~tree_root_hash (`Normal, name, hash) (List.tl rpath) >>= fun tree_root_hash ->
+     t.head <- Some tree_root_hash;
      match and_commit with
      | Some (_old_tree_root_hash, th) ->
        t.committed <- Some (tree_root_hash, th) ;
        Lwt.return_ok ()

But some questions are raised, specifically about cooperation between multiple change_and_push...

reynir added 1 commit 2024-10-25 11:03:49 +00:00
Author
Owner

I pushed a commit that implements what I had in mind for get, get_partial and list at least. For digest and last_modified I'm unsure it's correct. For remove it's incorrect and I will have to look into that. Finally, noppp is a terrible function name and should be renamed!

hannes added 1 commit 2024-10-26 15:34:11 +00:00
robur added 2 commits 2024-10-28 12:05:54 +00:00
Fixed last_modified (when running inside change_and_push),
and change_and_push.

Co-Authored-By: Reynir Björnsson <reynir@reynir.dk>
Co-Authored-By: Hannes Mehnert <hannes@mehnert.org>
Owner

This is what @reynir and myself developed. The semantics have changed, but we believe that this is in good shape now.

We also added some alcotest-lwt tests to show the intended semantics and to check that this works nicely.

dinosaure reviewed 2024-10-29 08:48:07 +00:00
src/git_kv.ml Outdated
@@ -684,3 +690,2 @@
tree_root_hash_of_store t >>= fun tree_root_hash ->
t.committed <- Some (tree_root_hash, th) ;
let t' = { t with in_closure= true } in
t.change_and_push_waiter <- Some th ;
Owner

t.change_and_push is already set 3 lines before.

Owner

indeed, removed in 750ec11

hannes marked this conversation as resolved
hannes added 1 commit 2024-10-29 08:50:59 +00:00
hannes reviewed 2024-10-29 08:53:01 +00:00
src/git_kv.ml Outdated
@@ -683,0 +685,4 @@
t.change_and_push_waiter <- Some th;
( match th' with
| None -> Lwt.return_unit
| Some th -> th ) >>= fun () ->
Owner

the reason for this change is: we may have one change_and_push that is active, and when multiple other change_and_push calls should then be executed, each needs to wait for the one before it.

previously, the code had all other change_and_push calls wait for the same task -- and thus multiple waiting change_and_push calls may have been woken up at the same time, leading to races.

Owner

I'm not sure about the serialization of change_and_push, and I think that, after our old th continues, we should ensure that only a single task gets to set t.change_and_push_waiter, and recheck this value. A Lwt_condition.t/Lwt_mutex.t is probably needed, in my opinion, to really ensure the serialization and avoid two pending tasks continuing the change_and_push process.

hannes reviewed 2024-10-29 08:55:16 +00:00
src/git_kv.ml Outdated
@@ -692,2 +697,4 @@
if Digestif.SHA1.equal new_tree_root_hash tree_root_hash
then Lwt.return_ok res (* XXX(dinosaure): nothing to send! *)
else if not (Option.equal Digestif.SHA1.equal t.head t'.head) then
Lwt.return (Error (`Msg "store was modified outside of change_and_push, please retry"))
Owner

this change requires that the head didn't mutate since we started the change_and_push. we could as well do a rebase or merge, but we weren't able to find merge/rebase code.

this is illustrated by the test set_outside_change_and_push.

hannes reviewed 2024-10-29 08:56:21 +00:00
@@ -482,0 +481,4 @@
| _ ->
Ok (Option.fold
~none:Ptime.epoch
~some:Fun.id (Ptime.of_float_s (Int64.to_float (now ())))))
Owner

we don't have a commit in this case (we're in a change_and_push), so let's use the current timestamp. this is good enough for our definition of last_modified (which is the timestamp of the last commit).

Owner

I'm not sure about the serialization of change_and_push, and I think that, after our old th continues, we should ensure that only a single task gets to set t.change_and_push_waiter, and recheck this value. A Lwt_condition.t/Lwt_mutex.t is probably needed, in my opinion, to really ensure the serialization and avoid two pending tasks continuing the change_and_push process.

So our idea was:
Task A runs change_and_push, and has set the change_and_push_waiter.

Now, task B enters the change_and_push. t.committed is None (since we're not inside another change_and_push). Then t.change_and_push_waiter is set to a fresh task, and the old value is waited for.

Once task A finishes change_and_push, it wakes up task B.

If there's a task C coming along, doing change_and_push, it will set t.change_and_push_waiter to a new task, and wait for the task B to wake up.

So, the t.change_and_push_waiter is only ever used by a single task, since on entry (without any bind / yield point) of change_and_push, the value in (the main) t is set to a fresh task, and then the old value is waited for.
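
A rough sketch of that entry/exit sequence (th, th' and change_and_push_waiter as in the review excerpt above; everything else is assumed and not the actual implementation):

let change_and_push t f =
  let open Lwt.Infix in
  (* no bind/yield point between reading the old waiter and installing the fresh task *)
  let th, wakener = Lwt.task () in
  let th' = t.change_and_push_waiter in
  t.change_and_push_waiter <- Some th;
  (match th' with None -> Lwt.return_unit | Some th' -> th') >>= fun () ->
  (* ... set up the batched view, run [f], commit and push ... *)
  f t >>= fun res ->
  (* wake up exactly the one change_and_push chained behind us *)
  Lwt.wakeup_later wakener ();
  Lwt.return res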

WDYT, where are the potential races?

reynir added 1 commit 2024-10-29 09:10:44 +00:00
Author
Owner

I added a comment in the code to clarify this. It may not be evident from the code that there must not be any yield point there.

An observation I made is that we can drop the option from change_and_push_waiter and initialize it to Lwt.return_unit. The code may become a bit simpler (we drop a match and the Some _ constructors), but I fear it may be less easy to understand. WDYT?

Owner

I tried to write a test with the three tasks in mind, and there's something wonky going on:

let multiple_change_and_push () =
  match
    let* (tmpdir, pid) = empty_repo () in
    Fun.protect ~finally:(fun () -> kill_git pid) (fun () ->
    Lwt_main.run
      (
        Git_unix.ctx (Happy_eyeballs_lwt.create ()) >>= fun ctx ->
        Git_kv.connect ctx ("git://localhost:9419/" ^ tmpdir) >>= fun t ->
        let wait = Lwt_mvar.create_empty () in
        let task_c () =
          Store.change_and_push t (fun t''' ->
              print_endline "running 3";
              print_endline "running 3 - now get";
              Store.get t''' (Mirage_kv.Key.v "/foo") >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure reading foo in third change_and_push";
              Alcotest.(check string) "Store.get foo" "value 2" (Result.get_ok r);
              print_endline "running 3 - now set";
              Store.set t''' (Mirage_kv.Key.v "/foo") "value 3" >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure writing foo in third change_and_push";
              print_endline "running 3 - now get again";
              Store.get t''' (Mirage_kv.Key.v "/foo") >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure reading foo in third change_and_push adter the write";
              Alcotest.(check string) "Store.get foo" "value 3" (Result.get_ok r);
              print_endline "running 3 - now finished";
              Lwt.return_unit) >>= fun r ->
          if Result.is_error r then Alcotest.fail "failure second change_and_push";
          Lwt_mvar.put wait ()
        in
        let task_b () =
          Store.change_and_push t (fun t'' ->
              print_endline "running 2";
              Lwt.async task_c;
              print_endline "running 2 - now get";
              Store.get t'' (Mirage_kv.Key.v "/foo") >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure reading foo in second change_and_push";
              Alcotest.(check string) "Store.get foo" "value" (Result.get_ok r);
              print_endline "running 2 - now set";
              Store.set t'' (Mirage_kv.Key.v "/foo") "value 2" >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure writing foo in second change_and_push";
              print_endline "running 2 - now get again";
              Store.get t'' (Mirage_kv.Key.v "/foo") >>= fun r ->
              if Result.is_error r then Alcotest.fail "failure reading foo in second change_and_push adter the write";
              Alcotest.(check string) "Store.get foo" "value 2" (Result.get_ok r);
              print_endline "running 2 - finished";
              Lwt.return_unit) >|= fun r ->
          if Result.is_error r then Alcotest.fail "failure second change_and_push"
        in
        let task_a () =
          Store.change_and_push t (fun t' ->
               print_endline "running 1";
               Lwt.async task_b;
               print_endline "running 1 - now set";
               Store.set t' (Mirage_kv.Key.v "/foo") "value" >>= fun r ->
               if Result.is_error r then Alcotest.fail "failure writing foo in first change_and_push";
               print_endline "running 1 - now get";
               Store.get t' (Mirage_kv.Key.v "/foo") >>= fun r ->
               if Result.is_error r then Alcotest.fail "failure reading foo in first change_and_push, after the write";
               Alcotest.(check string) "Store.get foo" "value" (Result.get_ok r);
               print_endline "running 1 - finished";
               Lwt.return_unit) >|= fun r ->
          if Result.is_error r then Alcotest.fail "failure first change_and_push"
        in
        task_a () >>= fun () ->
        Lwt_mvar.take wait >>= fun () ->
        Store.get t (Mirage_kv.Key.v "/foo") >|= fun r ->
        if Result.is_error r then Alcotest.fail "failure reading outside foo";
        Alcotest.(check string) "Store.get bar" "value 3" (Result.get_ok r)));
    Ok ()
  with
  | Ok () -> ()
  | Error `Msg msg ->
    print_endline ("got an error from bos: " ^ msg)

output:

running 1
running 1 - now set
running 1 - now get
ASSERT Store.get foo
running 1 - finished
running 2
running 2 - now get
running 3
running 3 - now get
ASSERT Store.get foo
running 2 - now set
ASSERT Store.get foo
Fatal error: exception Alcotest assertion failure
File "test/tests.ml", line 198, character 14:
FAIL Store.get foo

   Expected: `"value 2"'
   Received: `"value"'


where line 198 is the first Alcotest check in task_c.

Owner

Once task A finishes change_and_push, it wakes up task B.

If there's a task C coming along, doing change_and_push, it will set t.change_and_push_waiter to a new task, and wait for the task B to wake up.

I think that if a task B and a task C are both waiting for A to finish, both will become unblocked. Even if cooperation ensures that B runs before C or that C runs before B, both will still change t.change_and_push_waiter. My idea relates more to something like:

let* () = Lwt_mutex.with_lock  t.mutex @@ fun () ->
  let rec await () =
    if t.change_and_push_running
    then Lwt_condition.wait ~mutex:t.mutex t.condition >>= await
    else begin t.change_and_push_running <- true; Lwt.return_unit end in
  await () in
...
let* () = Lwt_mutex.with_lock t.mutex @@ fun () ->
  t.change_and_push_running <- false;
  Lwt_condition.signal t.condition ();
  Lwt.return_unit in
Lwt.return res
Author
Owner

Hmm. Would you mind pushing that test?

hannes added 1 commit 2024-10-29 09:44:52 +00:00
Owner

I pushed the test, and found a fix... we were in the teardown of the change_and_push unconditionally setting t.change_and_push_waiter to None -- I now use (physical!) equality of th == t.change_and_push_waiter to guard that assignment.
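
For reference, a sketch of what that guard might look like (names as in this discussion; the actual code may differ):

(* only clear the waiter if it is still the task we registered ourselves *)
(match t.change_and_push_waiter with
 | Some th' when th' == th -> t.change_and_push_waiter <- None
 | _ -> ())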

Et voila, I think this is fine now.

Owner

both will still change t.change_and_push_waiter

But that is fine, no? There's no yield point (as far as I understand, but then I barely understand Lwt) between function entry and the setting of change_and_push_waiter!?

Author
Owner

I pushed the test, and found a fix... we were in the teardown of the change_and_push unconditionally setting t.change_and_push_waiter to None -- I now use (physical!) equality of th == t.change_and_push_waiter to guard that assignment.

Et voila, I think this is fine now.

Actually, it seems we can remove those three lines!

reynir added 1 commit 2024-10-29 09:51:28 +00:00
The waiter will be updated by new callers of `change_and_push`.
Owner

I pushed the test, and found a fix... we were in the teardown of the change_and_push unconditionally setting t.change_and_push_waiter to None -- I now use (physical!) equality of th == t.change_and_push_waiter to guard that assignment.

Et voila, I think this is fine now.

Actually, it seems we can remove those three lines!

No, we can't. With those lines removed, when change_and_push finishes and some other change_and_push is entered at some later time, the old th is still in change_and_push_waiter.

EDIT: sorry, force-pushed with a comment. If you're convinced that these three lines can be removed, I'd be happy to first see a test...

hannes force-pushed batch-test from 51e30538f6 to 1327cc4f94 2024-10-29 09:53:56 +00:00 Compare
dinosaure added 2 commits 2024-10-29 10:10:13 +00:00
reynir reviewed 2024-10-29 10:19:16 +00:00
src/git_kv.ml Outdated
@@ -13,0 +11,4 @@
; mutable committed : Digestif.SHA1.t option
; mutable change_and_push_running : bool
; condition : unit Lwt_condition.t
; mutex : Lwt_mutex.t
Author
Owner

Could this be done with the mutex only instead of a bool, mutex and a condition?

Owner

Indeed, we can acquire the mutex at the beginning of change_and_push -- which will wait (block) until the mutex is unused.

Owner

yes, basically protecting the whole change_and_push with a mutex will solve all of our issues 😄
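
A minimal sketch of the mutex-only approach -- the change_and_push_lock field name is made up; the point is only the Lwt_mutex.with_lock wrapper:

let change_and_push t f =
  (* whole invocations are serialized; a caller blocks until the lock is free *)
  Lwt_mutex.with_lock t.change_and_push_lock (fun () ->
      (* ... existing body: build the batched view, run [f], commit and push ... *)
      f t)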

hannes added 1 commit 2024-10-29 10:52:07 +00:00
dinosaure approved these changes 2024-10-29 10:59:41 +00:00
hannes approved these changes 2024-10-29 11:14:11 +00:00
hannes left a comment
Owner

LGTM

reynir merged commit e2295fe0b3 into main 2024-10-29 11:21:16 +00:00
reynir deleted branch batch-test 2024-10-29 11:21:18 +00:00