Likely deserves a comment about why we check committed.
I may be slow - what do you mean by "don't have any yield points inside op"? We do have various yield points in there...
we can. I'm not sure what we should do with the comment, to be honest.
Indeed, we can acquire the mutex at the beginning of change_and_push -- which will wait (block) until the mutex is unused.
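A minimal sketch of that approach, assuming t carries an Lwt_mutex.t (the lock field name and the t type below are made up for the sketch, not the real ones):

```ocaml
(* Sketch only: serialize change_and_push via a per-repo mutex.
   The [lock] field and this [t] type are assumptions. *)
type t = { lock : Lwt_mutex.t (* ... other fields elided *) }

let change_and_push t f =
  (* Blocks here until the mutex is free, so only one body runs at a time. *)
  Lwt_mutex.with_lock t.lock (fun () -> f t)
```

One nice property: Lwt_mutex.with_lock releases the mutex whether f's promise resolves or fails, which also covers the teardown path.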
I pushed the test, and found a fix... in the teardown of change_and_push we were unconditionally setting t.change_and_push_waiter to None -- I now use (physical!) equality of th ==…
both will still change t.change_and_push_waiter
But that is fine, no? There's no yield point (as far as I understand, but then I barely understand Lwt) between function entry and the setting…
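A small self-contained illustration of that point (Lwt.pause used as an explicit yield; nothing here is from the real code): everything before the first bind runs synchronously when the function is called.

```ocaml
let r = ref 0

let f () =
  r := 1;  (* executed synchronously, at the moment f () is called *)
  Lwt.bind (Lwt.pause ()) (fun () ->
      (* only reached once the scheduler runs again, i.e. after the yield *)
      r := 2;
      Lwt.return_unit)
```

So an assignment placed between function entry and the first bind cannot be interleaved with another Lwt task.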
no, we need to unpack the inner result -- which is a (unit, _) result.
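Unpacking that nested result could look like this (the unpack name and error shapes are hypothetical):

```ocaml
(* Flatten a nested result: the outer layer from the surrounding
   operation, the inner (unit, _) result from the push itself. *)
let unpack : ((unit, 'e) result, 'e) result -> (unit, 'e) result = function
  | Error e -> Error e          (* outer operation failed *)
  | Ok (Error e) -> Error e     (* inner (unit, _) result is an error *)
  | Ok (Ok ()) -> Ok ()
```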
I tried to write a test with the three tasks in mind, and there's something wonky going on:
    let multiple_change_and_push () =
      match
        let* (tmpdir, pid) = empty_repo () in
        …
I'm not sure about the serialization of change_and_push, and I think, after our old th continues, we should ensure the exclusivity of a task to set t.change_and_push_waiter and recheck…
We don't have a commit in this case (we're in a change_and_push), so let's use the current timestamp. This is good enough for our definition of last_modified (which is the last commit).
This change requires that the head didn't mutate since we started the change_and_push. We could also do a rebase or merge, but we weren't able to find merge/rebase code.
The reason for this change is: we may have one change_and_push that is active, and when multiple other change_and_push calls are then queued, each needs to wait for its turn.
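The intended behaviour can be checked with a toy version: three tasks contend for one Lwt_mutex, and none of their critical sections interleave (all names below are made up for the demo):

```ocaml
let events : (string * [ `Start | `End ]) list ref = ref []

(* Toy change_and_push body: record entry, yield once, record exit.
   With the mutex held across the yield, the Start/End pairs of
   different tasks can never interleave. *)
let task m name =
  Lwt_mutex.with_lock m (fun () ->
      events := (name, `Start) :: !events;
      Lwt.bind (Lwt.pause ()) (fun () ->
          events := (name, `End) :: !events;
          Lwt.return_unit))
```

Each queued task only enters its critical section once the previous holder has released the mutex.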