NHacker Next
Async Mutexes (matklad.github.io)
pwnna 22 days ago [-]
For single-threaded, cooperative multitasking systems (such as JavaScript and what OP is discussing), async mutexes[1] IMO are a strong anti pattern and red flag in the code. For this kind of code, every time you execute code it's always "atomic" until you call an await and effectively yield to the event loop. Programming this properly simply requires making sure the state variables are consistent before yielding. You can also reconstruct the state at the beginning of your block, knowing that nothing else can interrupt your code. Both of these approaches are documented in the OP.

Throwing in an async mutex to fix the lack of atomicity before yielding is basically telling me "I don't know when I'm calling await in this code, so I might as well give up". In my experience this is strongly correlated with the original designer not knowing what they are doing, especially in languages like JavaScript. Even if they did understand the problem, this can introduce difficult-to-debug bugs and deadlocks that would otherwise not appear. You also introduce an event-queue scheduling delay, which can be substantial depending on how often you're locking and unlocking.

IMO this stuff is best avoided and you should just write your cooperative multitasking code properly, but this does require a bit more advanced knowledge (not that advanced, but maybe for the JS community). I wish TypeScript would help people out here, but it doesn't. Calling an async function (or even a normal function) does not invalidate type narrowing done on escaped variables, probably for usability reasons, but that is actually the wrong thing to do.

[1]: https://www.npmjs.com/package/async-mutex

never_inline 22 days ago [-]
Here's a use case - singleton instantiation on first request, where instantiation itself requires an async call (eg: to DB or external service).

    import asyncio

    _lock = asyncio.Lock()
    _instance = None

    async def get_singleton():
      global _instance
      if _instance is not None:
        return _instance
      async with _lock:
        if _instance is None:
          _instance = await costly_function()
      return _instance
How do you suggest replacing this?
duped 21 days ago [-]
The traditional thing would be to have an init() function that is required to be called at the top of main() or before any other methods that need it. But I agree with your point.
never_inline 21 days ago [-]
Now let's say it's an async cache instead of a singleton.
duped 20 days ago [-]
Return the cached item if it exists else spawn a task to update it (doing nothing if the task has already been spawned), await the task and return the cached item.
never_inline 20 days ago [-]
Thanks, that's a useful trick.
michaelsbradley 22 days ago [-]
Also:

Developers should properly learn the difference between push and pull reactivity, and leverage both appropriately.

Many, though not all, problems where an async mutex might be applied can instead be simplified with use of an async queue and the accompanying serialization of reactions (pull reactivity).

zeroq 22 days ago [-]
It's kids taking toys off the shelf and playing with words. Gives me a strong Angular vibe with its misuse of "dependency injection".
paulddraper 22 days ago [-]
How did Angular misuse DI?
fjwufajsd 22 days ago [-]
I'm not familiar with AngularJS so I did a quick google: https://angular.dev/guide/di#where-can-inject-be-used

It looks eerily similar to the Spring DI framework, yikes.

didibus 22 days ago [-]
I'm not fully able to follow what the article is trying to say.

I got especially confused at the actor part. Aren't actors share-nothing by design?

It does say there's only 1 actor, so I guess it has nothing to do with actors? They're talking about GenServer-like behavior within a single actor.

As I write this, I'm figuring out maybe they mean it's like a single actor GenServer sending messages to itself, where each message will update some global state. Even that though, actors don't yield inside callbacks, if you send a message it gets queued to run but won't suspend the currently running callback, so the current method will run to completion and then the next will run.

Erlang actors do single-threaded actors with serialized message processing. If I understand the article, that avoids the issue it brings up completely as you cannot have captured old state that is stale when resuming.

rdtsc 22 days ago [-]
Every complex enough distributed system will eventually implement Erlang, but in an ad-hoc and not very good way. Quite a lot of systems ended up with actors: TigerBeetle in Zig, FoundationDB in C++. But they miss a critical aspect of it, and that's isolated memory between processes. And they don't know how to yield unless they do it on IO or via cooperative yielding. The global shared heap is really the dangerous part, though. Rust could avoid the danger by asserting some guarantees at compile time.
hawk_ 22 days ago [-]
I can't imagine those examples you picked implemented in Erlang performing anywhere close to the Zig/C++ ones. So the "ad-hoc subset" there is by design.
rdtsc 22 days ago [-]
It is ad-hoc because then they write blog posts wondering “how come we still need mutexes, we thought async operations don’t need them”.
James_K 22 days ago [-]
They have realised that, in single-threaded code, an implicit mutex is created between each call to await, and therefore state variables can usually be organised in such a way to avoid explicit mutexes. Of course, one strongly suspects that such code will involve a lot of boolean variables with names such as “exclude_other_accesses_to_data”.
Ygg2 22 days ago [-]
> I got especially confused at the actor part, aren't actors share nothing by design?

Me as well. I thought actors are a bit like mailboxes that process and send their messages.

rdtsc 22 days ago [-]
It’s half-baked actors. Well not baked at all (“no-bake actors”?) as isolated heaps and shared nothing is not easy to accomplish. OS processes can do it but they can be heavyweight. BEAM VM does a great job of it, though.
surajrmal 22 days ago [-]
Share-nothing serialization is often too coarse-grained for some situations. Using a lock or another shared-memory primitive (RCU, hazard pointers, etc.) can lead to better performance due to the smaller granularity of data protected and the lower latency needed to access the data. Optimizing for latency is very different from optimizing for throughput.
bheadmaster 22 days ago [-]
Heh, I'm not the only one who always wrote his own async mutex for any non-trivial async codebase.

It may seem like single-threaded execution doesn't need mutexes, but it is a fact that async/await is just another implementation of userspace threads. And like all threads, it may lead to data races if you have yield points (await) inside a critical section.

People may say what they want about bad design and reactive programming, but the thread model is inherently easier to reason about.

surajrmal 22 days ago [-]
Yeah, if you reason about each task as a thread, then all the same problems with sharing data between threads applies to sharing data between tasks, regardless of single threaded runtime or not. The only real difference is you can avoid atomics. Generally, I advocate for avoiding sharing any non-trivial state which would necessitate holding a "lock" across an await point (there are rust lints to help you here if you use RefCell). This usually requires an actor pattern to explicitly queue up work that requires that state into its own self contained task rather than reach for an async mutex.
halayli 22 days ago [-]
That's not the right abstraction. What the OP needs is a barrier/latch.
mrkeen 22 days ago [-]
I don't get this:

  the entire Compaction needs to be wrapped in a mutex, and every callback needs to start with lock/unlock pair ... So explicit locking probably gravitates towards having just a single global lock around the entire state, which is acquired for the duration of any callback.
This is just dining philosophers.

When I compact, I take read locks on victims A and B while I use them to produce C. So callers can still query A and B during compaction. Once C is produced, I atomically add C and remove A and B, so no caller should see double or missing records.

user28712094 22 days ago [-]
Curious how to async-walk a tree that may or may not be under edit operations, without a mutex.
surajrmal 22 days ago [-]
There are RCU safe trees like the maple tree in Linux: https://docs.kernel.org/core-api/maple_tree.html

The tl;dr is that it uses garbage collection to let readers see the older version of the tree they were walking while the latest copy still gets updated.

I also read a paper about concurrent interval skip lists the other day, which was interesting.

gsf_emergency_4 22 days ago [-]
So..not even wait-free? (Since you still have locks)

https://en.wikipedia.org/wiki/Non-blocking_algorithm#Obstruc...
