> If we extend it further with a fairly simple distributed mutex mechanism, we can now persist and share state across any service which can access the Redis instance!
I’m curious to hear more about the approach you took here. Does the first server to open the document hold the mutex, or do servers only hold the mutex briefly while they persist data?
(I’ll also shamelessly plug Y-Sweet, an open source Yjs server with persistence that I contribute to: https://github.com/jamsocket/y-sweet)
> or do servers only hold the mutex briefly while they persist data?
This: it's only held during document updates, which is itself an operation we debounce to avoid unnecessarily hammering the DB. Eventual consistency is our friend here, so this isn't a high-frequency requirement (as I'm sure you're aware!).
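For anyone curious, a minimal sketch of the shape of that lock-then-persist step (assuming ioredis and Yjs; the key names and the saveToDatabase call are illustrative, not our exact code):

    import { randomUUID } from "crypto";
    import Redis from "ioredis";
    import * as Y from "yjs";

    const redis = new Redis();

    // Called from the debounced update handler, so it only fires occasionally.
    async function persistWithLock(docId: string, doc: Y.Doc): Promise<boolean> {
      const lockKey = `ydoc-lock:${docId}`;
      const token = randomUUID();

      // NX: only set if absent; PX: auto-expire so a crashed writer can't wedge the doc.
      const acquired = await redis.set(lockKey, token, "PX", 5000, "NX");
      if (acquired !== "OK") return false; // another server is persisting right now

      try {
        await saveToDatabase(docId, Y.encodeStateAsUpdate(doc));
      } finally {
        // Release only if we still hold the lock (atomic compare-and-delete).
        await redis.eval(
          'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end',
          1,
          lockKey,
          token
        );
      }
      return true;
    }

    // Stand-in for whatever storage layer actually holds the document.
    async function saveToDatabase(docId: string, update: Uint8Array): Promise<void> {
      /* ... */
    }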
If you can nail the "sync-on-close" mechanism in browsers, then you're golden :-)
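For concreteness, "sync-on-close" here just means flushing any pending update when the tab goes away. A rough sketch, with a made-up endpoint rather than a prescription:

    import * as Y from "yjs";

    // Flush pending state when the tab is backgrounded or closed.
    // "pagehide" is generally more reliable than "unload", especially on mobile.
    function setupSyncOnClose(doc: Y.Doc, docId: string) {
      window.addEventListener("pagehide", () => {
        const update = Y.encodeStateAsUpdate(doc);
        // keepalive lets the request outlive the page; the /sync route is illustrative.
        fetch(`/sync/${docId}`, { method: "POST", body: update, keepalive: true });
      });
    }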
How do you handle it within y-sweet (cool project, BTW!)?
paulgb 2 days ago
Makes sense, thanks!
We run Y-Sweet in Plane.dev (also our open source project). Plane runs a process for every document across a pool of compute, so each process effectively has a lock on that document’s data and can persist on a loop without worrying about conflicting with another writer.
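For contrast with the lock approach, that pattern boils down to something like this (illustrative names, not the actual Y-Sweet internals):

    import * as Y from "yjs";

    // One process owns one document, so it can persist on a timer with no lock at all.
    function startPersistLoop(doc: Y.Doc, save: (update: Uint8Array) => Promise<void>) {
      let dirty = false;
      doc.on("update", () => { dirty = true; });

      setInterval(async () => {
        if (!dirty) return;
        dirty = false;
        await save(Y.encodeStateAsUpdate(doc)); // sole writer, no coordination needed
      }, 5000);
    }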
thoughtlede 2 days ago
> Beyond this, if you want to determine causality, e.g. whether events are "causally related" (happened before or after each other) or are "concurrent" (entirely independent of), you can look at Vector Clocks—I won't go down that rabbit-hole here, though.
If anyone wants to go down that rabbit hole: https://www.exhypothesi.com/clocks-and-causality/
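For anyone who just wants the gist of "causally related" vs "concurrent": with vector clocks, one event happened before another iff its clock is less than or equal to the other's in every component and strictly less in at least one; otherwise the two are concurrent. A tiny sketch, independent of any particular library:

    type VectorClock = Record<string, number>; // replica id -> counter

    // a happened before b iff every component of a is <= b and at least one is strictly less
    function happenedBefore(a: VectorClock, b: VectorClock): boolean {
      const ids = new Set([...Object.keys(a), ...Object.keys(b)]);
      let strictlyLess = false;
      for (const id of ids) {
        const av = a[id] ?? 0;
        const bv = b[id] ?? 0;
        if (av > bv) return false;
        if (av < bv) strictlyLess = true;
      }
      return strictlyLess;
    }

    // concurrent = neither happened before the other
    function concurrent(a: VectorClock, b: VectorClock): boolean {
      return !happenedBefore(a, b) && !happenedBefore(b, a);
    }

    // e.g. {A: 2, B: 1} and {A: 1, B: 2} are concurrent; {A: 1} happened before {A: 2, B: 1}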
I'll add Loro[0] to the author's list. While I utilise Yjs heavily for another project, Loro is fairly featureful and so I picked it to build a screenplay editor[1], which requires things like Peritext or tree structures. It's fairly young, though.
I'll also commend the author's attempt at DIY! Even if your case does not require a custom solution, it's healthy to understand how your tools work.
[0] https://loro.dev
[1] https://www.weedonandscott.com/tech/project-realm
We're probably one of the first to make a real tech bet on Loro. We inched our way into it, and plus or minus some edge cases, it is going very well so far.
Even on the edge cases, most of it just relates to what primitives are exposed in the API, and we've found the library's author to be highly engaged in creating solutions.
We've found it to be an incredibly well designed library.
sambigeara 2 days ago
Hadn't happened across Loro, thanks for sharing.