Really nice to see this, I wrote this comment almost 2 years ago when I was a little miffed about trying to use litestream and litefs: https://news.ycombinator.com/item?id=37614193
I think this solves most of the issues? You can now freely run litestream on your DB and not worry about issues with multiple writers? I wonder how the handoff is handled.
The read replica FUSE layer sounds like a real nice thing to have.
edit: Ah, it works like this: https://github.com/benbjohnson/litestream/pull/617
> When another Litestream process starts up and sees an existing lease, it will continually retry the lease acquisition every second until it succeeds. This low retry interval allows for rolling restarts to come online quickly.
Sounds workable!
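For intuition, the lease flow in that quote can be sketched as a retry loop against a store that supports conditional writes. This is a toy model, not Litestream's actual code — the store API and every name below are made up:

```python
import time


class LeaseHeld(Exception):
    """Raised when another process still holds the lease."""


class FakeLeaseStore:
    """Toy stand-in for an object store with conditional writes."""

    def __init__(self):
        self.owner = None

    def try_acquire(self, owner):
        # A real implementation would use a conditional PUT
        # (e.g. If-None-Match: *) so only one writer can win.
        if self.owner is not None and self.owner != owner:
            raise LeaseHeld()
        self.owner = owner

    def release(self):
        self.owner = None


def acquire_lease(store, owner, retry_interval=1.0, max_attempts=None):
    """Retry acquisition until it succeeds, mirroring the 1-second
    retry interval described in the post. With max_attempts=None this
    loops forever, which is the 'continually retry' behavior."""
    attempts = 0
    while True:
        try:
            store.try_acquire(owner)
            return
        except LeaseHeld:
            attempts += 1
            if max_attempts is not None and attempts >= max_attempts:
                raise
            time.sleep(retry_interval)
```

During a rolling restart, the new process spins in `acquire_lease` until the old process calls `release` (or its lease expires), at which point the next retry succeeds.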
simonw 19 hours ago [-]
This post is like they read my mind and implemented everything I wanted from a new Litestream. So exciting.
thewisenerd 17 hours ago [-]
ben, thanks for litestream!
we're using it in production for a write-heavy internal use-case (~12GB compressed) for more than a year now; and it's costing us a couple hundred pennies per month (azure).
excited to try the new changes when they land.
Skinney 5 hours ago [-]
If I’m deploying a new version of my app, the typical managed solution will spawn a new server instance with that new version, and once a health check has succeeded a couple of times it will reroute traffic to this new instance and kill the old one.
Previously this would be problematic, as the new instance might miss changes made by the old server. Is this fixed by these new changes?
maxmcd 21 minutes ago [-]
I don't think this is trivially fixed since you can still only have one writer that is holding the lease.
Your new service will come up, but it won't be able to get the write lease until the previous server shuts down. Now you have tools to detect this, stop one writer, and start the other, but the service will likely have to experience some kind of requests queueing or downtime.
bradgessler 17 hours ago [-]
I wish Fly would polish the developer experience on top of SQLite. They're close, but it's missing:
1. A built-in UI and CLI that manages SQLite from a volume. Getting the initial database on a Fly Machine requires more work than it should.
2. `fly console` doesn't work with SQLite because it spins up a separate machine, which isn't connected to the same volume where the SQLite data resides. Instead you have to know to run `fly ssh console --pty`, which effectively SSHes into the machine with the database.
The problem in general with SQLite web apps is they tend to be small apps, so you need a lot of them to make a decent amount of money hosting them.
adenta 14 hours ago [-]
Brad, what’s your take on Rails 8 w/ SQLite? Are you gravitating towards it these days over Postgres?
bradgessler 12 hours ago [-]
Yep! I just migrated a Fly PG cluster database to SQLite because I over-provisioned DB resources and got tired of dealing with the occasional node crashing.
TBH I wish they had their managed PG cluster running because it would have made it easier to downsize, but I’m happy with SQLite.
I used SQLite for another project that I knew was going to max out at 100 concurrent users and it worked great. The best moment was when a user reported a production error I couldn’t recreate locally, so I downloaded the database and recreated it with the latest production data on my laptop. You couldn’t do that with a high-compliance app, but that’s not most apps.
I’m hesitant to outright say “SQLite and Rails is great” because you have to know your app will run on one node. If you know that then it’s fantastic.
jasonthorsness 19 hours ago [-]
What a coincidence, I was just researching Litestream today! I use Sqlite on my VPS and was thinking about adding this.
Am I understanding correctly that I will be able to restore a database to any point-in-time that is while the litestream process is running? Because auto-checkpointing could consume the WAL while it isn't running?
So for an extreme example if the process crashed for an hour between 2:00 and 3:00, I could restore to 1:55 or 3:05 but the information required to restore between 2:00 and 3:00 is lost?
benbjohnson 18 hours ago [-]
Litestream saves WAL segments to a given time granularity. By default, it ships off WAL changes every second so you should be able to restore to any given second in your history (within your retention period).
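The checkpointing behavior behind this exchange is easy to observe directly with Python's built-in sqlite3 module: committed writes accumulate in the `-wal` file until a checkpoint folds them back into the main database file — which is why a replication process has to ship WAL segments before checkpoints consume them. A small demonstration:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")

# Committed writes land in demo.db-wal, not the main file.
with conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.executemany("INSERT INTO t (x) VALUES (?)",
                     [(i,) for i in range(1000)])

wal = path + "-wal"
size_before = os.path.getsize(wal)  # WAL now holds the recent frames

# A TRUNCATE checkpoint moves WAL frames into the main database
# and resets the WAL file to zero bytes.
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
size_after = os.path.getsize(wal)
conn.close()
```

Anything sitting in the WAL when a checkpoint runs is folded into the database; if no replicator captured those frames first, that fine-grained history is gone.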
dolmen 7 hours ago [-]
Do you have DST handling issues?
I'm asking because switching from winter time to summer time in Europe happened on March 30th with local time jumping from 2:00 to 3:00.
hobo_mark 17 hours ago [-]
If you wanted to use litestream to replicate many databases (ideally, one or more per user), which is one of the use cases described here (and elsewhere), how do you tell litestream to add new databases dynamically? The configuration file is static and I haven't found an API to tell it to track a new db at runtime.
mrkurt 17 hours ago [-]
I would expect this problem to get solved. It's tricky to detect new sqlites, but not impossible.
In the meantime, it's pretty straightforward to use as a library.
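Until dynamic tracking lands, one pragmatic workaround is to regenerate the config from a directory glob and restart (or signal) the process. A rough sketch — the `dbs`/`replicas` layout follows Litestream's documented YAML schema, but the function name and URL scheme are just illustrative:

```python
import glob
import os


def render_config(db_dir, replica_url_base):
    """Render a litestream.yml 'dbs' section with one entry per
    .db file found in db_dir, each replicating to its own prefix."""
    lines = ["dbs:"]
    for path in sorted(glob.glob(os.path.join(db_dir, "*.db"))):
        name = os.path.splitext(os.path.basename(path))[0]
        lines.append(f"  - path: {path}")
        lines.append("    replicas:")
        lines.append(f"      - url: {replica_url_base}/{name}")
    return "\n".join(lines) + "\n"
```

A watcher (inotify, a cron job, or the code path that provisions a new tenant) would call this and reload Litestream whenever a database appears.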
JSR_FDED 11 hours ago [-]
Will the new litestream work with object stores that don’t provide conditional writes?
rads 13 hours ago [-]
What will be required from users of the existing Litestream version to upgrade to the new one? Is it a matter of bumping the version when it comes out or is there more to it?
bambax 9 hours ago [-]
Very cool!
There may be a typo here:
> The most straightforward way around this problem is to make sure only one instance of Litestream can replication to a given destination.
Can replicate? Or can do replications?
mythz 12 hours ago [-]
Awesome stuff, this resolves my #1 feature request of being able to replicate an entire directory of SQLite *.db's from a single Litestream process - happy it's finally here.
Should make replicating multi-tenant, per-user SQLite databases a lot more appealing.
oulipo 6 hours ago [-]
I still don't really understand the real "advantages" of such an architecture over, say, a centralized Postgres server, which can process just as much data, no?
psanford 19 hours ago [-]
This looks great! A few years ago I wrote a sqlite vfs for using dynamodb as a backing store[0] called DonutDB. With the recent addition of CAS to S3, I was thinking about making a new version of DonutDB backed by S3. I'm really glad Litestream supports this so I don't have to!
I can't wait to try this out.
[0]: https://github.com/psanford/donutdb
Do you have a reference for this? I assume by CAS you mean content addressable storage? I googled but can't find any AWS docs on this.
xyzzy_plugh 13 hours ago [-]
Compare And Swap
gcr 13 hours ago [-]
The TL;DR is that Amazon S3 now supports "conditional writes" which are guaranteed to fail if the file was written by some other writer. This is implemented by sending the ETag of an object's expected version alongside the write request.
Litestream now depends on this functionality to handle multiple writers. Think of optimistic locking.
https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3...
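That contract is easy to model in a few lines. Below is a toy in-memory bucket mimicking the If-Match / If-None-Match semantics — the real mechanism is S3's conditional request headers; the class and method names here are invented:

```python
import hashlib


class PreconditionFailed(Exception):
    """Models S3's 412 Precondition Failed response."""


class ToyBucket:
    """In-memory model of conditional writes keyed on ETag."""

    def __init__(self):
        self.objects = {}  # key -> (etag, body)

    def put(self, key, body, if_match=None, if_none_match=None):
        current = self.objects.get(key)
        # If-None-Match: * means "only create, never overwrite".
        if if_none_match == "*" and current is not None:
            raise PreconditionFailed("object already exists")
        # If-Match: <etag> means "only overwrite the version I saw".
        if if_match is not None and (current is None or current[0] != if_match):
            raise PreconditionFailed("etag mismatch")
        etag = hashlib.md5(body).hexdigest()
        self.objects[key] = (etag, body)
        return etag
```

Two writers racing to create the same key: exactly one `put(..., if_none_match="*")` succeeds, and any writer holding a stale ETag loses its subsequent `if_match` update — the optimistic-locking behavior described above.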
Thanks both! In the contexts I work in CAS always means content-addressable-storage, my mistake.
malkia 16 hours ago [-]
So fossil (which is built on top of sqlite) + this = SCM?
hiAndrewQuinn 8 hours ago [-]
I'm still waiting for someone to make the GitHub for fossil. Bonus points if it's called Paleontology
diggan 6 hours ago [-]
Is that needed to make Fossil useful? Since it's basically GitHub but all in Git, it all works P2P without the need for a centralized service.
I guess for discovery it kind of makes sense, but wouldn't really be "GitHub for Fossil" but more like a search engine/portal.
wiradikusuma 9 hours ago [-]
For Fly.io employees here: Can I finally replace my Postgres with this, à la Cloudflare D1 (which is also SQLite based)?
neom 17 hours ago [-]
Is Litestream on a path to subsume LiteFS's capabilities? Re: PITR, would this be used to facilitate automated A/B testing of AI-generated code changes against live data subsets? I can imagine a lot of cool stuff in that direction. This is really cool Ben!
srameshc 19 hours ago [-]
I have been following Ben for a long time but I never knew LiteFS was based on his work. I somehow settled eventually on rqlite for a self-managed distributed setup.
https://github.com/rqlite/rqlite
https://youtu.be/8XbxQ1Epi5w?si=puJFLKoVs3OeYrhS
I don't think they're similar at all. LiteFS uses Consul to elect a leader for a single-write-leader multiple-replica configuration, the same way you'd do with Postgres. rqlite (as I understood it last time I looked) runs Raft directly; it gets quorums for every write.
One isn't better than the other. But LiteFS isn't a "distributed SQLite" in the sense you'd think of with rqlite. It's a system for getting read-only replicas, the same way you've been able to do with log shipping on n-tier databases for decades.
apitman 14 hours ago [-]
rqlite also requires you to use special client libraries, whereas litefs is transparent to the program.
ChocolateGod 8 hours ago [-]
> It will be able to fetch and cache pages directly from S3-compatible object storage.
Does this mean your SQLite database size is no longer restricted by your local disk capacity?
bdcravens 6 hours ago [-]
Looking at the LiteVFS repo, it appears so, with some limitations.
"LiteVFS is a Virtual Filesystem extension for SQLite that uses LiteFS Cloud as a backing store."
Limitations
- Databases with journal_mode=wal cannot be modified via LiteVFS (but can be read)
- Databases with auto-vacuum cannot be opened via LiteVFS at all
- VACUUM is not supported
https://github.com/superfly/litevfs
A SQLite database that supports read-replicas and can offload cold data to object storage would be super useful.
rawkode 19 hours ago [-]
Amazing to see and hear about the progress. Always a pleasure when Ben works on something and shares it. Keep it up!
j0e1 18 hours ago [-]
This is exciting! Especially glad that Litestream is still maintained. Is there a use-case for Litestream for more than backup? I am a fan of offline-first but it would be cool to have a way to synchronize on-device SQLite instances to a single central instance.
benbjohnson 18 hours ago [-]
Backups & read replicas are the primary use cases. If you're interested in local-first, you can check out projects like cr-sqlite[1].
[1]: https://github.com/vlcn-io/cr-sqlite
Can this be done with only Litestream, or is LiteVFS still in development? I looked into this last year but was put off by LiteFS's stated write performance penalty due to FUSE [1]; it's still marked as WIP [2] and hasn't seen updates for over a year.
[1] https://fly.io/docs/litefs/faq/#what-are-the-tradeoffs-of-us...
[2] https://github.com/superfly/litevfs
Fantastic to see it's getting updated! I am a big fan of litestream, have been using it for a while together with pocketbase. It's like a cheat code for a cheap, reliable and safe backend.
fra 19 hours ago [-]
Litestream has seen very little development lately and I was worried it was dead. Very glad to see Ben Johnson is continuing to push the project forward with some exciting new plans.
noroot 7 hours ago [-]
I love the idea of litestream and litefs and do use it for some smaller projects, but have also been worried it was abandoned. The line is quite thin between "done" and "not maintained".
There clearly still is some untapped potential in this space, so I am glad benbjohnson is exploring and developing these solutions.
Great that the new release will offer the ability to replicate multiple database files.
> Modern object stores like S3 and Tigris solve this problem for us: they now offer conditional write support
I hope this won't be a hard requirement, since some S3-compatible storage providers don't have this feature (yet). I also use the SFTP storage option currently.
avtar 14 hours ago [-]
That's the conclusion I reached a couple months ago when I was evaluating similar tools. The last Litestream release was issued in 2023 and the official Docker image is over a year old. In the end it seemed like a safer bet to accept some inconvenient tradeoffs and just create backups more frequently.
tptacek 13 hours ago [-]
Ben also wrote BoltDB, which was untouched (archived, even) for years despite a thriving community. Sometimes things are just done!
Zekio 8 hours ago [-]
seems to have active commits from 2 weeks ago, just not on the main branch
caleblloyd 18 hours ago [-]
Is the backend pluggable? Could it be configured to write to any key value store with support for optimistic concurrency control?
benbjohnson 18 hours ago [-]
We don't support plug-ins at the moment, but there are already several backends (S3, Azure Blob Storage, Google Cloud Storage, SFTP, etc.)
m3sta 14 hours ago [-]
Is there anything like Litestream that can be just pip installed?
yowmamasita 16 hours ago [-]
tangent: in modern SQLite, are writes still serialized? That's my main concern when choosing a tech stack for an app that might have thousands of writes happening on peak periods
simonw 15 hours ago [-]
Yes they are, but if you benchmark thousands of writes a second you'll likely find that SQLite does just fine.
You might start running into problems at tens or hundreds of thousands of writes a second, though even then you may be OK on the right hardware.
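If you want to check this against your own workload, a minimal benchmark with Python's built-in sqlite3 is only a few lines. Batching rows into transactions matters far more than raw statement count; results vary wildly by hardware and settings, so treat this as a harness rather than a claim about any particular throughput:

```python
import os
import sqlite3
import tempfile
import time


def bench_inserts(n=10_000, batch=1_000):
    """Insert n rows in transactions of `batch` rows each and
    return the observed inserts-per-second rate."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    # Common production settings for write-heavy SQLite.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")

    start = time.perf_counter()
    for _ in range(0, n, batch):
        with conn:  # one transaction per batch; per-row commits are far slower
            conn.executemany(
                "INSERT INTO events (body) VALUES (?)",
                [("x" * 32,) for _ in range(batch)],
            )
    elapsed = time.perf_counter() - start
    conn.close()
    return n / elapsed
```

Note that writes are still serialized through a single writer; the benchmark measures how fast that single writer can go, which for batched inserts is usually far beyond "thousands per second" on ordinary hardware.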
Will the revamped Litestream have a solution for ACKing only when transactions have durably committed to storage?
ignoramous 18 hours ago [-]
> We have a sneaking suspicion that the robots that write LLM code are going to like SQLite too. We think what coding agents like Phoenix.new want is a way to try out code on live data, screw it up, and then rollback both the code and the state.
Prescient.
Agents would of course work well if they can go back in time to checkpoints and branch from there, exploring solutions in parallel as needed.
Anyone who has experience with building workflows (Amazon SWF, Temporal, and the like) knows how difficult it is to maintain determinism in the face of retries & re-drives in multi-tier setups (especially those involving databases).
Replit recently announced their Agent's integration with Neon's time-travel feature [0] for exactly the purpose outlined in TFA. Unlike Fly.io, though, Replit is built on GCP and other third-party providers like Neon, and it's unclear whether GCP & Databricks won't go all Oracle on them.
[0] https://blog.replit.com/safe-vibe-coding
Is there a migration guide from stable to the branch 0.5? I’m running Litestream as a Docker sidecar alongside my Python app container and it’s been great and a nice comfort knowing my SQLite db is backed up to S3.
nico 19 hours ago [-]
Very cool idea, I wonder if that works better than their Postgres instances
Recently, I deployed a little side project using a small postgres vm on fly.io
After a couple of days, with only about 500kb of data stored in that db, the postgres vm went into an unrecoverable fail loop: it ran out of memory, restarted, then immediately ran out of memory again, and so on.
It took about 3-4hrs to recover the data, jumping through a lot of hoops to access it, copy it to another volume, and finally download it.
I would've reached out to support, but it seems like the only option available is just posting on their forum. I saw a couple of related posts, all with unsatisfactory answers unfortunately.
To be fair, it was incredibly easy to get up and running with them. On the other hand, almost all the time I saved by that quick start was wasted recovering the failing db, all the while my site was down.
Ironically, I originally developed the project using sqlite, but then switched to postgres to deploy
tptacek 18 hours ago [-]
This post has nothing to do with Fly.io's platform offerings. Litestream is completely uncoupled from Fly.io. Ben started it before he got here.
sosodev 16 hours ago [-]
Clearly it does have something to do with fly.io considering fly is and has been pushing for litefs/stream as the ideal database solution for fly users. It seems reasonable that readers would compare it to other fly offerings.
tptacek 16 hours ago [-]
We have... never done that? Like ever? LiteFS is interesting for some read-heavy use cases, especially for people who are doing especially edge-deployed things, but most people who use databases here use Postgres. We actually had a managed LiteFS product --- LiteFS Cloud --- and we sunset it, like over a year ago. We have a large team working on Managed Postgres. We do not have a big SQLite team.
People sometimes have a hard time with the idea that we write about things because they are interesting to us, and for no other reason. That's also 60-70% of why Ben does what he does on Litestream.
mixmastamyk 39 minutes ago [-]
What happened to the supabase integration? Seems to have fizzled as well.
sosodev 16 hours ago [-]
I’m sorry. I think that I, and probably others, have misinterpreted it. Between Ben’s writings on the fly blog and litefs cloud it seemed like that was the case. I didn’t realize it had been discontinued.
tptacek 16 hours ago [-]
Neither LiteFS nor Litestream (obviously) have been discontinued. They're both open source projects, and were both carefully designed not to depend on Fly.io to work.
norman784 5 hours ago [-]
That's one of the reasons I don't use their Postgres instances and instead go with a provider offering a dedicated database service, but for deploying backend apps it's pretty good.
yellow_lead 18 hours ago [-]
It's strange to me that they still haven't offered a managed Postgres product. Other providers like Render or even Heroku seem to have realized that this is a core part of PaaS that customers want. Instead they focused on GPUs and LiteStream. When I evaluated different PaaS for the startup I work at, I had to go with Render. I couldn't even give Fly.io a try since I knew we needed Postgres.
tptacek 18 hours ago [-]
We're rolling out Managed Postgres, very slowly.
yellow_lead 16 hours ago [-]
Looking forward to it!
nico 18 hours ago [-]
I think they are in beta. I wish they had a managed Redis though.
For Postgres I ended up going with Neon (neon.tech), very happy with them so far. Super easy to setup and get up and running, also love being able to just easily see the data from their web interface
For something rather new there seem to be too many choices already. Please pick a strategy under one name, good defaults, and a couple of config options.
tiffanyh 19 hours ago [-]
[flagged]
vhodges 19 hours ago [-]
I think that might be a structure/css issue. On Desktop it's to the left of the article (but I shrunk the window and indeed it puts it below the article).
xmorse 7 hours ago [-]
Fly feels like it's going to be bankrupt soon
internet_points 7 hours ago [-]
why?
xmorse 6 hours ago [-]
The landing page shows all logos of small companies, including one that is migrating away from them (Turso)