> Most relational database management systems do not support nested records, so tables are in first normal form by default. In particular, SQL does not have any facilities for creating or exploiting nested tables. [0]
Your link already points out that this isn't followed anymore since JSON has been added as a standard SQL feature.
[0]: https://en.wikipedia.org/wiki/First_normal_form
aerzen 1 days ago [-]
Ok, hear me out: what if we make something that takes a postgres database dir, tars it together and encodes it as a binary blob in SQLite?
We could have SQLite within postgres within sqlite within postgres! Is it practical or even slightly useful? Of course not - but it's SQL databases all the way down. Not that this is a good thing in itself.
Take it one step further, the table-oriented database(tm) , embed clickhouse, MongoDB, Redis and PostgreSQL to ensure you have more flexibility than anyone can utilize efficiently. The one database to rule them all.
luismedel 16 hours ago [-]
Now you only need support for qcow columns which you can mount in your embedded engines... et voilà, enjoy your storage and compute separation.
next stop: mongodb inside sqlite inside postgresql
eastbound 20 hours ago [-]
Giving @Transactional(NESTED) a whole new meaning.
robertclaus 1 days ago [-]
What are the use cases for this? I can't imagine designing a database schema to use this in a typical product. Is it intended for hybrid applications to back up local user data directly with their account info?
simonw 1 days ago [-]
I can think of plenty.
The most interesting one for me is if you're running a SaaS product like Notion where your users create custom applications that manage their own small schema-based data tables.
Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?
I'd much rather manage tens of thousands of rows in a table where one of the columns is a BLOB with a little SQLite database in it.
munk-a 1 days ago [-]
> Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?
Yea, I'd be fine with that - postgres has the concept of databases and schemas within those databases. If you really want to build a product like that I'd suggest starting with per-tenant schemas that leverage table inheritance as appropriate. The permissions would be pretty easy to manage.
Though, in a lot of cases I've actually seen this done every client ends up with a dedicated server (or container - whatever tech you use to do it, something completely isolated from other instances) because user version management ends up being a huge issue. When you're building something that custom it's highly likely that version migrations need to be done with client oversight to ensure everything actually works.
I have yet to find an actual real world case where the inner-platform effect is the right solution. Usually when tools like that are selected the software ends up being so generic and flexible that it's useless. Custom application/BI environment development relies on really judiciously telling users they can't have most features - with the hard part being figuring out which features are necessary and which ones you can cut to reduce bloat.
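For what it's worth, a rough sketch of the schema-per-tenant plus table inheritance approach suggested above (the tenant and role names here are made up for illustration):
CREATE SCHEMA tenant_acme;
-- per-tenant table inheriting the shared definition
CREATE TABLE tenant_acme.users () INHERITS (public.users);
-- permissions scoped to the tenant's schema
GRANT USAGE ON SCHEMA tenant_acme TO tenant_acme_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA tenant_acme TO tenant_acme_role;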
jitl 23 hours ago [-]
Notion has 100 million users, managing schema-per-tenant at our scale sounds like a complexity nightmare. We have 480+ identical schemas across 100+ Postgres hosts, and that already takes a lot of brainpower & engineering time to manage T_T
bob1029 22 hours ago [-]
> managing schema-per-tenant at our scale sounds like a complexity nightmare.
The per-tenant schema could be the tenant's responsibility. Most non-technical users can handle the idea of tables & columns, assuming you leverage UI/UX patterns they are already familiar with.
As long as we never add new features, never need to change how we map UI <-> Postgres DDL, and our users never make any mistakes when they change their tables, it could work without being a complexity nightmare
mbesto 20 hours ago [-]
Curious - so how do you manage client-specific schemas then? Do you just have mappings in postgres (column1, column2, column3, etc.) or maybe store a client specific schema in bson per client?
jitl 14 hours ago [-]
It's all JSON in two Postgres tables: `collection` which represents a Notion Database and has a `collection.schema` JSONB column, and `block` which has a `block.properties` JSONB column that stores the property values - the stuff in the Notion Database cells - for each row. We apply "schema on read" when querying or rendering a Notion Database, and we have a service on the backend that builds indexes/caches for hot collections on the fly. The service handles all the queries for collections larger than X rows. For smaller collections, we just give the client the whole thing modulo permissions and it does the query locally.
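A stripped-down sketch of that shape (names simplified, everything Notion-specific omitted):
CREATE TABLE collection (
  id     UUID PRIMARY KEY,
  schema JSONB NOT NULL   -- user-defined property definitions, interpreted on read
);
CREATE TABLE block (
  id            UUID PRIMARY KEY,
  collection_id UUID REFERENCES collection(id),
  properties    JSONB NOT NULL   -- the per-row cell values, keyed by property id
);
-- "schema on read": the query interprets properties using collection.schema
SELECT b.properties ->> 'status' AS status
FROM block b
WHERE b.collection_id = '00000000-0000-0000-0000-000000000000';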
mediaman 20 hours ago [-]
Why not use jsonb for this kind of thing? Store the schema in one table, one per client, or perhaps one per table per client, and then store the data for that in another table, segregated by customer and table type, with row data stored in a JSONB field using that table's schema.
I normally don't like using JSONB when I could have a rigorous schema, but this sort of application seems reasonable.
rswail 1 days ago [-]
You manage a fleet of devices that need to get operating parameters regularly, but they're complicated and SQLite is a great mechanism for sending them.
So at the backend you have a postgres database that contains the device details etc as well as the operating parameters for that device.
You can update the operating parameters as part of a postgres transaction so either all the BLOBs are updated, or none.
Using /tmp on the postgres cluster (server) is a bit of a hack, it would be nicer to have memory based SQLite blobs.
In terms of security, you get Postgres row level security, so each SQLite value is protected in the same way as the rest of the row.
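A rough sketch of that kind of atomic fleet update using the extension's execute_sqlite (the devices table, params column and operating_parameters table are invented for illustration):
BEGIN;
UPDATE devices
SET params = execute_sqlite(
  params,
  'UPDATE operating_parameters SET poll_interval_seconds = 30'
)
WHERE fleet_id = 42;
COMMIT;  -- either every device's embedded SQLite blob is updated, or none of them are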
bravura 1 days ago [-]
The top line of the README says: "Embed an SQLite database in your PostgreSQL table. AKA multitenancy has been solved."
But I'm still having trouble trying to grok the intricacies of it. In a sense, I guess it has well isolated individual SQLite DBs and you'd have to go out of your way to join over them. With that said, does PostgreSQL manage and pool all the writes correctly? So you don't need to worry about SQLite concurrency issues?
pmontra 1 days ago [-]
If by solving multitenancy they mean
CREATE TABLE tenants (
  id BIGINT NOT NULL,
  database SQLITE DEFAULT execute_sqlite(
    empty_sqlite(),
    'CREATE TABLE users (etc.)'
    -- and all the other tables for each tenant
  )
);
then they don't need to make joins between sqlite dbs.
Your other concerns are very real. Those sqlite dbs could become very large.
I prefer the use case depicted in another reply: preparing sqlite dbs before shipping them to their own devices. Or maybe receiving them and performing analysis, maybe after having imported it in overall psql tables. Or similar scenarios in which all the db is read or written at once. Anyway, once we have a tool we start using it.
tucnak 22 hours ago [-]
> then they don't need to make joins between sqlite dbs.
The extension could also provide custom index access methods (considering that SQLite only has a handful of column types in the first place.) That would allow you to incorporate the keys in the index heaps, as opposed to table heaps, boom, you get bitmap index scans for Joins, i.e. GIN but with a bit more redundancy.
rswail 1 days ago [-]
Each of the columns is an instance of a SQLite database, so I assume (without looking at the source) that they properly multi-thread as needed.
So there are no cross-SQLite-database connections or multiple writers going on.
rswail 1 days ago [-]
You could join over them, but not really in the way you're thinking.
Each of the columns that are databases would be updated when the functions execute.
You could do weird crap like INSERT/DELETE as part of a postgres level SELECT.
Tostino 21 hours ago [-]
You can do that with any function already. This isn't new because of nested databases.
mixmastamyk 21 hours ago [-]
I’m thinking maybe you’d like to use litefs for multi-tenant dbs close to the tenant. But perhaps you’ll want a centralized billing/reports database under postgres as well?
So, instead of saving the client sqlite db of the org to cloud storage you save it to the centralized db column instead. Litefs probably doesn’t support it yet, but wouldn’t be too hard to add.
abrookewood 3 hours ago [-]
"multitenancy has been solved" - I'm confused, can you not use multiple SQLite files??
ecuaflo 23 hours ago [-]
I think SQLite columns for SQLite would be superior to SQLite’s JSON columns whose operators are a whole ‘nother query language you need to learn and seem comparatively limited.
james_marks 21 hours ago [-]
Agreed, the JSON search queries in Postgres are esoteric, to say the least.
But after spending some time with a mixed-schema table at even modest scale, I’m wondering how often a better design could have cut the whole problem off.
maCDzP 22 hours ago [-]
I spent some time learning those queries for a project and when I grokked it they were very handy.
xyc 21 hours ago [-]
With Claude you barely have to learn the language these days as you just need to prompt, but a SQLite column is an interesting idea.
ray_v 22 hours ago [-]
Wouldn't that just be a foreign key to another table, or a list of keys, or am I missing something?
simonw 1 days ago [-]
The /tmp file mechanism sounds like a bit of a hack, is that definitely necessary?
It may be possible to create a SQLite in-memory database instead and then load the binary blob data into it using the backup API or some kind of trick with VACUUM INTO.
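For the serialize direction at least, plain SQLite SQL can already do this with VACUUM INTO (available since SQLite 3.27); loading a blob back into an in-memory database would still need the backup or deserialize C APIs. A sketch, with a made-up path:
-- from a connection whose main database is in memory, write a compact copy to a single file
VACUUM INTO '/dev/shm/snapshot.sqlite3';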
michelpp 1 days ago [-]
I think the right approach would be to store the sqlite database as a varlena type that can be TOASTed and then "expanded" using the Expanded Datum API so that it's a live open database connection for the life of the transaction:
https://www.postgresql.org/docs/17/xtypes.html#XTYPES-TOAST
https://github.com/postgres/postgres/blob/master/src/include...
https://github.com/michelp/pgexpanded
"Introducing pglite-fusion: Embed SQLite Databases Directly Into PostgreSQL Tables
What is pglite-fusion?
pglite-fusion is a powerful PostgreSQL extension that bridges two popular database worlds by allowing you to embed SQLite databases directly into PostgreSQL tables. Each row can contain its own SQLite database using the new `SQLITE` column type—perfect for scenarios where you need highly localized or structured data at the row level.
How It Works
- SQLITE Type: Define columns in PostgreSQL that hold embedded SQLite databases, stored as CBOR-encoded `Vec<u8>` objects.
- Querying SQLite: Use the `query_sqlite` function to run SQL queries on the embedded SQLite databases. Results are returned as JSON-encoded arrays in a single row.
- Updating SQLite: Use `execute_sqlite` to modify the SQLite database and retrieve updated instances.
Why You'll Love It
- Seamlessly blend the simplicity of SQLite with the scalability of PostgreSQL.
- Unlock new use cases like row-specific data isolation, nested data structures, or lightweight experimentation with different SQLite schemas.
- Built on Rust using the pgrx framework for safety and performance.
Under the Hood
pglite-fusion temporarily stores SQLite databases as files in `/tmp` to perform operations. SQLite processes these files to execute queries or updates and passes the results back into PostgreSQL.
Who Is It For?
Developers and DBAs who want fine-grained control over data storage, hybrid database use cases, or just love the flexibility of combining PostgreSQL and SQLite in innovative ways.
SahAssar 17 hours ago [-]
Nobody wants AI-generated comment summaries, even less so without disclosing it's generated.
bni 1 days ago [-]
1NF crime against humanity?
klysm 20 hours ago [-]
If you’re using Postgres, multi tenancy has been solved with row level security. It’s super easy to add a tenant id column to every table and a policy that only allows connections to see data from one tenant
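A minimal sketch of that pattern, assuming the tenant id is carried in a session setting (table, column and setting names are illustrative):
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.tenant_id')::bigint);
-- each connection then runs something like:
SET app.tenant_id = '42';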
michelpp 20 hours ago [-]
RLS is very useful and can solve multi tenancy and other problems, but it is complicated and can add a significant per row cost to queries if your policies get complicated.
The common path of comparing some constant like the role name to some column in the table is fine, and it's fast enough as the policy checker already has the row in hand when it does the check, but the natural tendency for people to want to abstract their policies into a function like has_permission() will blow up fast.
The best approach I've seen is from pyramation's launchql [1], which precomputes policies into a bitstring and then masks that against a query constant bitstring of required permissions. Flexible policy definitions compiled into the row as bits so the check is as fast as possible.
[1] https://github.com/launchql/launchql
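I haven't used launchql, but the masking idea could look roughly like this (the documents table and acl_bits column are hypothetical, and the bit width is arbitrary):
-- per-row permission bits are precomputed at write time, assuming acl_bits is bit(8);
-- the query supplies a constant mask of the bits it requires
CREATE POLICY bitmask_check ON documents
  USING ((acl_bits & B'00000101') = B'00000101');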
Sure if you start using it for more than just multitenancy you can get into performance trouble or other complexities. I haven’t felt tempted to put anything beyond the tenant level isolation though yet and it’s served us very well
toasterlovin 19 hours ago [-]
Multi-tenancy causes performance issues that simply don't exist if each customer's data is in its own database.
arkh 22 hours ago [-]
You may want to use the $$ way to declare strings for your examples. Something like:
-- Create a todo for "frectonz"
UPDATE people
SET database = execute_sqlite(
database,
$sqlite$INSERT INTO todos VALUES ('solve multitenancy')$sqlite$
)
WHERE name = 'frectonz';
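The read side can use the same trick; a sketch with query_sqlite, which per the project description returns results as a JSON-encoded array:
-- Read back the todos for "frectonz"
SELECT query_sqlite(
  database,
  $sqlite$SELECT * FROM todos$sqlite$
)
FROM people
WHERE name = 'frectonz';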
frectonz 22 hours ago [-]
oh nice, i didn't know this existed, thanks
Tostino 21 hours ago [-]
The string between the dollar signs can only be closed by another set of dollar quotes with the same string between them. So it allows you to do quotes within quotes within quotes if necessary.
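A small illustration (the tag names are arbitrary):
SELECT $outer$It's fine to put 'single quotes' and even $inner$another dollar-quoted span$inner$ in here$outer$;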
zekenie 1 days ago [-]
I’m trying to think through when I’d reach for this over jsonb… I guess the fact that there’s an enforced schema? And that you could do aggregations on your SQLite db? Or maybe if you wanted to send the whole db to a client??
sgarland 1 days ago [-]
> enforced schema
I have bad news for you [0] about SQLite’s view on schema consistency.
[0]: https://www.sqlite.org/quirks.html
I love using the database as the source of truth for data consistency, constraining your data so it can only enter the database in a valid state.
It's easy enough to replicate those constraints to the client if you want the client to do ahead of time validation, but your source of truth lives in the database...
I wouldn't survive with SQLite.
sgarland 19 hours ago [-]
You can make it behave with its STRICT mode, but that’s fairly recent, and it’s also just upsetting that it has to exist in the first place.
Completely agree that the DB should be the arbiter of validity. Constraints are a good thing.
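For reference, a small illustration of the difference in SQLite:
-- default (flexible typing): the text value is kept even in an INTEGER column
CREATE TABLE loose (n INTEGER);
INSERT INTO loose VALUES ('not a number');    -- accepted, stored as TEXT
-- STRICT table (SQLite 3.37+): the same insert is rejected
CREATE TABLE strict_t (n INTEGER) STRICT;
INSERT INTO strict_t VALUES ('not a number'); -- error: cannot store TEXT value in INTEGER column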
Pinus 21 hours ago [-]
What, no operators? I want indexes on these columns, and some weird and wonderful operator syntax for doing cross-database joins between multiple DATABASE columns! =)
rustman123 23 hours ago [-]
This is likely great for serving SQLite data to frontends using the SQLite http-vfs.
Would be great combined with functions/triggers/views to mirror specific data/queries from Postgres as SQLite.
dboreham 21 hours ago [-]
This is not as crazy as it sounds but I'd rather have PG-in-PG than the heterogeneous arrangement here. PG in S3 would be useful too.
counterpartyrsk 8 hours ago [-]
Posts like this make me realize I never even scratched the surface of any technology ever.
snthpy 6 hours ago [-]
If you do the same for DuckDB you could call it pgducken. ;-)
tacone 20 hours ago [-]
Please let us embed that sqlite-hosting postgres in a TXT DNS record. We really need that.
bitwize 23 hours ago [-]
Yo, dawg, I heard you like databases...
This is nuts. I can't think of a use for it, but I'm sure it's "a solution that will eventually find a problem".
unregistereddev 20 hours ago [-]
A different approach:
I had a project that stored a tremendous amount of spatial data. There were "sessions" of spatially-tagged time-series data that would be individually processed (think generating a map layer from time-series data). There were also reasons to perform higher level aggregations that did not dive into the time series data. The data density was high enough that it was impractical to build spatial indices over the entire dataset. Even using space-filling curves as multidimensional B-trees would require so many lookups that queries were impractically slow.
One POC I tried (and then rejected as an abomination) was to store each session's time-series data inside a SQLite database with SpatiaLite extensions enabled. Then store each session's metadata, including spatial extent, in a Postgres database. The SQLite files were tossed in S3 and referenced from Postgres. I guess I could have inserted them directly into a BLOB column inside Postgres.
aaronbwebber 20 hours ago [-]
I was _extremely disappointed_ not to see this meme when I clicked on the link. Will not consider using this extension until Xzibit is prominently featured.
TekMol 24 hours ago [-]
Are there still reasons to use PostgreSQL?
I like the simplicity of SQLite's "a file is all you need" approach so much, that I started to converge all my projects to SQLite. So far, I have not come across any roadblocks.
Can anyone think of a use case where PostgreSQL is better suited than SQLite?
prisenco 24 hours ago [-]
The biggest one is redundancy. Architecting with read replicas is much easier with Postgres than Sqlite because of its server model.
Sqlite on the server is a fantastic starter database. Dead simple to set up, highly performant and scales way higher (vertically) than anyone gives it credit for.
But there certainly is a point you'll have to scale out instead of up, and while there are some great solutions for that (rqlite, litefs, dqlite, marmot) it's not inherent to Sqlite's design.
otoolep 23 hours ago [-]
rqlite[1] creator here, happy to answer any questions about it.
[1] https://rqlite.io
Should replication really be a concern of the DB layer?
Replication means writing queries which alter the data to multiple machines, right?
Shouldn't that be done by a software one level up? Which takes in the queries via some network protocol and then sends them to all machines.
That would sound more logical to me.
prisenco 23 hours ago [-]
Historically, yes. Databases were software that were concerned with both storage and networking.
It's fine to want to separate those out, but it's not easy to do so and there are reasons they've been coupled for decades.
TekMol 23 hours ago [-]
What makes it hard?
Having a single DB that takes write queries via a proxy which spreads them out to multiple read-only-DBs sounds easy at first.
abtinf 23 hours ago [-]
When do you consider the write/transaction to be completed?
What do you do about out-of-sync read replicas?
ACID gets real hard real fast when introducing replication.
TekMol 23 hours ago [-]
> When do you consider the write/transaction to be completed?
Sending an UPDATE/INSERT/DELETE statement to SQLite is not blocking? I would think it is, because in my code I can read the number of affected rows right after I sent the query.
> What do you do about out-of-sync read replicas?
Delete them and replace them by uploading a checkpoint and replaying a log of the queries since then.
Tostino 20 hours ago [-]
If you are doing statement level replication, you better make sure every query is run in the exact same order (and finishes in the same order).
Without that you will have drift from your master database.
With that, you have a whole new host of synchronization issues you need to deal with.
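A toy illustration of that drift, assuming statement-based replication with two replicas applying the same statements in different orders:
-- primary, starting from balance = 100:
UPDATE accounts SET balance = balance * 2;    -- balance is now 200
UPDATE accounts SET balance = balance + 10;   -- balance is now 210
-- a replica that applies the same two statements in the opposite order
-- ends up at (100 + 10) * 2 = 220 and has silently drifted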
buildbuildbuild 24 hours ago [-]
Sometimes you have applications that should not be able to access an entire database. There are other various scaling reasons, and PG extensions that can be helpful. But I agree that for small to medium sized projects, SQLite is highly underrated.
Zambyte 24 hours ago [-]
When your application scales beyond one machine that needs access to the same database, PostgreSQL becomes an obviously better choice than SQLite. Until that point, SQLite is a fine, and honestly underrated choice.
DuckDB is another option worth considering.
TekMol 24 hours ago [-]
Should the concept of "machines" really be a concern of the DB layer?
SQLite already allows multiple connections, so putting it on a server and adding a program that talks a network protocol and proxies the queries to the DB sounds more logical to me?
evandrofisico 24 hours ago [-]
And after all of that you basically have something that looks like postgres or mysql.
TekMol 23 hours ago [-]
My feeling is that I would have something better.
Because I can use SQLite and its "a file is all you need" approach as long as I don't need multiple machines.
And only bring in the other software (the proxy) when I need it.
Zambyte 22 hours ago [-]
High performance software is written acknowledging the reality that it will run on hardware. Databases tend to be a class of software that is hyper-focused on performance.
Writing a networked application that uses SQLite as a database is perfectly reasonable. You're just making the decision to lift the layer of abstraction that is concerned with machines from the DB to your application, which may or may not be a reasonable thing to do.
MySQL has limited spatial data/function support versus PostGIS extension.
randomdata 24 hours ago [-]
Certainly if you need a network-attached database and aren't creating your own home brew network-attached database (the so-called API server), Postgres is a pretty good choice.
redwood 23 hours ago [-]
Concurrency.
2codr2pro2max 24 hours ago [-]
Jeebus i keep forgetting how many of you are bootcamp noobs here
prisenco 24 hours ago [-]
Maybe so but this isn't helpful.
foul 24 hours ago [-]
A query on very ugly tables will count 26 thousand occurrences of the apostrophe, I love it
Apreche 1 days ago [-]
That’s fun, but I think I'll just use an SQLite foreign data wrapper instead.
frectonz 24 hours ago [-]
That's the sensible option but not the fun one.
michelpp 24 hours ago [-]
Then you need one wrapper per database, with this approach you can have one database per row.
ellisv 22 hours ago [-]
But can I have one row that holds all the databases?
CREATE TABLE crime_against_humanity ( databases SQLITE[] );
With the expanded datum API you can also work with subscriptable array types to only expand elements lazily as needed. It might already work if you try it, but support for it might be hardwired only to nested stock arrays, something to look into.
BiteCode_dev 22 hours ago [-]
Someone, somewhere will eventually find a legitimate use case for it.
NoMoreNicksLeft 21 hours ago [-]
We should have a response team standing by, ready to dump thousands of tons of concrete onto that legitimate use case. A gigantic cement sarcophagus that may not solve the problem, but our descendants thousands of years from now may be better prepared to do what we can't and destroy it. The "someone" will just have to be a tragic casualty, as we won't be able to save him or her without risking the contagion spreading.
kunley 23 hours ago [-]
Speed, anyone?
How long does it take to update a table of, say, 1k rows? 1m rows? Same when subqueries and joins are involved to calculate what's to be updated?
kevincox 23 hours ago [-]
The current implementation is writing out the DB to `/tmp` then reading the resulting file back and writing it to the column.
So on the bright side updating 1k rows takes the same amount of time as updating one row. On the other hand every write is a full table write (actually two).
I don't think there is a way to do this efficiently with the current API as PostgreSQL is MVCC so it needs to write out each version separately (unless it has some sort of support for partial string sharing, I don't think so). Maybe a better version of this would write every page of the SQLite DB as a separate row so that you only need to update the changed pages.
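A sketch of what that page-per-row layout could look like (hypothetical, not something pglite-fusion does today):
CREATE TABLE sqlite_pages (
  db_id   BIGINT  NOT NULL,
  page_no INTEGER NOT NULL,
  page    BYTEA   NOT NULL,   -- one fixed-size SQLite page
  PRIMARY KEY (db_id, page_no)
);
-- an UPDATE then only rewrites the rows for pages SQLite actually touched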
Suppafly 20 hours ago [-]
>The current implementation is writing out the DB to `/tmp` then reading the resulting file back and writing it to the column.
I think there was already another comment where someone told OP how to solve that.
klysm 20 hours ago [-]
The real win of this seems to be schema divergence within one column?
amazingamazing 23 hours ago [-]
Yet again SQLite is overrepresented here in the server context.
nojvek 18 hours ago [-]
Yo dawg, I heard you liked databases. So I put a database inside your database!
Exactly what we need from a Show HN.
DeathArrow 18 hours ago [-]
How would embedding a database inside another help someone?
StayTrue 1 days ago [-]
Yo dawg I heard you liked databases so we put a database in your database.
eddieroger 21 hours ago [-]
Dang wish this wasn't the bottom comment because it is amazing. If it wasn't here, I'd have posted it.
kramer2718 1 days ago [-]
(laughs in xzibit)
mtharrison 24 hours ago [-]
Came for this