This is rather old and there are better ways of doing most of these things now. For instance, the counting example would usually be performed much more efficiently using the ROW_NUMBER() window function instead of a Cartesian product. When you can remove a cross product from your process it is almost always beneficial to do so. That will often involve introducing a CTE[0][1], which might put off some beginners[2][3], but it shouldn't, as this sort of example is pretty simple (you aren't worrying about any recursive case).
----
[0] Because you want the ordinal of the row in the input table/view, not your output.
[1] You could also use a sub-query, in most cases a good query planner will see the equivalence and do the same thing for either. The CTE option is easier to read and maintain IMO.
[2] In databases, like sports, CTEs can result in headaches!
[3] Or veterans of postgres, where until a few years ago (before version 12) CTEs were an optimisation fence, blocking predicate push-down and making some filtered queries a lot more expensive (though often no more so than the naive Cartesian product method).
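A minimal sketch of the two approaches (Python driving SQLite so it's runnable; the `scores` table is invented for illustration, and the window-function version needs SQLite ≥ 3.25):

```python
import sqlite3

# Hypothetical example table; names are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (player TEXT, score INT)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 10), ("b", 30), ("c", 20)])

# Old-style Cartesian (triangular) self-join: O(n^2) comparisons.
cartesian = con.execute("""
    SELECT s1.player, COUNT(*) AS seq_num
    FROM scores s1 JOIN scores s2 ON s2.score <= s1.score
    GROUP BY s1.player
    ORDER BY seq_num
""").fetchall()

# Window-function version: a single ordered pass, via a simple
# (non-recursive) CTE.
windowed = con.execute("""
    WITH ranked AS (
        SELECT player, ROW_NUMBER() OVER (ORDER BY score) AS seq_num
        FROM scores
    )
    SELECT player, seq_num FROM ranked ORDER BY seq_num
""").fetchall()

print(cartesian)  # [('a', 1), ('c', 2), ('b', 3)]
print(windowed)   # [('a', 1), ('c', 2), ('b', 3)]
```

Both produce the same sequence numbers, but the self-join compares every row against every other row, while the window function makes one sorted pass.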
alphazard 14 hours ago [-]
I always tell people to worry about the data structures that you want the database to maintain for you, and not worry about the SQL.
You can always use Google to look up the SQL, or now ChatGPT to generate it for you.
SQL is a not-that-great language and it intentionally hides what's going on.
It is also different enough between databases that you need to pay attention.
So learning to design/think in terms of SQL is probably not worth doing.
The set of data structures that you use to model and index a dataset is worth understanding, and designing in that space is a skill worth learning.
baq 3 hours ago [-]
You’re kinda right, but designing for a particular RDBMS with awareness of the queries that will be performed, and thus of the indexes necessary (…or not ;), is really not that far from what you propose. The only issue is that beginner SQL learning material says ‘it’s declarative, don’t worry about what’s happening as long as you get a good result’, and that just isn’t true in any non-trivial application of SQL.
sgarland 12 hours ago [-]
Frankly, this is terrible advice. If you’re not designing your data model around the language it’s going to be queried in, how do you expect to get decent performance out of the database?
Also, in no way does SQL hide anything - it’s a declarative language, and will produce exactly what you tell it to, provided you understand what it is you asked it to do. The query engine is somewhat of a black box, but that is completely orthogonal.
chasil 8 hours ago [-]
> Also, in no way does SQL hide anything - it’s a declarative language, and will produce exactly what you tell it to.
Ha ha, no, SQL implementations can conform to the standard in unexpected ways.
NULL = NULL
Is that true or false? We didn't know until 2003.
https://en.wikipedia.org/wiki/Null_(SQL)#Criticisms
SQL is a declarative language, so by definition it hides the execution.
Not really sure what you’re trying to argue here.
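The canonical surprise here is NULL comparison under three-valued logic, sketched with SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# NULL = NULL evaluates to NULL (unknown), not TRUE: three-valued logic.
print(con.execute("SELECT NULL = NULL").fetchone())  # (None,)

# A WHERE clause only keeps rows where the predicate is TRUE,
# so an unknown comparison silently drops the row.
con.execute("CREATE TABLE t (x INT)")
con.execute("INSERT INTO t VALUES (NULL)")
print(con.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone())   # (0,)
print(con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone())  # (1,)
```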
sgarland 10 hours ago [-]
Parent made it sound - to me - that you put an input in and hope for the best. If you understand the operators, you can quite confidently predict an output given an input.
halfcat 9 hours ago [-]
> If you understand the operators
That’s the point. In an imperative language if you don’t yet understand (or make a typo, or whatever), you can just print/console.log and find out.
I’ve seen junior devs, data analysts, and LLMs spin their wheels trying to figure out why adding a join isn’t producing the output they want. I don’t think they would figure it out using SQL alone if you gave them a month.
SkiFire13 3 hours ago [-]
The equivalent of `print`/`console.log` in SQL would be using subqueries/CTEs and running them on their own to see the intermediate result (just like `print`/`console.log` shows you the intermediate results of the execution in an imperative language).
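A quick sketch of that workflow (Python + SQLite; the `orders` table is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INT, customer TEXT, total INT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "a", 50), (2, "a", 150), (3, "b", 200)])

# The full query, with the intermediate step named as a CTE.
full = con.execute("""
    WITH big_orders AS (
        SELECT customer, total FROM orders WHERE total > 100
    )
    SELECT customer, COUNT(*) FROM big_orders
    GROUP BY customer ORDER BY customer
""").fetchall()

# "print debugging": run the CTE body on its own to inspect
# the intermediate result the outer query consumes.
intermediate = con.execute(
    "SELECT customer, total FROM orders WHERE total > 100"
).fetchall()

print(intermediate)  # [('a', 150), ('b', 200)]
print(full)          # [('a', 1), ('b', 1)]
```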
crazygringo 8 hours ago [-]
You missed the "performance" part.
Depending on how you write your query and how you structure your data, a query can take 0.005 seconds or 500 seconds.
SQL hiding the execution is an extremely leaky abstraction. To get the performance you need, you have to plan your possible queries in advance together with how to structure the data.
I mean, it doesn't matter if you only have 100 rows per table, but once you're dealing with multiple tables with millions of rows each it's everything.
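SQLite's EXPLAIN QUERY PLAN makes the point visible even in a toy; exact plan wording varies between SQLite versions, so treat the strings below as illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INT, ts INT)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable detail,
    # e.g. 'SCAN events' vs 'SEARCH events USING INDEX ...'.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)                                        # full table scan
con.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(query)                                         # index search

print(before)  # e.g. ['SCAN events']
print(after)   # e.g. ['SEARCH events USING INDEX idx_events_user (user_id=?)']
```

Same query, same data model otherwise; the plan flips from a scan over every row to an index lookup, which is exactly the 0.005s-vs-500s difference at millions of rows.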
awesome_dude 12 hours ago [-]
I don't get this - my database is going to be normalised to whatever is optimal (3rd normal form generally, denormalised for higher load/sharded/caching)
The indexing is where the main optimisations take place - hashmap indexes, or clustering indexes for priority queues.
What am I missing?
yakshaving_jgt 13 hours ago [-]
For posterity, how would you recommend the average working programmer go about doing that?
alphazard 10 hours ago [-]
An intro data structures course is worth watching if you haven't taken one. There are plenty of them on YouTube.
Try to follow along with a language that has an explicit pointer type. Go is a good choice. Java and Python are worse choices (for this particular thing) IMO.
Assuming you are familiar with trees and hashmaps, you have all the important building blocks. You can imagine a database as a bunch of trees, hashmaps and occasionally other stuff, protected by a lock.
First you acquire the lock, then you update some of the data structures, and maybe that requires you to update some of the other data structures (like indexes) for consistency. Then you release the lock.
By default, most data will live in a BTree with an integer primary key, and that integer is taken from a counter that you increment for new inserts.
Indexes will be BTrees where the key is stuff you want to query on, and the value is the primary key in the main table.
Using just those data structures you should be able to plan for any query or insert pattern. It helps to figure this out yourself in a programming language for a few practice cases, so you know you can do it. Eventually it will be easy to figure out what tables and indexes you need in your head. In the real world, this stuff is jotted down in design docs, often as SQL or even just bullets.
That's really all you need, and that's where I recommend getting out of the rabbit hole. Query planners are pretty good. You can usually just write SQL and if you did the work to understand what the tables and indexes should be, the planner will figure out how to use them to make the query fast.
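That mental model can be sketched in a few lines of Python. This is a toy under the assumptions above (a dict standing in for a BTree, one big lock), not how a real engine is built:

```python
import threading
from itertools import count

class TinyTable:
    """Toy model of a table: a main structure keyed by an auto-increment
    integer PK, plus secondary indexes mapping column value -> PKs,
    all updated together under one lock."""
    def __init__(self, indexed_columns):
        self._lock = threading.Lock()
        self._next_pk = count(1)                          # counter for new inserts
        self._rows = {}                                   # pk -> row ("main BTree")
        self._indexes = {c: {} for c in indexed_columns}  # col -> {value -> {pks}}

    def insert(self, row):
        with self._lock:                      # acquire the lock...
            pk = next(self._next_pk)
            self._rows[pk] = row              # ...update the main structure...
            for col, idx in self._indexes.items():
                idx.setdefault(row[col], set()).add(pk)   # ...keep indexes consistent
            return pk                         # lock released on exit

    def find_by(self, col, value):
        # Index lookup: value -> pks -> rows, instead of scanning every row.
        with self._lock:
            return [self._rows[pk]
                    for pk in sorted(self._indexes[col].get(value, ()))]

t = TinyTable(indexed_columns=["email"])
t.insert({"email": "a@example.com", "name": "Ann"})
t.insert({"email": "b@example.com", "name": "Bob"})
print(t.find_by("email", "b@example.com"))  # [{'email': 'b@example.com', 'name': 'Bob'}]
```

Swap the dicts for BTrees and the single lock for finer-grained concurrency control and you have the skeleton the comment describes.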
belfthrow 8 hours ago [-]
Java is a bad language for this compared to Go? Is this legitimate advice on a serious programming blog? Pretty unbelievable honestly.
ryanjshaw 13 hours ago [-]
Read code from other projects
phartenfeller 13 hours ago [-]
SQL is beautiful in its own way. Definitely not easy to master but beautiful in how much business logic you can implement in not many lines of code.
And with SQL macros becoming a thing it is now easily possible to store patterns as reusable functions with parameters.
potatoproduct 15 hours ago [-]
Not ashamed to admit that I never really thought about the DISTINCT operator 'being redundant', as it's essentially just a GROUP BY.
stevage 11 hours ago [-]
Huh, I have always just thought of it as a syntactic shortcut.
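The equivalence is easy to check directly (Python + SQLite sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INT)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (2,), (3,), (3,)])

# DISTINCT and a bare GROUP BY on the same columns return the same rows.
distinct = con.execute("SELECT DISTINCT x FROM t ORDER BY x").fetchall()
grouped  = con.execute("SELECT x FROM t GROUP BY x ORDER BY x").fetchall()

print(distinct)  # [(1,), (2,), (3,)]
print(grouped)   # [(1,), (2,), (3,)]
```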
morkalork 15 hours ago [-]
distinct has always felt like a query smell to me. Too many junior analysts abusing it because they don't know the schema well and are over-joining entities
ryanjshaw 13 hours ago [-]
Sometimes the number of joins is fine but they don’t understand the data properly and should be spending more time understanding why multiple rows are being returned when they expect one (eg they need to filter on an additional field).
I wish SQL had a strict mode syntax that forces you to use something like `select one` (like LINQ’s Single()) or `select many` to catch these kinds of bugs.
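Absent such a mode in SQL itself, the check is easy to enforce at the call site. A sketch of a Single()-style helper in Python (the name `fetch_one_strict` is made up):

```python
import sqlite3

def fetch_one_strict(con, sql, params=()):
    """Like LINQ's Single(): return the only row, or raise if the query
    produced zero rows or more than one (a hypothetical helper)."""
    rows = con.execute(sql, params).fetchmany(2)  # 2 rows is enough to detect the bug
    if len(rows) != 1:
        raise LookupError("expected exactly 1 row, got " +
                          ("0" if not rows else "2+"))
    return rows[0]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INT, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "a@example.com"), (2, "a@example.com")])

# The "expected one row" bug surfaces immediately instead of silently
# propagating duplicates downstream.
try:
    fetch_one_strict(con, "SELECT id FROM users WHERE email = ?",
                     ("a@example.com",))
except LookupError as e:
    print(e)  # expected exactly 1 row, got 2+
```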
dspillett 13 hours ago [-]
DISTINCT is often a smell at the head (or middle) of a complex query as you are throwing away processed information, sometimes a lot of it, late in the game. Much better to filter it out earlier and not process it further, where possible. Filtering earlier, as well as reducing waste processing time (and probably memory use), increases the chance of the query planner being able to use an index for the filter which could greatly decrease the IO cost of your query.
paulddraper 8 hours ago [-]
SELECT DISTINCT is often a code smell. (Not always.) If you see it, there’s a 70% chance it got slapped on to fix an issue that should have been solved a different way.
SELECT DISTINCT ON is different, and useful.
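DISTINCT ON is Postgres-specific ("first row per group" after an ORDER BY); where it isn't available, the portable equivalent is ROW_NUMBER, sketched here with SQLite (≥ 3.25) via Python. The `readings` table is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (sensor TEXT, ts INT, value INT)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                [("a", 1, 10), ("a", 2, 11), ("b", 1, 20)])

# Postgres: SELECT DISTINCT ON (sensor) sensor, value
#           FROM readings ORDER BY sensor, ts DESC;
# Portable equivalent: latest reading per sensor via ROW_NUMBER.
latest = con.execute("""
    WITH ranked AS (
        SELECT sensor, value,
               ROW_NUMBER() OVER (PARTITION BY sensor ORDER BY ts DESC) AS rn
        FROM readings
    )
    SELECT sensor, value FROM ranked WHERE rn = 1 ORDER BY sensor
""").fetchall()

print(latest)  # [('a', 11), ('b', 20)]
```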
aspaviento 3 hours ago [-]
I had a teacher who had specific rules for exams when we wrote SQL statements:
- For a question worth 2 points, if you use the word "DISTINCT" when it wasn't needed, you lose 0.5 points.
- If you don't use "DISTINCT" when it was necessary, you lose all 2 points.
jslaby 15 hours ago [-]
Of course, trying out the first example doesn't work on SQL Server..
datadrivenangel 15 hours ago [-]
"We use Oracle syntax and write <column expr> <alias> instead of ANSI SQL <column expr> AS <alias>. Ditto for table expressions"
Footnote on page 3.
jslaby 15 hours ago [-]
T-SQL can handle that alias expression just fine, but the seqNum returned is 4, 8, 12, 16, 20 instead of 1, 2, 3… I tried it on MySQL and it works fine. I'm not sure how SQL Server is handling those Cartesian joins differently.