I really like the second part of the blogpost but starting with Gaussian elimination is a little "mysterious" for lack of a better word. It seems more logical to start with a problem ("how to solve linear equations?" "how to find intersections of lines?"), show its solution graphically, and then present the computational method or algorithm that provides this solution. Doing it backwards is a little like teaching the chain rule in calculus before drawing the geometric pictures of how derivatives are like slopes.
egonschiele 24 hours ago [-]
Author here – I think you're probably right. I wrote the Gaussian elimination section more as a recap, because I figured most readers have seen Gaussian elimination before, and I was keen to get to the rest of it. I'd love to hear if other folks had trouble with this section. Maybe I need to slow it down and explain it better.
turingbook 3 hours ago [-]
Do you have any plan to turn it into a full book—maybe called Grokking Linear Algebra ?
egonschiele 3 hours ago [-]
Lol. Maybe! I did enjoy writing Grokking Algorithms, but writing a full book is a real commitment. That one took me 3 years.
maybewhenthesun 12 hours ago [-]
I actually really liked the Gaussian elimination part. It's a term you hear often, and 'demystifying' it is good imho.
Only nitpick I have is that it's a pity you use only 1 and 2 in the carbs example. Because of the symmetry, it's harder to see which column/row matches which part of the vector/matrix: there are only 1s and 2s, and they fit both horizontally and vertically...
Syntonicles 23 hours ago [-]
Loved the article, and also the shoutout to Strang's lectures.
I agree about the order - the Gaussian elimination should come later. I almost closed the article; glad I kept scrolling out of curiosity.
Also I felt like I had been primed to think about nickels and pennies as variables rather than coefficients due to the color scheme, so when I got to the food section I naturally expected to see the column picture first.
When I encountered the carb/protein matrix instead, I perceived it in the form:
[A][x], where the x is [milk bread].T
so I naturally perceived the matrix as a transformation and saw the food items as variables about to be "passed through" the matrix.
But another part of my brain immediately recognized the matrix as a dataset of feature vectors, [[milk].T [bread].T], yearning for y = f(W @ x).
I was never able to resolve this tension in my mind...
emmelaich 16 hours ago [-]
To some, "Now we can add the two equations together to eliminate y" might need a little explanation.
The (an) answer is that since the LHS and RHS are equal, you can choose to add or subtract them to another equation and preserve equality.
If I remember correctly, substitution (isolating x or y) was introduced before this technique.
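A tiny numeric sketch of that point (the coefficients here are made up for illustration, not taken from the article):

```python
# Two equations: 2x + y = 8 and 3x - y = 7.
# Since each LHS equals its RHS, adding the equations
# preserves equality and eliminates y:
#   (2x + y) + (3x - y) = 8 + 7  ->  5x = 15  ->  x = 3
x = (8 + 7) / 5    # add the RHSs, divide by the combined x coefficient
y = 8 - 2 * x      # back-substitute into the first equation
print(x, y)        # 3.0 2.0
```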
anthk 12 hours ago [-]
Positive proportion - negative proportion = 0.
rzz3 14 hours ago [-]
I hadn’t, and your article lost me there to be honest. You didn’t explain the what, why, or when behind it, and it didn’t make sense to me at all. That said, I’m abnormally horrible at math.
egonschiele 5 hours ago [-]
Noted! I may make a totally separate post on gaussian elimination. Could you talk me through what parts were confusing, and would you be willing to review a post on gaussian elimination to see if it works for you?
thaumasiotes 11 hours ago [-]
> You didn’t explain the what, why, or when behind it
>> The trouble starts when you have two variables, and you need to combine them in different ways to hit two different numbers. That’s when Gaussian elimination comes in.
>> In the last one we were trying to make 23 cents with nickels and pennies. Here we have two foods. One is milk, the other is bread. They both have some macros in terms of carbs and protein:
>> and now we want to figure out how many of each we need to eat to hit this target of 5 carbs and 7 protein.
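Those quoted steps boil down to solving a 2x2 system. A sketch with hypothetical per-serving macros (the article's actual numbers use only 1s and 2s, so these are stand-ins, not its exact matrix):

```python
import numpy as np

# Hypothetical macros: a milk has 1 carb and 2 protein,
# a bread has 2 carbs and 1 protein.
A = np.array([[1.0, 2.0],   # carbs row:   1*milk + 2*bread = 5
              [2.0, 1.0]])  # protein row: 2*milk + 1*bread = 7
target = np.array([5.0, 7.0])

milk, bread = np.linalg.solve(A, target)  # Gaussian elimination under the hood
print(milk, bread)  # milk = 3, bread = 1 hits both targets
```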
DwnVoteHoneyPot 13 hours ago [-]
Your assumption worked for me... I've seen Gaussian elimination before (but not the linear algebra), which gave me an idea of what we were doing.
barrenko 10 hours ago [-]
Or something to the tune of "what does it mean that we can eliminate", which is still unclear to me. But a lovely article - the way you (OP) introduce the column perspective is really helpful for a novice such as myself.
+ there are many textbooks on LA. Not a lot of them introduce stuff in the same order or in the same manner. I think that's part of why LA is difficult to teach, and difficult to comprehend, and maybe there is no unique way to do it, so we kinda need all the perspectives we can get.
great_wubwub 7 hours ago [-]
This is clear and useful but I wish you'd picked different example numbers. Using 1 and 2 for both bread and milk makes it harder to look at the matrix form and immediately see whether a 1 in a matrix is the bread 1 or the milk 1. If you could use 1,2,3,4 instead of 1,2,1,2 it would make things much clearer.
moron4hire 5 hours ago [-]
I agree with this critique because, with learning linear algebra, there are a lot of numbers flying around and the order of them is very important. This is why I like to use the prime sequence for my example numbers, because you can also see where they contributed to results of multiplication operations.
timeinput 1 hours ago [-]
Agreed completely. Whenever I need random example sequences, it's often sequences of primes, or some subset like even-indexed primes (meaning 2, 5, 11, ...) mixed with odd-indexed primes (3, 7, 13, ...) when dealing with complex numbers, or every fourth if I want two sequences of complex numbers. The only trouble is they do start getting pretty large.
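To make the traceability concrete: with distinct prime entries, every product in a matrix-vector multiply has a unique factorization, so each term in the result can be traced back to the row and column that produced it (numbers here are arbitrary):

```python
import numpy as np

# Distinct primes everywhere: each product like 3*13 = 39 appears
# nowhere else, so you can see exactly where every term came from.
A = np.array([[2, 3],
              [5, 7]])
x = np.array([11, 13])

y = A @ x
print(y)  # [2*11 + 3*13, 5*11 + 7*13] = [61, 146]
```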
great_wubwub 3 hours ago [-]
ooh, yeah, prime numbers is an even better idea.
egonschiele 3 hours ago [-]
Agreed, I need to make this part less confusing
lackoftactics 1 days ago [-]
Aditya Bhargava did it again. I have to say I am a fan already from the old days of Grokking Algorithms.
egonschiele 24 hours ago [-]
Thank you! I loved writing that book.
MehdiHK 14 hours ago [-]
One of my favorite books! Any plans to make this series a book as well? (Will be an instant buy for me)
egonschiele 5 hours ago [-]
I'll definitely publish more chapters on substack! I'd love to do a full book on LA, but it depends on if I'll have time.
As an aside, Avro.im looks awesome!
cykill 3 hours ago [-]
I know this is going to be super controversial, but I genuinely find illustrations of mathematical concepts below a minimum threshold of complexity totally useless and frequently detrimental.
Below a certain level of complexity, the human brain is much faster and more efficient operating on abstract symbols like 'x' and 'y'. You can solve equations and figure things out in a fraction of the time it takes you to visualize bananas, goats, coins, bread, milk, etc.
Visualizations have a role in developing intuitions about complex structures, such as what a matrix does to a vector or what cosine similarity means, and so on.
But in recent years, everyone and their neighbor has suddenly assumed that visualizing the number 1 or 2 in terms of everyday objects somehow helps learning. It doesn't.
egonschiele 3 hours ago [-]
Everyone is different! I personally find examples and visuals a very important part of teaching.
> But in recent years
Just to expand on this a bit: I have been teaching this way since at least 2016, when I published a book on algorithms called Grokking Algorithms. It is an illustrated guide to algorithms. If you didn't like this post, I imagine you won't like the book either :)
I think the milk and bread is just a helpful real world example of how an object might contain two numbers that need to be solved for simultaneously (carbs and protein). It's more of a why than a how.
Here is an interview I did with Corey Quinn where I talk more about my teaching philosophy: https://www.youtube.com/watch?v=lZFvTTgR-V4
vonnik 23 hours ago [-]
I really like this, and I think one way to make it even more clear would be to use other variable letters to represent breads and milks, because their x’s and y’s somehow morph into the x’s and y’s that represent carbs and protein in the graph.
egonschiele 16 hours ago [-]
Agreed, something about the variables is confusing, I need to think about what to change there!
RyanOD 22 hours ago [-]
This is nice. Until I took an actual semester of it in college, linear algebra was a total mystery to me. Great job.
For those unfamiliar with vectors, it might be helpful to briefly explain how the two vectors (their magnitude and direction) represent the one bread and one milk and how vectors can be moved around and added to each other.
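For instance, a minimal sketch with hypothetical (carb, protein) vectors for one serving of each (these numbers are made up, not the article's):

```python
import numpy as np

# Hypothetical (carb, protein) vectors for one serving of each food.
milk = np.array([1.0, 2.0])
bread = np.array([2.0, 1.0])

combined = milk + bread           # tip-to-tail vector addition
magnitude = np.linalg.norm(milk)  # length (magnitude) of the milk vector
print(combined, magnitude)        # [3. 3.] and sqrt(5) ≈ 2.236
```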
lkirkwood 14 hours ago [-]
I wish there was more of this in the world. Educational math content is very hard to do well. Great stuff!
xwowsersx 1 days ago [-]
This is great. I really appreciate visual explanations and the way you build up the motivation. I'm using a few resources to learn linear algebra right now, including "The No Bullshit Guide to Linear Algebra", which has been pretty decent so far. Does anyone have other recommendations? I've found a lot of books to be too dense or academic for what I need. My goal is to develop a practical, working understanding I can apply directly.
i_don_t_know 10 hours ago [-]
I’ve really enjoyed this book:
Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares
https://web.stanford.edu/~boyd/vmls/
Ok, boy, I'm also reviewing LinAlg textbooks as we speak. Coming in with a similar interest for ML / AI.
I've done math on Khan Academy up to linear algebra, with other resources / textbooks / etc. depending on the topic.
People will recommend 3B1B, Strang (MIT OCW Lin Alg lessons). For me the 3B1B is too "intuitionist" for a first serious pass, and Strang can be wonderful but then go off on a tangent during a lecture that I can't follow, it's a staple resource that I use alongside others.
LADR4e is also nice but I can't follow the proofs there sadly (yet).
There is also 'Linear Algebra Done Wrong', as well as the Hefferon book, which all end up being proof-y quite quickly. They seem like they'll be good for a second / third pass at linear algebra.
Side note - for a second or a third pass in LA, it seems there is such a thing as 'abstract linear algebra' as a subject, and the textbooks there don't seem that much harder to follow than the "basic" linear algebra ones designated for a second pass.
I've gotten off to the best start with the ROB101 textbook (https://github.com/michiganrobotics/rob101/blob/main/Fall%20...), up until linear dependence / independence, alongside the MIT Strang lectures. ROB101 is nice as it deals with the coding aspect of it all, and I can follow it in my head since I'm used to the coding side of ML / AI.
I also have a couple of obscure Eastern European math textbook(s) for practice assignments.
Most lately I have been reviewing this course / book - https://www.math.ucdavis.edu/~linear/ (which has cool notes at https://www.math.ucdavis.edu/~linear/old), and getting a lot of mileage from https://math.berkeley.edu/~arash/54/notes/.
That's quite the list! How does this one compare? Anything you think is missing?
dawnofdusk 24 hours ago [-]
>My goal is to develop a practical, working understanding I can apply directly.
Apply directly... to what? IMO it is weird to learn theory (like linear algebra) expressly for practical reasons: surely one could just pick up a book on those practical applications and learn the theory along the way? And if in this process, you end up really needing the theory then certainly there is no substitute for learning the theory no matter how dense it is.
For example, linear algebra is very important to learning quantum mechanics. But if someone wanted to learn linear algebra for this reason they should read quantum mechanics textbooks, not linear algebra textbooks.
xwowsersx 24 hours ago [-]
You're totally right. I left out the important context. I'm learning linear algebra mainly for applied use in ML/AI. I don't want to skip the theory entirely, but I've found that approaching it from the perspective of how it's actually used in models (embeddings, transformations, optimization, etc.) helps me with motivation and retention.
So I'm looking for resources that bridge the gap, not purely computational "cookbook" type resources but also not proof-heavy textbooks. Ideally something that builds intuition for the structures and operations that show up all over ML.
blackbear_ 22 hours ago [-]
Strang's Linear algebra and learning from data is extremely practical and focused on ML
https://math.mit.edu/~gs/learningfromdata/
Although if your goal is to learn ML you should probably focus on that first and foremost, then after a while you will see which concepts from linear algebra keep appearing (for example, singular value decomposition, positive definite matrices, etc) and work your way back from there
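As a concrete illustration of working backward from SVD (a generic sketch, not an example from Strang's book):

```python
import numpy as np

# SVD is a workhorse in ML: PCA, compression, low-rank approximation.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation of A

# The factors reconstruct A exactly (up to floating point).
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
```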
xwowsersx 20 hours ago [-]
Thanks. I have a copy of Strang and have been going through it intermittently. I am primarily focused on ML itself and that's been where I'm spending most of my time. I'm hoping to simultaneously improve my mathematical maturity.
I hadn't known about Learning from Data. Thank you for the link!
imtringued 21 hours ago [-]
Since you're associating ML with singular value decomposition, do you know if it is possible to factor the matrices of neural networks for fast inverse Jacobian products? If this is possible, then optimizing through a neural network becomes roughly as cheap as doing half a dozen forward passes.
blackbear_ 20 hours ago [-]
Not sure I am following; typical neural network training via stochastic gradient descent does not require Jacobian inversion.
Less popular techniques like normalizing flows do need that but instead of SVD they directly design transformations that are easier to invert.
imtringued 8 hours ago [-]
The idea is that you already have a trained model of the dynamics of a physical process and want to include it inside your quadratic programming based optimizer. The standard method is to linearize the problem by materializing the Jacobian. Then the Jacobian is inserted into the QP.
QPs are solved by finding the roots (aka zeroes) of the KKT conditions, basically finding points where the derivative is zero. This is done by solving a linear system of equations Ax=b. Warm starting QP solvers try to factorize the matrices in the QP formulation through LU decomposition or any other method. This works well if you have a linear model, but it doesn't if the model changes, because your factorization becomes obsolete.
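The factor-once-solve-many idea in miniature (a toy sketch of the warm-start principle, not an actual QP/KKT solver; the matrix here is made up):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor A once, then reuse the factorization to solve A x = b
# cheaply for many right-hand sides - the same economy a warm-started
# QP solver relies on, and what breaks when A itself changes.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # stands in for a fixed KKT-style matrix
lu, piv = lu_factor(A)       # O(n^3) once

for b in (np.array([1.0, 2.0]), np.array([3.0, 5.0])):
    x = lu_solve((lu, piv), b)  # O(n^2) per solve
    print(np.allclose(A @ x, b))  # True each time
```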
egonschiele 23 hours ago [-]
> My goal is to develop a practical, working understanding I can apply directly
Same, and I think ML is a perfect use case for this. I also have a series for that coming.
ngriffiths 22 hours ago [-]
I feel like it's obligatory to also drop a link to the 3blue1brown series on linear algebra, for anyone interested in learning - it is a step up from what's in this post, but these videos are brilliant and still super accessible:
https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...

Highly recommended!
One of my favourite internet things is seeing other channels, for instance Reducible (https://www.youtube.com/@Reducible), use the framework. Everyone has their own special take on it, and it's so awesome that Grant made it open source!
As always with these types of things, it starts off well and I think "wow! finally someone is explaining math in a simple and straightforward way I can understand!". And once again, they already lost me at Gaussian elimination.
suryajena 24 hours ago [-]
That "Bam!" thing just brought Josh Starmer to mind. Anyone remember his book with the illustrated ML stuff? I used to watch his YouTube channel too. I really dig these kinds of explainers; they make learning so much more fun.
dylan604 19 hours ago [-]
"Aside: Another solution for the above is 23 pennies. Or -4 nickels + 43 pennies."
This is where the math nerds just can't help themselves, and I'm here for it. However, these things drive me crazy at the same time. You cannot have -4 nickels. In pure math with only x and y, sure those values can be negative. But when using real world examples using physical objects, no, you cannot have a negative nickel. Maybe you owe your mate the value of 4 nickels, but that's outside the scope of this lesson. Your negative nickels are not in another dimension (because again, the math works that way). You want to help people understand math with real world concepts but then go and confuse things with pure math concepts. And these negative nickels are still not even getting into imaginary nickels territory like you have square root of -4 nickels.
airstrike 16 hours ago [-]
Doesn't that help rule out that other solution in favor of the reasonable one?
oatsandsugar 1 days ago [-]
That's really intuitive, especially your description of column notation. Excited to read your other guides!
Also, HT to your user name! Egon Schiele is one of my favorite artists! Loved seeing his works at the Neue in NYC.
egonschiele 15 hours ago [-]
Thanks! Obviously one of my favorite artists too :)
Miserlou57 18 hours ago [-]
About 15 years ago I started an aggregator to accumulate/sort/filter the best instruction on various topics, kinda like Reddit for learning. This is such a perfect example of the kind of thing I hoped would filter to the top. Thinking about trying to redo it. Is there a use for this sort of thing in today's world?
PanoptesYC 9 hours ago [-]
An easily searchable platform with curated high quality guides would be a good place to start when trying to do anything. Guides aren't something I'd want to stumble on, like YC posts, but something I would be seeking out. Probably a top feature would be a robust tagging system/search engine rather than the social Reddit elements like karma, hot page, trending subs, etc. Would be cool!
coolandsmartrr 16 hours ago [-]
Yeah, I like this kind of content too. Do you still have the aggregator available online?
nkoren 23 hours ago [-]
A: this is cool, well done.
B: I miss scroll bars. I really, really miss scroll bars.
egonschiele 21 hours ago [-]
Are you on a Mac? System Preferences > Appearances > Show scroll bars > Always
Syntonicles 23 hours ago [-]
I see a scroll bar in Firefox and in Chrome...
hn_throw_bs 22 hours ago [-]
I don’t like these examples because IRL nobody does things this way.
Try actual problems that require you to use these tools and the inter-relationships between them, where it becomes blindingly obvious why they exist. Calculus is a prime example and it’s comical most students find Calculus hard because their LA is weak. But Calculus has extensive uses, just not for doing basic carb counting.
potbelly83 1 hours ago [-]
Honestly all these cute websites give people a false sense that they're actually learning something. The only way to learn this stuff is get one of the million good LA books out there and work through the problems. But that's hard, so people look for shortcuts.
ebbi 22 hours ago [-]
I really wish I had math taught to me like this at school. I feel like my life would have gone in a very different direction!
neosat 1 days ago [-]
Delightful explanation! A great example of how deep concepts can be made accessible and fun.
bfors 1 days ago [-]
Thank you, I'm planning on diving into linear algebra as an exercise to mitigate brain rot
maxvij 23 hours ago [-]
I’m not even into math, but I enjoyed reading this very much. Kudos to the author!
mparnisari 1 days ago [-]
I love this. Well, in general, I love illustrated explanations :)
mixmastamyk 1 days ago [-]
Did it end right when it says it will discuss the dot product?
egonschiele 24 hours ago [-]
Yep, that's going to be the next chapter!
mixmastamyk 23 hours ago [-]
Ok, please add that sentence because I spent two minutes looking everywhere for the next paragraph.
thunkle 14 hours ago [-]
How do I get to chapter 2!
egonschiele 5 hours ago [-]
Coming soon! In about a month or so.
deepriverfish 22 hours ago [-]
This was my least favorite math subject in college, probably one of the most difficult classes I took.
adastra22 1 days ago [-]
Figures are blank on iOS Safari in dark mode.
egonschiele 24 hours ago [-]
Do you block images? Works for me on iOS Safari in dark mode. Every image also includes alt text (though I think the images add a lot).
adastra22 16 hours ago [-]
It has apparently been fixed.
thaumasiotes 11 hours ago [-]
> You can pick a point that sits on the first line to meet the carb goal. You can pick a point that sits on the second line to meet the protein goal. But you need a point that sits on both lines to hit both goals.
> How would a point sit on both lines? Well, it would be where the lines cross. Since these are straight lines, the lines cross only once, which makes sense because there’s only a single milk and bread combo that would get you to exactly five grams of carbs and seven grams of protein.
Geez. It's obvious that two straight lines can only cross once. It's not obvious that there's only one combination of discrete servings of bread and milk that can hit a particular target.
(It's so non-obvious that, in the general case, it isn't even true. Elimination might give you a row with all zeros.)
The fact that the solution is unique makes sense if you realize it must sit on these two lines. It makes far less sense to explain the fact that the two lines only cross once by channeling the external knowledge that the solution is unique. How did we learn that?
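That degenerate case is easy to demonstrate with made-up numbers: when one equation is a multiple of the other, elimination leaves a zero row and there is no unique solution.

```python
import numpy as np

# The second row is exactly 2x the first: the equations are dependent.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.matrix_rank(A))  # 1, not 2: no unique solution exists

# Elimination makes this visible: row2 - 2*row1 is all zeros.
print(A[1] - 2 * A[0])           # [0. 0.]
```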
gowld 22 hours ago [-]
Seems a bit premature? This is "linear algebra" in the sense of middle/high school algebra in linear equations. I suppose many more chapters are coming?
lelanthran 14 hours ago [-]
This is great! Do one for calculus please.
egonschiele 5 hours ago [-]
Good idea! I'll probably do one on calc shortly, since I want to build up to ML.
hollowturtle 23 hours ago [-]
As much as I like posts like this, I can't feel anything other than hate for the Substack platform. It just sucks, I'm sorry, but I can't understand how people can rely on that bloated web app. I just click around and it's so slow and buggy. I recently canceled a subscription because it kept signing me out, and the signup/signin experience just sucks.