NHacker Next

Rendered at 07:16:55 GMT+0000 (Coordinated Universal Time) with Vercel.


> “Mathematics is not just about proving theorems — it’s about a way to interact with reality, maybe.”

I like this one because, amid the current trend of pursuing AI theorem proving purely through formal systems, people rarely mention this point.

And this one:

> Just what will emerge from those explorations is hard to foretell. “That’s the problem with originality,” Granville said. But “he’s definitely got something pretty cool.”

When has that ever been a "problem" with originality? Haha, but I understand what he means.

> another asks whether there are infinitely many pairs of primes that differ by only 2, such as 11 and 13

Is it that so many questions were successfully proved or disproved that we're left with some that seem arbitrary? Or is there something special about the particular questions mathematicians focus on that a layperson has no hope of appreciating?

My best guess is that questions like the one above may not have any immediate utility, but could at any time (even after hundreds of years) become vital to solving some important problem that generates huge value through its applications.

Some questions that seemed unimportant, like "how hard is it to factor big prime numbers?" or "what kind of mathematical structure describes composed rotations of a sphere?", turned out to be way more important than they seemed. The former is at the root of a substantial amount of cryptography (either directly, in e.g. RSA, or indirectly, in e.g. elliptic curve cryptography), and the latter turns out to form the foundation for quantum mechanics; neither application was remotely in the minds of the people originally considering either problem.

Seemingly simple but practically intractable problems like the Collatz conjecture, twin primes, etc. tell us that there's something we don't know about relatively simple structures, and the hope is that by studying those simple structures we might discover something new that is more generally applicable. The interconnections in math tend to form an extremely dense graph, so these long-unsolved problems tend to have lots of established connections to other questions. For example, the Collatz conjecture would tell us something about computability theory: the next Busy Beaver number, about which we know little, turns out to depend on Turing machines that encode it (alongside other very similar problems).
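The Collatz conjecture is a good illustration of how simply such a problem can be stated. A minimal sketch of the iteration (the unproven claim is that it terminates for every starting value):

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map until n reaches 1.

    The conjecture -- unproven! -- is that this loop terminates
    for every positive integer n.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 famously takes a long, erratic path before collapsing to 1
print(collatz_steps(27))  # 111
```

Three lines of arithmetic, and yet nobody can prove the loop always exits -- which is exactly the point the comment above is making.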

Mathematicians focus on esoteric stuff too; it usually doesn't make it into the popular press. "Is the fifth fundamental group of the 19-dimensional Pulasky-Robinson manifold of type III infinite?" is the sort of question that comes up all the time but never makes it onto the front page of HN. Questions like "what's the smallest uncomputable busy beaver number" or "is the twin prime conjecture true" are the ones you'll hear about because they're the ones where the question makes some sense to a layperson.

Prime numbers are of course trivial to factor. Big composite numbers are much harder. Checking whether an unknown number is prime or composite is nontrivial but fairly fast.

https://en.wikipedia.org/wiki/Landau%27s_problems
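That asymmetry is easy to see in code. Below is a sketch (not production crypto): a deterministic Miller-Rabin test with a base set known to be sufficient for 64-bit integers answers "prime or composite?" quickly, while the naive trial-division factorizer has to grind up to the square root of n:

```python
_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin; this base set suffices well past 2^64."""
    if n < 2:
        return False
    for p in _BASES:
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def factor(n: int) -> list[int]:
    """Naive trial division -- fine for small n, hopeless for big semiprimes."""
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(is_prime(2**61 - 1))  # True: a Mersenne prime, checked almost instantly
print(factor(2**20 - 1))    # [3, 5, 5, 11, 31, 41]
```

Primality testing scales to thousands of digits; trial division (and even the best known factoring algorithms) does not, and that gap is what RSA leans on.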

So we do know that there are 100,000,000,000! primes that are equidistant from one another, which is neat.

You can find relationships between ideas or topics that seem completely unrelated. For instance, even perfect numbers and Mersenne primes are in 1:1 correspondence, so a proof that either set is infinite (or finite) immediately settles the same question for the other. There's little to no intuitive relationship between these ideas, but the fact that they're linked is somewhat humbling - a fun quirk in the fabric of the universe, if you will.
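That 1:1 mapping is the Euclid-Euler theorem: an even number is perfect exactly when it has the form 2^(p-1) * (2^p - 1) with 2^p - 1 a Mersenne prime. A small sketch of the correspondence:

```python
def mersenne_to_perfect(p: int) -> int:
    """Map an exponent p (with 2^p - 1 prime) to its even perfect number."""
    return 2 ** (p - 1) * (2**p - 1)

def is_perfect(n: int) -> bool:
    """Check n equals the sum of its proper divisors (naive O(n) scan)."""
    return n == sum(d for d in range(1, n // 2 + 1) if n % d == 0)

# 2^p - 1 is prime for p = 2, 3, 5, 7 (Mersenne primes 3, 7, 31, 127)
for p in (2, 3, 5, 7):
    n = mersenne_to_perfect(p)
    print(p, n, is_perfect(n))  # yields 6, 28, 496, 8128 -- all perfect
```

Whether this list of exponents goes on forever is exactly the open question: infinitely many Mersenne primes would mean infinitely many even perfect numbers, and vice versa.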

[1] https://en.wikipedia.org/wiki/A_Mathematician's_Apology

G. H. Hardy, Eureka, issue 3, Jan 1940

Knowing that history illuminates the context and importance of problems like the above, but it makes for a long, taxing, and sometimes boring read for the unmotivated, unlike the sexy "quantum blockchain intelligence" blurbs or the "grand unification of mathematics" silliness in pure math. So, few popularizations care to trace the historical trail from the basics to the state of the art.

What does this mean? These problems are like a Busy Beaver analogue for formal mathematical systems: solving one in effect settles entire classes of lower-complexity problems simultaneously (similar to how knowing BB(5) decides the halting problem for every Turing machine of at most five states).

If you are fascinated by primes, then you just want to know the answer, independent of any application.

If you could prove the twin prime conjecture, that means you have figured out something about prime numbers. If you found a pattern or formula for figuring out the distances between twin primes, that means you're able to identify certain primes.

And maybe that formula becomes the basis for a formula for the distance between primes in general. And once you have that formula, you have a simple way to test for primes.

And I'm not saying that is happening or will happen or is even possible, but that would be a reason to study proofs of the twin prime conjecture.
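Either way, the naive starting point is easy to write down. A brute-force enumeration of twin pairs (a sketch for intuition, not anyone's research method) -- the conjecture says this list never stops growing as the bound increases:

```python
def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

def twin_pairs(limit: int) -> list[tuple[int, int]]:
    """Consecutive primes that differ by exactly 2."""
    ps = primes_up_to(limit)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]

print(twin_pairs(50))  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
```

Checking any finite bound is mechanical; proving the pairs never run out is the part that has resisted everyone.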

So picking off little morsels of the bigger problem is kind of the first step toward climbing this metaphorical mountain.

You could almost equally ask why physicists are so obsessed with elementary particles.

the Riemann Hypothesis is an example of this. it starts from a pair of observations:

1. the sum of the logarithms of the prime numbers less than n approximates the graph of y = x to absurd precision. there's no obvious reason why this should be true, but the result just kind of falls out if you attempt to estimate the sum, even in really crude ways.

2. the sum of the reciprocals of the squares was calculated by Euler centuries ago. the way he solved the problem was to connect two different ways of calculating sin(x) -- the Taylor series and a new idea he developed which you may have tried in a calculus class: he wrote down sin(x) as an infinite product over its roots. one result that falls out of this comparison between the infinite sum and the infinite product is that you can factor sums like sum(1/n^2) over all the integers into a product over the reciprocals of the primes.

this second fact leads very directly to a proof of the first through some clever algebra but it's far more general. in fact, we not only get exact values for sum(1/n^2), but exact values for all expressions of the form sum(1/n^s) where s is a positive even integer -- odd powers remain an unsolved problem to this very day. in fact, these sums are hiding in the Taylor expansion of sin(x). so somehow, we've connected a seemingly arbitrary question about how to take a certain kind of sum of the integers, connected it to a product of primes, and tied both to an analytical function -- sin(x). already bizarre.
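The sum-equals-product connection described above can be checked numerically: the partial sum of 1/n^2, Euler's product over primes of 1/(1 - p^-2), and pi^2/6 all home in on the same value (a quick sketch, truncating both at arbitrary bounds):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division -- fine for the small primes used here."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Partial sum of 1/n^2 over the integers...
partial_sum = sum(1 / n**2 for n in range(1, 100_001))

# ...and Euler's product over the primes: prod 1/(1 - p^-2)
euler_product = 1.0
for p in range(2, 10_000):
    if is_prime(p):
        euler_product *= 1 / (1 - p**-2)

print(partial_sum)        # ~ 1.64492...
print(euler_product)      # ~ 1.64493...
print(math.pi**2 / 6)     # 1.6449340668...
```

Unique factorization is what makes the two truncations agree: expanding each geometric factor 1/(1 - p^-2) and multiplying out reproduces every 1/n^2 exactly once.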

but it gets even stranger. if you let s vary as a complex number, and not just as a real number > 1, this very same approach lets you calculate the sums for negative even powers as well. remember, we're calculating sum(1/n^s) -- how can we get any result but infinity for sum(1/n^-2), i.e. sum(n^2)? more perplexing still, the value you can calculate from algebra and calculus over the complex numbers, following Euler's method and Riemann's extension of it, is 0 for every negative even power. this is also where the notion that the sum of the integers is -1/12 comes from. this is the Riemann Zeta function.

so now we've connected a seemingly arbitrary sum of the integers to an analytic function over the complex numbers via the prime numbers. and stranger still, if we connect this back to the first point, we discover that this Zeta function tells us the order of the error term on the estimate for sum(log p, p < x). remember, this function looks like the graph of y = x. if every nontrivial zero of the Zeta function lies on the line Re(s) = 1/2, the error term is of order x^(1/2) (up to log factors), and we can even calculate the constant factors. this pins down all the prime numbers. it in fact gives you an exact formula to calculate them, not as a recursive function, but directly. all because some asshole in the 1600s posed a problem no one could answer until Euler: what's sum(1/n^2)?
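For reference, the identities that comment walks through, written out as standard statements (a summary, not the commenter's derivation):

```latex
% For Re(s) > 1, the Dirichlet series and the Euler product agree:
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
        \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}

% Special values and zeros reached by analytic continuation:
\zeta(2) = \frac{\pi^{2}}{6}, \qquad
\zeta(-2k) = 0 \quad (k = 1, 2, \dots), \qquad
\zeta(-1) = -\tfrac{1}{12}
```

The Riemann Hypothesis is the claim that every zero other than the trivial ones at the negative even integers has real part exactly 1/2.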

so this is why prime numbers fascinate mathematicians. nothing we understand about them is obvious. but being able to answer seemingly arbitrary questions about them unlocks whole fields of mathematics. and new techniques in completely unrelated fields of mathematics suddenly answer deep questions about the structure of the natural numbers.

if this topic interests you, I highly recommend this series of videos from zetamath: https://www.youtube.com/watch?v=oVaSA_b938U&list=PLbaA3qJlbE... -- he walks you through the basic algebra and calculus that lead to all of these conclusions. I can't recommend this channel enough.

If anybody has a learning path or a primer recommendation for modular forms (assuming formal math education), I'd be very interested in reading more about it.

But as such, I doubt it can be made more accessible to everybody.

Diamond, Shurman, A First Course in Modular Forms, Springer GTM vol. 228.

Next, and perhaps I shouldn't be suggesting it, Serre's A Course in Arithmetic. Despite its reputation for terseness, it is a great book by one of history's great mathematicians and worth every bit of sweat and tears spent over its short number of pages.

Yet another way is to watch the lectures by Richard Borcherds (who won the Fields Medal for the moonshine conjecture) on YouTube.

Finally, since this is Hacker News, check out William Stein's Modular Forms: A Computational Approach (PDF available online).

This summary is unsatisfying.

Ah, here it is in the clear: https://arxiv.org/abs/2312.03566