> We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.
While this is a noble goal, it seems obvious that this isn't how it usually goes.
For instance, "free market" is often used as a dogma against companies that are actively harmful to society, as "globalization" might be.
An unstoppable force, so any form of opposition is "luddite behavior".
Another one is easier transport and remote communication, that generally broke down the social fabric.
Or social media wreaking havoc among teen's minds.
From there, it's easy to see why the technological system might be seen as an inherent evil.
In Erewhon (1872), Butler already described the technological system as a force that human society could no longer contain once it tolerated it.
There are already many companies persecuting their employees for not using AI enough, even when the employee's objection is that the quality of its output is not good enough for the work at hand, rather than anything ideological.
I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".
cowpig 3 hours ago [-]
> "free market" is often used as a dogma against companies that are actively harmful to society
This is a predominantly America-specific piece of propaganda, and it's pretty recent.
Adam Smith's ideas are primarily arguments against mercantilism (e.g. things like using tariffs to wield self-interested state power), something he showed to be against the common good. The "invisible hand" concept is used to show how self-interested action can, under conditions of *competitive markets*, lead to unintentional alignment with the common good.
Obviously that's a significant departure from the way it's commonly used today, where Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
But the history of this is very Cold War-influenced, where "free markets" were politically positioned as alternatives to the USSR's "planned economy", and slowly pushed to depart further and further from Adam Smith's original argument about moral philosophy.
abdullahkhalids 2 hours ago [-]
Economic behavior is inherently game theoretic - agents take various actions and get some positive/negative reward as a result. Whether an agent's reward is positive or negative and of what magnitude, depends on the strategies employed by all agents. If some agents adopt new strategies, the reward calculus for everyone involved can completely change [1].
Over the past few centuries, countless new economic structures and strategies have been discovered and practiced. The rewards for the same action today and in the past can be completely different due to this.
So to me, if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad, it's simply not worth listening to them, unless someone reconfirms that the old finding still holds with the latest range of strategies. In that case, the credit and citation goes to that new someone, not the ghosts of the past.
[1] A good interactive demo https://ncase.me/trust/
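To make that concrete, here is a minimal sketch of the point, in the spirit of the iterated games the linked demo walks through. The payoff numbers and the two strategies are standard prisoner's-dilemma choices assumed purely for illustration; nothing below is taken from the comment or the demo itself.

    # Payoffs are standard prisoner's-dilemma values (assumed for illustration):
    # (my move, their move) -> my reward.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponents_last_move):
        # Copy whatever the opponent did on the previous round.
        return opponents_last_move

    def always_defect(_opponents_last_move):
        return "D"

    def total_reward(my_strategy, their_strategy, rounds=10):
        # Iterated play: each strategy sees only the opponent's previous move.
        my_last, their_last = "C", "C"  # both treated as having cooperated before round 1
        score = 0
        for _ in range(rounds):
            my_move = my_strategy(their_last)
            their_move = their_strategy(my_last)
            score += PAYOFF[(my_move, their_move)]
            my_last, their_last = my_move, their_move
        return score

    # Same agent, same strategy -- a completely different reward once the agents
    # around it switch strategies:
    print(total_reward(tit_for_tat, tit_for_tat))    # 30: mutual cooperation pays
    print(total_reward(tit_for_tat, always_defect))  # 9: the payoff collapses

The same tit-for-tat agent drops from 30 to 9 without changing anything itself, which is exactly the "reward calculus for everyone involved can completely change" point.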
> if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad
As you point out, it is all game theory.
But things that arrange for the game to be more beneficial to everyone, that align our interests more, deserve to be called "good", regardless of their inability to universally do so.
The latter would be an impossible bar for anything.
Where I find things frustrating is when someone thinks that because something is "good", it somehow becomes "enough". (Think, capitalized versions of different economic schools of thought.)
schmidtleonard 2 hours ago [-]
> arguments against mercantilism
It has been funny to watch the rise of "China is beating us" rhetoric against the steady backdrop of "mercantilism is obsolete/bad" dogma, because the elephant in the room is that China has been running a textbook mercantilist playbook.
naasking 1 hours ago [-]
> Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
layer8 16 minutes ago [-]
How would a bad monopoly be likely to be taken down if not by government intervention?
mrcincinnatus 43 minutes ago [-]
Good for whom, exactly?
billiam 27 minutes ago [-]
This seems like a classic straw man argument. Plutocratic oligarchs have been making the argument that private monopolies are better than representative democracy at basically any societal function for decades without any actual data.
vonneumannstan 58 minutes ago [-]
>Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
What? Why is the first alternative poor government rather than multiple competing companies? When was the last time a monopoly was actually broken up in the US? AT&T/Bell, about 40 years ago? lol
Izikiel43 3 hours ago [-]
Globalization was great for poor countries, not so much for developed economies.
js8 2 hours ago [-]
No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.
intended 2 hours ago [-]
Oh come now - globalization was great at the regional level.
It was not that great for sub groups within developed nations.
The original thesis assumed that people would be retrained into other, equally well-paying roles.
Turns out people can't easily retrain into new domains, and that led to underemployment.
nutjob2 2 hours ago [-]
This isn't correct. The deal is that the poor countries get development and increased employment, and the rich countries get lower prices. Generally speaking both types of countries get richer.
That some workers lost their jobs is a symptom of any change. I don't know why people always get upset about people losing their jobs. It's like death: if no one died, relatively few people would be born. If you resist job losses, you reduce overall employment and economic development.
chromacity 2 hours ago [-]
Are you serious? People get upset about losing jobs because they need jobs to pay their bills. Further, we often build our life identities around work; if you're a good car mechanic or a successful restaurant owner, you're proud of that. It's a part of you.
Having to repeatedly restart your career is risky, painful, and demoralizing. I have no problem seeing why people don't like that and why it can lead to populist backlash or even violent revolutions (as it did in the past).
By the way, to address your closing comment: people don't like dying either and tend to get upset when others die?
shafoshaf 1 hours ago [-]
I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
The sad part is that industrializing societies have not been very good at reconciling the benefits with the costs. The benefits first go to a select few and only seep out to the masses slowly. Railroads in the US are a good example. The wealth accumulated by the Vanderbilts, Hills, and Harrimans did not get redistributed in any kind of equitable manner. However, everyday people did eventually gain a lot of benefit from those railroads through economic expansion. (None of which addresses the losses of the Native Americans, which should also be part of the equation.)
layer8 11 minutes ago [-]
My impression is that the transition is such an open-ended process that you can’t really call it that. It’s unclear if and when the challenges will be overcome.
km3r 2 hours ago [-]
Society is better off if we sacrifice one horse-and-buggy driver job for two engineering jobs. The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than prop up a job that's dying anyway.
palmotea 1 hours ago [-]
> Society is better off if we sacrifice one horse-and-buggy driver job for two engineering jobs.
That's a "statistic" you're pulling out of your butt, and it's doing a lot of work. No one ever knows if something like that will actually happen.
It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
> The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than prop up a job that's dying anyway.
But what actually happens is our free-market society doesn't give a shit. No meaningful retraining happens, no meaningful effort goes into cushioning the blow for the "horse and buggy driver." Our society (or more accurately, the elites in charge) go tell those harmed to fuck off and deal with it.
sendes 3 hours ago [-]
> We assert that artificial intelligence is a natural evolution of human tools.
While this is asserted nowhere in the paper but the abstract, a whiggish narrative about a genuinely unprecedented technology --one that can replace and supersede human "labour" altogether (one is reminded of The Evolution of Human Science by Ted Chiang)-- sounds naive at best, dangerous at worst.
jebarker 3 hours ago [-]
I don’t see why “natural evolution of human tools” implies “such that it can replace and supersede human labor altogether”. Can you clarify?
sendes 3 hours ago [-]
A common error in historical thinking is to see human tools essentially as a positive linear plot of progress against time. But these tools, until AI, had the common property of enhancing human cognition, because they couldn't do the thinking _for you_. AI can do just that, and for all the benefit it brings, seeing it simply as the next step in the "natural evolution of human tools" is alarmingly disarming coming from frontier thinkers.
lovelearning 2 hours ago [-]
> these tools until AI had the common property of being enhancing of human cognition, because they couldn't do the thinking for you
I have a different take, centered around this idea: Not everyone was into thinking about everything all the time even before AI. I'd say most people most of the time outsourced actual thinking to someone else.
1) Reading non-fiction books:
Not all books, even the non-fiction ones, necessarily require any thinking by the reader. A book that narrates history, for example, requires much less thinking than something like "The Road to Reality" or "Gödel, Escher, Bach."
Most of us outsourced the thinking and historical method to the authors of the history book and just passively consumed some facts or factoids. Some of us memorize and remember these factoids well, but that's not thinking, just knowledge storage.
Philosophically, what's the difference between consuming books this way and reading an LLM's output?
2) Reading research papers:
Most people don't read any research papers at all. No thinking there.
Most people don't head to some forum to ask about latest research either.
Also, researchers in most fields don't come out and do outreach regularly.
Indeed, an LLM may actually be the only pathway for a lot of people to get at least _some_ knowledge and awareness about latest research.
Those of us in scientific, engineering, humanities, healthcare fields may read some to many papers.
But only a small subset reads very critically, looking for data errors, inconsistencies, etc.
For most of us, the knowledge and techniques may be beyond our current understanding and possibly without any interest in understanding them in future either.
Most of us are just interested in the observations or conclusions or applications. Those may involve some thinking but also may not involve any thinking, just blind acceptance of the paper's claims and possible applications.
3) Coding:
Again, deep thinking is only done by a small set of programmers. Like the ones who write kernels, compilers, distributed algorithms, complex libraries.
But most are just passive consumers who read some examples online or ask stackoverflow or reddit for direct answers.
Some even outsource all their coding entirely to gig sites. Not much thinking there except pricing and scheduling.
What's the difference between that and asking an LLM or copying an LLM's answers? At least, the LLMs patiently explain their code, unlike salty SO users!
----
IMO, most people weren't doing much thinking even pre-AI.
Post-AI, it's true that some people who did do some thinking may reduce it.
But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.
sendes 2 hours ago [-]
> I'd say most people most of the time outsourced actual thinking to someone else.
Someone else being human, until now. That may change. That's the whole point!
But I concur with your general point on the upstream production of thinking and knowledge. Indeed, such elite thinkers are what economic historians refer to as "upper-tail human capital". What I am protesting is exactly that Terence Tao, being one of them, gives license to the kind of thinking that accepts AI as a simple tool that does not fundamentally break our relationship with technology.
> But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.
If only we keep thinking that thinking is a comparative advantage of our species, I suppose!
nutjob2 2 hours ago [-]
* For certain speculative definitions of AI
Zigurd 3 hours ago [-]
I'm glad I can still count on HN to come across the correct use of a lesser known definition of a word.
nutjob2 2 hours ago [-]
> supersede human "labour" altogether
For certain types of labor this has always been the case.
The idea that AI will entirely replace all, or most, human labor makes no sense and is just AI hype.
Like all technology before it AI will improve most people's lives.
palmotea 1 hours ago [-]
> Like all technology before it AI will improve most people's lives.
1. Let's be clear: what you're describing is faith.
2. And what are you smoking to assert "all technology before ... AI [improved] most people's lives"?
gradstudent 3 hours ago [-]
I skimmed the paper a couple of times, hoping to find the promised (from the abstract)
> pathway to integrating AI into our most challenging and intellectually rigorous fields to the benefit of all humankind.
There's very little insight here though. It seems mostly a retread of conversations we've been having in the academic community for a few years now. In particular, I was hoping to see some discussion of how we might restructure our educational institutions around this technology, when the machines rob students of the opportunity to develop critical thinking skills. Right now our best idea seems to be a retreat to oral and written examinations; an idea which doesn't scale and which ignores the supposed benefits of human+AI reasoning. The alternative suggestion I've seen is to teach prompt engineering, which seems (a) hard for foundational subjects and (b) again, seems to outsource much of the thinking to the AI, instead of extending the reach of human thought.
ak_111 22 minutes ago [-]
Wait, it seems like doing unscalable things - like face-to-face teaching/examination - is exactly the sort of thing that humanity can afford to do as it benefits from the surplus free time generated by AI efficiently doing the scalable things.
BDPW 3 hours ago [-]
Physical classrooms don't really scale either, is that really a fundamental problem?
bonoboTP 2 hours ago [-]
Yes. Tools like Khan Academy help lots of talented kids to progress in the curriculum beyond what's available in physical classrooms available to them.
lo_zamoyski 3 hours ago [-]
Indeed. Education isn't supposed to "scale". We've mucked around with education so much and subjected it to tech fad after tech fad that we hardly have anything resembling education.
Because this has been going on so long, most people's reference point for what constitutes "education" is simply off, mistaking "training" or something like that for it. But the purpose of education is intellectual formation, the ability to reason competently, and the comprehension of basic reality, which enables genuine intellectual freedom (there are moral presuppositions, too; immorality deranges the mind). This is what the classical liberal arts were about.
The very bare minimum criterion (and it is a very bare minimum) for someone to be able to claim to be educated is not only knowledge of their field, but knowledge of the intellectual nature, foundations, and basis of their field in the greater intellectual scope. I would not hold someone with only that bare minimum in especially high esteem vis-a-vis education, but even that bar is higher than what education today provides.
bonoboTP 2 hours ago [-]
There are simply not enough teachers who can provide such an ideal, imagined education, at least not for the current rate of teacher salaries (and it's very far off). The educational strategy has to scale to real people, real teachers and real students as they are in the flesh, not some ivory tower pipe dream. We've had decades of this "we should teach how to think, not what to think".
Alternatively, if you don't care about scale, as in rolling out a system to the population at large, then yeah, this kind of advanced education exists; it's just very selective and is found in advanced extracurriculars or obtained through private tutors.
lo_zamoyski 1 hours ago [-]
This also assumes that universal education is a sensible aim. I think that's doubtful and that it contributes to these sorts of burdens and waters down the quality of education in the process.
As a concrete example, for a few decades now, we've been pushing primary school students toward university education quite aggressively and broadly. It was quite common to scare students toward university by claiming that without a university degree, they would be flipping burgers at McDonald's. This, of course, is completely false, and it is disgraceful that such dishonest and manipulative tactics were used. Today, because of rising university costs and the dubious value of most university education, we're seeing this idea challenged at the level of the university. Gen Z's interest in trades has increased by something like 1500%. I don't see this as a negative. In Germany, for instance, there is a more balanced distribution across trades and university.
Now, I admit that the situation is a bit different in the case of primary education, but here, too, I think we do well to think in terms of reform rather than technology and patching up a pedagogically and administratively broken system. The American education system spends an inordinate amount of money on each student with little to show for it. If, for instance, those funds were allocated wisely, then a number of problems would likely go away or become smaller issues.
Of course, what does "allocate wisely" mean? Education systems require a principled grasp of what education is for. If you don't have a sound anthropological grasp of what it means to be human and how education is supposed to enable one's humanity and serve human persons, then you are in no position to run an education system or decide school curricula. I cannot stress this enough. Our education system today is very "pragmatist"; we're constantly told we're being prepared for a career and a job market. That's not education: it's job training. Of course, schools are quite mediocre as training facilities, because they're sort of a halfway house between training and whatever residue of classical education still lingers. So that's one distinction: training vs. education. Now, if we simply accept this distinction, we should ask: how should one organize training on the one hand and education on the other to enable each to be successful within its own circumscribed domain? And what if we keep things as local and decentralized as possible? I guarantee you would not see the inept system we have today.
So, with this...
> There are simply not enough teachers who can provide such an ideal, imagined education
...I agree, but again, my view is that at best we are buying time with these sorts of technological gimmicks. We're also social animals. We cannot keep isolating ourselves behind technology under the pretext of "practicality".
nutjob2 2 hours ago [-]
> when the machines rob students of the opportunity to develop critical thinking skills
This is a fundamental misunderstanding of human nature. Machines don't rob people of critical thinking skills, people do. Mostly people do it to themselves, often inheriting it from their parents or social environment.
GodelNumbering 3 hours ago [-]
> Today, unlike in the Luddites’ time, we are already seeing skilled workers replaced not with lower-wage human labor, but with AI.
To me this is the weakest claim of the article. This claim has been thrown around endlessly without proof.
https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
Software engineer job openings, for instance, are at a 2-year high (still far lower than the COVID dislocations, though), but arguably all enterprise AI was built or deployed in the last two years. We should have seen a crash in job openings if the AI job-replacement claim were correct.
This is something I've spent some time thinking about (personally written article, not AI slop): https://www.signalbloom.ai/posts/why-task-proficiency-doesnt...
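For what it's worth, here is a rough sketch of how one might pull that series and eyeball the two-year-high claim. The fredgraph.csv endpoint and the column handling are assumptions about FRED's public CSV export, not anything stated in the comment; treat it as a starting point rather than a verified script.

    # Sketch: download the cited Indeed software-development postings index
    # from FRED and compare the latest value against the trailing two years.
    import pandas as pd

    SERIES = "IHLIDXUSTPSOFTDEVE"  # series id from the FRED link above
    URL = f"https://fred.stlouisfed.org/graph/fredgraph.csv?id={SERIES}"  # assumed CSV export endpoint

    df = pd.read_csv(URL, parse_dates=[0])
    df.columns = ["date", "postings_index"]  # normalize whatever header FRED returns
    df["postings_index"] = pd.to_numeric(df["postings_index"], errors="coerce")  # FRED marks gaps with "."
    df = df.dropna()

    latest = df.iloc[-1]
    window = df[df["date"] >= latest["date"] - pd.DateOffset(years=2)]
    print(f"latest ({latest['date'].date()}): {latest['postings_index']:.1f}")
    print(f"two-year high: {window['postings_index'].max():.1f}")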
Is there a better illustration of the power of UX than the fact that a messaging chat interface was able to set free all of human knowledge from copyright, whereas a bittorrent client couldn't?
https://www.youtube.com/watch?v=zJvuaRVc8Bg
I enjoyed the human->depth vs AI->breadth discussion, and the waterline rising slowly to fill the 50 lowest-hanging Erdős problems but struggling on the next few.
anotherpaulg 4 hours ago [-]
Recorded 10 February 2026. Terence Tao of the University of California, Los Angeles, presents "Machine assistance and the future of research mathematics" at IPAM's AI for Science Kickoff.
dude250711 2 hours ago [-]
It's not "the age of AI", it's just a Slop Decade.
And the tools did not become "exponentially sophisticated": for one thing, the growth is logarithmic; for another, the improvements are questionable. But "pervasive" - yes, granted.
bluecheese452 4 hours ago [-]
Enough Terence Tao spam.
ancillary 2 hours ago [-]
So much of HN is half-baked anecdotes about and by LLMs or philosophizing from VCs who talked to an LLM about Rene Girard for twenty minutes or pop sci articles that appear to be posted so that some bored developer can read the abstract and one experiment and dunk on it. Tao is uniquely positioned as a mathematician who has made enormous contributions to many areas and is old enough to contextualize it all against the past and young enough to be open to its possible futures. More Tao spam sounds good to me!
mchinen 2 hours ago [-]
I haven't seen any negative sentiment toward Terence Tao before. Coming from outside the academic math sphere, genuinely curious if there's a real issue or if this comment is just spam itself.
myhf 32 minutes ago [-]
A big trend in AI spam is to take achievements in one field that could be called "AI" and use them as evidence of advancement in other fields that happen to be called "AI" (https://en.wikipedia.org/wiki/Package-deal_fallacy).
Tao has been doing a lot of demonstrations of using LLMs for search and translation by experts who already know enough about a field to judge whether generated text is valid or meaningful. Those are valid demonstrations, but they don't justify the LLM-as-intelligent-agent narrative being pushed by most of the reporting on the topic, so the whole situation reeks of payola.
tines 55 minutes ago [-]
This comment is spam. When Tao says something we should take it seriously.