They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
hansmayer 10 minutes ago [-]
> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
That is actually the case with a lot of bakeries these days. But there is one major difference: the baker can rely, with almost 100% certainty, on the form, shape and ingredients being exact to within a rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies for how to "best use the ovens", they don't claim to "vibe-bake" 10x more than they used to bake before, etc... The semi-automated ovens just effing work!
Now show me an LLM that even remotely provides this kind of experience.
0x3f 10 minutes ago [-]
A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact that's exactly what I would expect to happen at a tech company, ceteris paribus.
bojan 8 minutes ago [-]
On another note, if you have 100 engineers and you lay almost all of them off, keeping 5 super-AI-accelerated engineers, while your competitor keeps 50 of them, your competitor can still iterate 10x as fast. So laying people off still comes with the risk of falling behind.
turblety 9 minutes ago [-]
Maybe the bakery expands beyond just loaves of bread - different cakes, sandwiches - or maybe it expands delivery to nearby towns.
noemit 1 hours ago [-]
Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working in startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers, if they are curious and always enjoyed the learning part - they are now supercharged to learn new ways of building, and they are able to try things out and learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, IMHO is a minority.
tern 27 minutes ago [-]
I am solidly in this "curious" camp. I've read HN for the past 15(?) years. I dropped out of CS and got an art degree instead. My career is elsewhere, but along the way, understanding systems was a hobby.
I always kind of wanted to stop everything else and learn "real engineering," but I didn't. Instead, I just read hundreds (thousands?) of arcane articles about enterprise software architecture, programming language design, compiler optimization, and open source politics in my free time.
There are many bits of tacit knowledge I don't have. I know I don't have it, because I have that knowledge in other domains. I know that I don't know what I don't know about being a "real engineer."
But I also know what taste is. I know what questions to ask. I know the magic words, and where to look for answers.
For people like me, this feels like an insane golden age. I have no shortage of ideas, and now the only thing I have a shortage of is hands, eyes, and on a good week, tokens.
wartywhoa23 14 minutes ago [-]
Yet another wannabe systems engineer cheering on the robbery and job losses of real systems engineers.
sd9 3 minutes ago [-]
Calling somebody a wannabe systems engineer is unnecessarily antagonistic.
tern 2 minutes ago [-]
I know it's not anyone's fault exactly, but the current state of systems in general is an absolute shit show. If you care about what you do, I'd expect you to be cheering that we just might have an opportunity for a renaissance.
Moreover, this kind of thinking is incredibly backward. If you were better than me then, you can easily become much better than I'll ever be in the future.
kdheiwns 43 minutes ago [-]
Engineers will go back in and fix it when they notice a problem. Or find someone who can. AI will send happy little emojis while it continues to trash your codebase and bring it to a state of total unmaintainability.
javadhu 49 minutes ago [-]
I agree on the curiosity part. I have a non-CS background, but I learned to program just out of curiosity. That led me to build production applications which companies actually use, and this was before the AI era.
Now, with AI I feel like I have an assistant engineer with me who can help me build exciting things.
noemit 28 minutes ago [-]
I'm currently teaching a group of very curious non-technical content creators at one of the firms I consult for. I set up Codex for them, created the repo to have lots of hand-holding built in - and they took off. It's been 4 weeks and we already have 3 internal tools deployed, one of which eliminated so much of another team's busy work that they now have twice the capacity. These are all things 'real' engineers and product managers could have done, but just empowering people to solve their own problems is way faster. Today, several of them came to me and asked me to explain what APIs are (they want to use the Google Workspace APIs for something).
I wrote out a list of topics/keywords to ask AI about and teach themselves. I've already set up the integration in an example app I will give them, and I literally have no idea what they are going to build next, but I'm... thrilled. Today was the first moment I realized that maybe these are the junior engineers of the future. The fact that they have non-technical backgrounds is a huge bonus - one has a PhD in biology, one a master's in writing - and they bring so much to the process that a typical engineering team lacks. Thinking of writing up this case study/experience because it's been a highlight of my career.
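As an aside, the core of what they're about to learn - that an API call is just a structured HTTP request returning structured data - can be sketched in a few lines. This is a hypothetical endpoint and token, Python's requests assumed, not any real service:

    # Minimal illustration of "what an API is": send an HTTP request,
    # get structured data (JSON) back. Endpoint, token and field names
    # below are placeholders, not a real service.
    import requests

    API_BASE = "https://api.example.com/v1"     # hypothetical REST service
    TOKEN = "replace-with-a-real-access-token"  # e.g. an OAuth token for Google Workspace

    resp = requests.get(
        f"{API_BASE}/documents",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"limit": 10},
        timeout=30,
    )
    resp.raise_for_status()  # fail loudly on HTTP errors
    for doc in resp.json()["documents"]:
        print(doc["title"])

The Google Workspace APIs follow the same request/response pattern, with OAuth handling the token part.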
hansmayer 9 minutes ago [-]
> But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Because the instances of this happening are a) random and b) rare?
pydry 48 minutes ago [-]
>But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Are you serious? I've been hearing this constantly since mid-2025.
The gaslighting over AI is really something else.
I've also never seen jobs advertised before whose whole purpose was to lobby skeptical engineers about how to engage in technical work. This is entirely new. There is a priesthood developing over this.
brabel 35 minutes ago [-]
I wrote code by hand for 20 years. Now I use AI for nearly all code. I just can’t compete in speed and thoroughness. As the post says, you must guide the AI still. But if you think you can continue working without AI in a competitive industry, I am absolutely sure you will eventually have a very bad time.
pydry 15 minutes ago [-]
>I just can’t compete in speed and thoroughness
I certainly know engineers for whom this is true, but unfortunately they were never particularly thorough or fast to begin with.
I believe you can tell which way the wind is blowing by looking at open source.
Other than being flooded with PRs, high-profile projects have not seen a notable difference - certainly no accelerated enhancements. There has definitely been an explosion of new projects, though, most of dubious quality.
Spikes and research are definitely cheaper now.
kolinko 43 minutes ago [-]
You've been hearing that since mid-2025 because that's when it became true.
input_sh 51 minutes ago [-]
Quite frankly, if AI can write better code than most of your engineers "hundreds of times", then your hiring team is doing something terribly wrong.
Cthulhu_ 37 minutes ago [-]
Maybe. The reality of software engineering is that there are a lot of mediocre developers on the market and a lot of mediocre code being written; that's part of the industry, and the job of engineers working with other engineers and/or LLMs is quality control, through e.g. static analysis, code reviews, teaching, studying, etc.
input_sh 16 minutes ago [-]
And those mediocre engineers put their work online, as do top-tier developers. In fact, I would say that the scale is likely tilted towards mediocre engineers putting more stuff online than really good ones.
So statistically speaking, when the "AI" consumes all of that as its training data and returns the most likely answer when prompted, what percentage of developers will it be better than?
wartywhoa23 6 minutes ago [-]
These people also prefer plastic averaged-out images of AI girls to real ones.
The Average is their top-tier.
theshrike79 41 minutes ago [-]
The "most engineers" not "most engineers we've hired".
But also "most engineers" aren't very good. AIs know tricks that the average "I write code for my dayjob" person doesn't know or frankly won't bother to learn.
input_sh 28 minutes ago [-]
Even speaking from a purely statistical perspective, it is quite literally impossible for an "AI" that outputs the world's most average answer to be better than "most engineers".
In fact, it's pretty easy to conclude what percentage of engineers it's better than: all it does is consume as much data as possible and return the statistically most probable answer, so it's gonna be better than roughly 50% of engineers. Maybe you can claim it's better than 60% of engineers because bottom-of-the-barrel engineers tend not to publish their work online for it to be used as training data, but for every one of those you have a bunch of non-engineers who don't do this for a living putting their shitty attempts at getting stuff done with code online, so I'm actually gonna correct myself immediately and say it's about 40%.
The same goes for every other output: it's gonna make the world's most average article, the most average song in a genre and so on. You can nudge it to be slightly better than the average with great effort, but no, you absolutely cannot make it better than most.
theshrike79 15 minutes ago [-]
The thing that separates AI Agents from normal programmers is that agents don't get bored or tired.
For most engineers the ability might be there, but the motivation or willingness to write, for example, 20 different test cases checking that the 3-line bug you just fixed is fixed FOR SURE usually isn't there. You add maybe 1-2 tests, because they're annoying boilerplate crap to write, and create the PR. CI passes, you added new tests, someone will approve it. (Yes, your specific company is of course better than this and requires rigorous testing, but the vast majority isn't. Most don't even add the two tests as long as the issue is fixed.)
An AI agent will happily and without complaint use red/green TDD on the issue: create the 20 tests first, make sure they fail (as they should), fix the issue, and then check again that all tests pass. And it'll do it in 30 minutes while you do something else.
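To make that concrete, here is a minimal sketch of that red/green flow - hypothetical bug, file and test names, Python with pytest assumed:

    # test_discounts.py - written *before* the fix, so it fails first (red)
    import pytest
    from discounts import apply_discount  # hypothetical module containing the 3-line bug

    @pytest.mark.parametrize("price,pct,expected", [
        (100.0, 10, 90.0),    # ordinary discount
        (100.0, 0, 100.0),    # zero percent is a no-op
        (100.0, 100, 0.0),    # full discount
        (0.0, 50, 0.0),       # free items stay free
    ])
    def test_apply_discount(price, pct, expected):
        assert apply_discount(price, pct) == pytest.approx(expected)

    def test_rejects_negative_percentage():
        with pytest.raises(ValueError):
            apply_discount(100.0, -5)

    # discounts.py - the green step: the smallest change that makes the tests pass
    def apply_discount(price: float, pct: float) -> float:
        if pct < 0 or pct > 100:
            raise ValueError("pct must be between 0 and 100")
        return price * (1 - pct / 100)

The point isn't the code itself; it's that an agent will grind out the parametrized cases without getting bored, and the human's job shifts to deciding which cases actually matter.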
rel_ic 14 minutes ago [-]
This is kind of like saying a kid can never become a better programmer than the average of his teachers.
IMHO, the reasons not to use AI are social, not logical.
input_sh 7 minutes ago [-]
The kid can learn and become better over time, while "AI" can only be retrained using better training data.
I'm not against using AI by any means, but I know what to use it for: stuff where I can only do worse than half the population because I can't be bothered to learn it properly. I don't want to toot my own horn, but I'd say I'm definitely better at my niche than 50% of people. There are plenty of other niches where I'm not.
jwr 2 minutes ago [-]
Finally a take that I can agree with.
thefounder 8 minutes ago [-]
The issue is that you become lazy after a while and stop “leading the design”. And I think that’s ok because most of the code is just throwaway code.
You would rewrite your project/app several times by the time it's worth it to pay attention to "proper" architecture. I wish I had had these AIs 10 years ago so that I could focus on everything I wanted to build instead of becoming a framework developer/engineer.
roli64 47 minutes ago [-]
Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."
rl3 42 minutes ago [-]
Perhaps execution is cheap now and ideas aren't?
Personally I'm quite pleased with this inversion.
codemog 36 minutes ago [-]
It's kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running and doing work, and when you ask what the work is, you can never get a straight answer...
For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.
q3k 28 minutes ago [-]
The work is mysterious and important.
rimmontrieu 32 minutes ago [-]
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance and too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and the mental model for future projects, just like tons of regular devs have done in the past)?
v3xro 11 minutes ago [-]
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
duggan 43 minutes ago [-]
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
There's a baseline value in going fast.
Bukhmanizer 55 minutes ago [-]
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI every day for coding. But if someone so obviously puts this little effort into the work they put out into the world, I don't think I trust them to do it properly when they're writing code.
amelius 58 minutes ago [-]
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win.
It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
nbvkappowqpeop 55 minutes ago [-]
I'm just an old school programmer who loves writing code, and the recent AI developments have just taken the most fun part away from me.
kirito1337 33 minutes ago [-]
For real - I started learning programming in C/C++ in 2020 at 9, and when the AI bubble took off in 2023, it felt like I did it all for nothing.
CrzyLngPwd 37 minutes ago [-]
It sounds a bit no-true-Scotsman to me.
ChrisMarshallNY 59 minutes ago [-]
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes the AI catches it; sometimes it goes into the ditch and I need to call for a tow.
The big deal is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
noemit 37 minutes ago [-]
There are two types of engineers who use AI:
- Ones who see it generate something bad and blame the AI.
- Ones who see it generate something bad, revert it, and try to prompt better, with more clarity and guidance.
miningape 25 minutes ago [-]
- Ones who see it generate something bad and realise it'd be faster to just hand-fix the issues than babysit an LLM.
ChrisMarshallNY 29 minutes ago [-]
Three types:
- Ones that use it as a “pair partner,” as opposed to an employee.
Thanks for the implicit insult. That was helpful.
bambax 35 minutes ago [-]
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is that laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested in the topic at all, and think developers are interchangeable.
The other problem is: how do you develop those instincts when you're starting out, now that AI is a better junior coder than most junior coders? This is something we need to think hard about as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
jstanley 33 minutes ago [-]
I think the problem is overstated.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
jruz 27 minutes ago [-]
I find it really sad how stubbornly people dismiss AI as a slop generator.
I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it is absolutely worth it.
holyra 22 minutes ago [-]
Personally, I dismiss AI, mainly agentic AI, because of its environmental impact. I hope that one day everyone will be held accountable for it.
holyra 16 minutes ago [-]
What about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
bob1029 5 minutes ago [-]
I believe the orange site's consensus was that it's approximately one additional mini fridge or dishwasher's worth of consumption on average. You've got users who barely use 1k tokens per week; assuming it's all batched ideally, that's like running an LED floodlight for a minute or so. The other end of the spectrum can be pretty extreme in consumption, but it's also rare. Most people just use the ad-hoc stuff.
bitwize 1 hours ago [-]
The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.