NHacker Next
Agentic Engineering Patterns (simonwillison.net)
benrutter 2 hours ago [-]
I use AI in my workflow mostly for simple boilerplate, or to troubleshoot issues/docs.

I've dipped into agentic work now and again, but never been very impressed with the output (well, that there is any functioning output is insanely impressive, but it isn't code I want to be on the hook for maintaining).

I hear a lot of people saying the same, but similarly a bunch of people I respect saying they barely write code anymore. It feels a little tricky to square these up sometimes.

Anyway, really looking forward to trying some of these patterns as the book develops to see if that makes a difference. Understanding how other people really use these tools is a big gap for me.

pkorzeniewski 6 minutes ago [-]
One thing I rarely see mentioned is that often creating code by hand is simply faster (at least for me) than using AI. Creating a plan for AI, waiting for execution, verifying, prompting again etc. takes more time than just doing it on my own with a plan in my head. Creating something from scratch or doing advanced refactoring is almost always faster with AI, but most of my daily tasks are bugs or features that are 10% coding and 90% knowing how to do it.
panstromek 11 minutes ago [-]
> It feels a little tricky to square these up sometimes.

In my experience, this heavily depends on the task, and there's a massive chasm between tasks where it's a good and bad fit. I can definitely imagine people working only on one side of this chasm and being perplexed by the other side.

fnands 2 hours ago [-]
When was the last time you tried?

I think getting agents to do larger tasks was always very hit or miss, up until about the end of last year.

In the past couple of months I have found them to have gotten a lot better (and I'm not the only one).

My experience with what coding assistants are good for shifted from:

smart autocomplete -> targeted changes/additions -> full engineering

maccard 1 hour ago [-]
I’m not OP but every time I post a comment with this sentiment I get told “the latest models are what you need”. If every 3 months you are saying “it’s ready as long as you use the latest model”, then it wasn’t ready 3 months ago and it’s not likely to be ready now.

To answer your question, I’ve tried both Claude code and Antigravity in the last 2 weeks and I’m still finding them struggling. AG with Gemini regularly gets stuck on simple issues and loops until I run out of requests, and Claude still just regularly goes on wild tangents not actually solving the problem.

anon7000 13 minutes ago [-]
I don’t think that’s true. Claude Opus 4.5/4.6 in Cursor have marked the big shift for me. Before that, agentic development mostly made me want to just do it myself, because it was getting stuck or going on tangents.

I think it can shift (and is shifting) very rapidly. Everyone is different, and I'm sure models are better at different types of work (or styles of working), but it doesn't take much to make it too frustrating to use. Which also means it doesn't take much to make it super useful.

sergiosgc 28 minutes ago [-]
Have you tried it with something like OpenSpec? Strangely, taking the time to lay out the steps in a large task helps immensely. It's the difference between the behavior you describe and just letting it run productively for segments of ten or fifteen minutes.
techpression 11 minutes ago [-]
Agreed, it's strange; I'll just assume that the people who say this are building React apps. I still get so much "certainly, I should not do this in a completely insane way, let me fix that" … -400+2. It's not every time, and it is better than it was, but that's it.
benrutter 1 hour ago [-]
> When was the last time you tried?

Pretty recently (a couple weeks ago). I give agentic workflows a go every couple of weeks or so.

I should say, I don't find them abysmal, but I tend to work in codebases where I understand the code and the patterns really well. The use cases I've tried so far do sort of work, just not (yet, at least) faster than I'm able to actually write the code myself.

lumpilumpi 1 hour ago [-]
My experience is that the first iteration output from a single agent is not what I want to be on the hook for. What squares it for me with "not writing code anymore" is the iterative process to improve outputs:

1) Having review loops between agents (spawning separate "reviewer" agents) and clear tests / eval criteria improved results quite a bit for me.

2) Reviewing manually and giving instructions for improvements is necessary to have code I can own.

rsynnott 30 minutes ago [-]
Is that… actually faster than just doing it yourself, tho? Like, “I could write the right thing, or I could have this robot write the wrong thing and then nag it til it corrects itself” seems to suggest a fairly obvious choice.

I’ve yet to see these things do well on anything but trivial boilerplate.

birdfood 2 hours ago [-]
I was in the same boat as you until I saw DHH post about how he's changed his use of agents. In his talk with Lex Fridman his approach was similar to mine, and it really felt like a kernel of sanity amongst the hype. So when he said he'd changed his approach, I had another look. I'm using agents (Claude Code) every day now. I still write code every day too. (So does Dax Raad from OpenCode, to throw a bit more weight behind this stance.)

I'm not convinced the models can own a production codebase, and therefore engineers need to maintain their skills sufficiently to be responsible. I find agents helpful for a lot of stuff, usually heavily patterned code with a lot of prior art. I find CC consistently sucks at writing polars code.

I honestly don't enjoy using agents at all, and I don't think anyone can honestly claim they know how this is going to shake out. But I feel that by using the tools myself I have a much stronger sense of reality amongst the hype.
jkhdigital 1 hour ago [-]
I strongly agree with that last statement—I hate using agents because their code smells awful even if it works. But I have to use them now because otherwise I’m going to wake up one day and be 100% obsolete and never even notice how it happened.
yoaviram 1 hour ago [-]
Yesterday I wrote a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.

Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.

The marginal cost of code is collapsing. That single fact changes everything.

https://nonstructured.com/zen-of-ai-coding/

raincole 6 seconds ago [-]
> wrote

That word is doing some heavy lifting here. You understand why people flagged that post, right? It's painfully non-human. I'm not criticizing the use of LLMs, but I highly suggest you read Simon's posts. He's obviously a heavy AI user, but his blog posts aren't that inorganic, and that's why he became the new HN blog babe.

sltr 13 minutes ago [-]
The formal engineering disciplines are not defined by the construction vs design distinction so much as the regulatory gates they have passed and the ethical burdens they shoulder for society's benefit.

https://www.slater.dev/2025/09/its-time-to-license-software-...

hollowturtle 38 minutes ago [-]
We have the entire web built on technical debt, and LLMs mostly trained on it; what could go wrong? The cost will reside somewhere else, if not in the code.
hresvelgr 58 minutes ago [-]
> It is much closer to proper engineering.

I would not equate software engineering to "proper" engineering insofar as being uttered in the same sentence as mechanical, chemical, or electrical engineering.

The cost of code is collapsing because web development is not broadly rigorous, robust software was never a priority, and everyone knows it. The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.

Arkhaine_kupo 29 minutes ago [-]
> The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.

I think the externalities are being ignored. Having the time and money to train engineers is expensive. Having all your users' data stolen is a slap on the wrist.

So replacing those bad workers with AI is fine. Unless you remove the incentive to be fast instead of good, then yeah, AI can be good enough for some cases.

6LLvveMx2koXfwn 46 minutes ago [-]
Indeed, it's like those complaining that self-driving cars occasionally crash, when their crash rates are up to 90% lower than humans'...
mohsen1 3 hours ago [-]
I've experimented with agentic coding/engineering a lot recently. My observation is that software that is easily tested is perfect for this sort of agentic loop.

In one of my experiments I had the simple goal of "making Linux binaries smaller to download using better compression" [1]. Compression is perfect for this. It's easily validated (binary -> compress -> decompress -> binary), so each iteration should make a dent, otherwise the attempt is thrown out.

Lessons I learned from my attempts:

- Do not micro-manage. AI is probably good at coming up with ideas and does not need your input too much

- Test harness is everything; if you don't have a way of validating the work, the loop will go astray

- Let the iterations experiment. Let AI explore ideas and break things in its experiment. The iteration might take longer but those experiments are valuable for the next iteration

- Keep some .md files as scratch pad in between sessions so each iteration in the loop can learn from previous experiments and attempts
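The validation loop described above is easy to automate. A minimal sketch, using stdlib zlib as a stand-in for whatever compressor candidate the agent is iterating on (the corpus and scoring here are illustrative, not from the linked project):

```python
import zlib

def round_trip_ok(compress, decompress, payload: bytes) -> bool:
    # The hard gate: binary -> compress -> decompress -> binary,
    # bit-for-bit, or the candidate is rejected outright.
    return decompress(compress(payload)) == payload

def score_candidate(compress, decompress, corpus):
    total_in = total_out = 0
    for payload in corpus:
        if not round_trip_ok(compress, decompress, payload):
            return None  # broken candidate: throw the attempt out
        total_in += len(payload)
        total_out += len(compress(payload))
    return total_out / total_in  # compression ratio, lower is better

# Toy corpus standing in for real binaries.
corpus = [b"ELF" + b"\x00" * 500, bytes(range(256)) * 4]
ratio = score_candidate(zlib.compress, zlib.decompress, corpus)
print(f"compression ratio: {ratio:.3f}")
```

Each agent iteration either improves the ratio while passing the round-trip gate, or its work is discarded, which is exactly the "test harness is everything" point.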

[1] https://github.com/mohsen1/fesh

medi8r 2 hours ago [-]
You have to have really good tests, as it fucks up in strange ways people don't (I think because experienced programmers run loops in their brain as they code).

Good news: agents are good at open-endedly adding new tests and finding bugs. Do that. Also do unit tests and Playwright. Testing everything via web driving seemed insane pre-agents, but now it's more than doable.

CloakHQ 1 hour ago [-]
The test harness point is the one that really sticks for me too. We've been using agentic loops for browser automation work, and the domain has a natural validation signal: either the browser session behaves the way a real user would, or it doesn't. That binary feedback closes the loop really cleanly.

The tricky part in our case is that "behaves correctly" has two layers - functional (did it navigate correctly?) and behavioral (does it look human to detection systems?). Agents are fine with the first layer but have no intuition for the second. Injecting behavioral validation into the loop was the thing that actually made it useful.

The .md scratch pad between sessions is underrated. We ended up formalizing it into a short decisions log - not a summary of what happened, just the non-obvious choices and why. The difference between "we tried X" and "we tried X, it failed because Y, so we use Z instead" is huge for the next session.

Schlagbohrer 56 minutes ago [-]
What are you developing that technology for?
octoclaw 23 minutes ago [-]
[dead]
jkhdigital 1 hour ago [-]
Today I gave a lecture to my undergraduate data structures students about the evolution of CPU and GPU architectures since the late 1970s. The main themes:

- Through the last two decades of the 20th century, Moore’s Law held and ensured that more transistors could be packed into next year’s chips that could run at faster and faster clock speeds. Software floated on a rising tide of hardware performance so writing fast code wasn’t always worth the effort.

- Power consumption doesn’t vary with transistor density but varies with the cube of clock frequency, so by the early 2000s Intel hit a wall and couldn’t push the clock above ~4GHz with normal heat dissipation methods. Multi-core processors were the only way to keep the performance increasing year after year.

- Up to this point the CPU could squeeze out performance increases by parallelizing sequential code through clever scheduling tricks (and compilers could provide an assist by unrolling loops) but with multiple cores software developers could no longer pretend that concurrent programming was only something that academics and HPC clusters cared about.

CS curricula are mostly still stuck in the early 2000s, or at least it feels that way. We teach big-O and use it to show that mergesort or quicksort will beat the pants off of bubble sort, but topics like Amdahl’s Law are buried in an upper-level elective when in fact it is much more directly relevant to the performance of real code, on real present-day workloads, than a typical big-O analysis.
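For what it's worth, Amdahl's Law itself is a one-liner, which makes the curriculum gap all the stranger. A quick illustration of the standard formula (the numbers are just an example, not from the comment):

```python
def amdahl(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up
    s-fold and the remaining (1 - p) stays serial:
    speedup = 1 / ((1 - p) + p / s)."""
    return 1.0 / ((1.0 - p) + p / s)

# 95%-parallel code on 8 cores gets nowhere near 8x...
print(round(amdahl(0.95, 8), 2))    # 5.93
# ...and even infinite cores can never beat 1 / (1 - p) = 20x.
print(round(amdahl(0.95, 1e9), 2))  # 20.0
```

The serial fraction, not the core count, dominates, which is the sense in which it matters more for real present-day workloads than a typical big-O comparison.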

In any case, I used all this as justification for teaching bitonic sort to 2nd and 3rd year undergrads.

My point here is that Simon’s assertion that “code is cheap” feels a lot like the kind of paradigm shift that comes from realizing that in a world with easily accessible massively parallel compute hardware, the things that matter for writing performant software have completely shifted: minimizing branching and data dependencies produces code that looks profoundly different than what most developers are used to. e.g. running 5 linear passes over a column might actually be faster than a single merged pass if those 5 passes touch different memory and the merged pass has to wait to shuffle all that data in and out of the cache because it doesn’t fit.

What all this means for the software development process I can’t say, but the payoff will be tremendous (10-100x, just like with properly parallelized code) for those who can see the new paradigm first and exploit it.

ukuina 4 hours ago [-]
I find StrongDM's Dark Factory principles more immediately actionable (sorry, Simon!): https://factory.strongdm.ai/principles
eviluncle 1 minute ago [-]
Not sure there's anything to be sorry for, he literally wrote about it a few weeks ago:

https://simonwillison.net/2026/Feb/7/software-factory/

9wzYQbTYsAIc 3 hours ago [-]
I second that; sometimes it's defensibly worth throwing token fuel at the problem and validating as you go.
sd9 1 hour ago [-]
I've recently got into red/green TDD with Claude Code, and I have to agree that it seems like the right way to go.

As my projects were growing in complexity and scope, I found myself worrying that we were building things that would subtly break other parts of the application. Because of the limited context windows, it was clear that after a certain size, Claude kind of stops understanding how the work you're doing interacts with the rest of the system. Tests help protect against that.

Red/green TDD specifically ensures that the current work is quite focused on the thing that you're actually trying to accomplish, in that you can observe a concrete change in behaviour as a result of the change, with the added benefit of growing the test suite over time.
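The red/green flow in miniature: the failing spec exists first, and the agent's only job is to turn it green. A hedged sketch (`slugify` is an illustrative target invented for this example, not something from the thread):

```python
import re

# Step 1 (red): the spec is written before any implementation exists.
# Running it against nothing fails, which is the point: a concrete,
# observable behavior change is what "done" means.
def spec_slugify(slugify):
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Step 2 (green): the implementation you'd ask the agent for, written
# only to satisfy the spec above.
def slugify(text: str) -> str:
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

spec_slugify(slugify)  # green
print("green")
```

Because the spec predates the code, it also accretes: each feature leaves its tests behind, growing the suite over time as described above.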

It's also easier than ever to create comprehensive integration test suites - my most valuable tests are tests that test entire user facing workflows with only UI elements, using a real backend.

vessenes 39 minutes ago [-]
Red/green is especially good with claude because even now with opus 4.6, claude can throw out a little comment like “//Implementation on hold until X/Y/Z: return { true }” and proceed to completely skip implementation based on the inline skip comment for a longgg time. It used to do this aggressively even in the tests, but by and large red/green prompting helps immensely - it tells the agent “think of failing tests as SUCCESS right now” - then you’ll get lots of them.

I’ve always been partial to integration tests too. Hand coding made integration tests feel bad; you’re almost doubling the code output in some cases - especially if you end up needing to mock a bunch of servers. Nowadays that’s cheap, which is super helpful.

sd9 7 minutes ago [-]
Yeah, I've always _preferred_ integration tests, but the cost of building them was so great. Now the cost is effectively eliminated, and if you make a change that genuinely does affect an integration test (changing the text on a button, for example) it's easy to smart-find-and-replace and fix them up. So I'm using them a lot more.

The only problem is... they still take much longer to _run_ than unit tests, and they do tend to be more flaky (although Claude is helpful in fixing flaky tests too). I'm grateful for the extra safety, but it makes deployments that much slower. I've not really found a solution to that part beyond parallelising.

chillfox 2 hours ago [-]
Isn’t this pretty much how everyone uses agents?

Feels like it's a lot of words to say what amounts to: make the agent follow the steps we already know work well for building software.

tr888 2 hours ago [-]
For web apps, explicitly asking the agent to build in sensible checkpoints and validate at each checkpoint using Playwright has been very successful for me so far. It prevents the agent from straying off course and struggling to find its way back. That, and always using plan mode first, and reviewing the plan for evidence of sensible checkpoints. /opusplan to save tokens!
nishantjani10 2 hours ago [-]
I primarily use AI for understanding codebases myself. My prompt is:

"deeply understand this codebase, clearly noting async/sync nature, entry points and external integration. Once understood prepare for follow up questions from me in a rapid fire pattern, your goal is to keep responses concise and always cite code snippets to ensure responses are factual and not hallucinated. With every response ask me if this particular piece of knowledge should be persistent into codebase.md"

Both the conciseness and the structure (code snippets) help me build knowledge of the entire codebase as I progressively ask more complex questions about it.

wokwokwok 2 hours ago [-]
I really like the idea of agent coding patterns. This feels like it could be expanded easily with more content though. Off the top of my head:

- tell the agent to write a plan, review the plan, tell the agent to implement the plan

- allow the agent to “self discover” the test harness (eg. “Validate this c compiler against gcc”)

- queue a bunch of tasks with // todo … and yolo “fix all the todo tasks”

- validate against a known output ("translate this to Rust and ensure it emits byte-for-byte identical output as you go")

- pick a suitable language for the task (“go is best for this task because I tried several languages and it did the best for this domain in go”)

fennecfoxy 44 minutes ago [-]
This sort of thing is available using utilities like spec kit/spec kitty/etc. But yes, it does make them do better, including writing their own checklists so that they come back to the tasks identified early on without getting distracted.
winwang 1 hour ago [-]
Linear walkthrough: I ask my agents to give me a numbered tree. Controlling tree size specifies granularity. Numbering means it's simple to refer to points for discussion.

Other things that I feel are useful:

- Very strict typing/static analysis

- Denying tool usage with a hook telling the agent why+what they should do (instead of simple denial, or dangerously accepting everything)

- Using different models for code review
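The "deny with guidance" idea above can be sketched as a small decision function. This is a hedged illustration: the event shape (tool name plus an input dict) is an assumption loosely modeled on pre-tool-use hooks, and the rules are made up for the example.

```python
# Each rule pairs a matcher with the guidance returned on denial,
# so the agent learns what to do instead of just hitting a wall.
RULES = [
    (lambda tool, inp: tool == "Bash" and "rm -rf" in inp.get("command", ""),
     "Destructive delete blocked. Move files to a trash dir instead."),
    (lambda tool, inp: tool == "Edit" and inp.get("file_path", "").endswith(".env"),
     "Never edit .env directly. Update .env.example and tell the user."),
]

def check_tool_use(tool_name: str, tool_input: dict) -> dict:
    for matches, guidance in RULES:
        if matches(tool_name, tool_input):
            return {"decision": "deny", "reason": guidance}
    return {"decision": "allow"}

print(check_tool_use("Bash", {"command": "rm -rf build/"}))
```

The difference from a bare denial (or from dangerously auto-approving everything) is the `reason` field: it feeds the why+what-instead back into the agent's context.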

kubb 2 hours ago [-]
Is there a market for this like OOP patterns that used to sell in the 90s?
arjie 2 hours ago [-]
The underlying technology is still improving at a rapid pace. Many of last year's tricks are a waste of tokens now. Some ideas seem less fragile: knowing two things allows you to imagine the confluence of the two so you know to ask. Other things are less so: I'm a big fan of the test-based iteration loop; it is so effective that I suspect almost all users have arrived at it independently[0]. But the emergent properties of models are so hard to actually imagine. A future sufficiently-smart intelligence may take a different approach that is less search and more proof. I wouldn't bet on it, but I've been surprised too many times over the last few years.

0: https://wiki.roshangeorge.dev/w/Blog/2025-12-01/Grounding_Yo...

jascha_eng 1 hour ago [-]
It definitely feels like everyone is trying to sell you something that is supposed to help you build rather than actually building useful stuff.

Which is oddly close to how investment advice is given. If these techniques work so well, why give them up for free?

ares623 2 hours ago [-]
everybody's trying to become the next Uncle Bob
gaigalas 24 minutes ago [-]
The most important thing you need to understand when working with agents for coding is that you are now designing a production line. And that has (mostly) nothing to do with designing or orchestrating agents.

Take a guitar, for example. You don't industrialize the manufacture of guitars by speeding up the same practices artisans used to build them. You don't create machines that resemble individual artisans in their previous roles (like everyone seems to be trying to do with AI and software). You become Leo Fender, and you design a new kind of guitar that is made to be manufactured at another order of magnitude of scale. You need to be Leo Fender though (not a talented guitarist, but definitely a technical master).

To me, it sounds too early to describe patterns, since we haven't met the Ford/Fender/etc equivalent of this yet. I do appreciate the attempt though.

yieldcrv 27 minutes ago [-]
I don't currently have confidence in TDD.

A broken test doesn't make the agentic coding tool go "ooooh, I made a bad assumption" any more than a type error or linter does.

All a broken test does is prompt me to prompt back "fix tests".

I have no clue which one broke or why or what was missed, and it doesn't matter. Actual regressions are different and not dependent on these tests, and I follow along via type errors and LLM observability.

claud_ia 22 minutes ago [-]
The TDD critique is real, but it's worth separating tests-as-specification from tests-as-verification. Post-hoc tests—written after the implementation—give agents nothing meaningful to work against; of course they'll just "fix" them. But tests written before the agent starts, encoding precise behavioral invariants (edge cases, failure modes, not just the happy path), act as actual constraints on the solution space. The pattern that works: write the tests as an unambiguous spec, then let the agent implement to green. The interesting side effect is that the human's job shifts toward writing rigorous behavioral specs rather than reviewing code—which is harder in a different way, but more durable.
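The tests-as-specification idea in miniature: the spec pins down edge cases and failure modes, not just the happy path, so "make it green" genuinely constrains the solution space. A hedged sketch (`parse_port` is a hypothetical function being specified, invented for this example):

```python
# The spec exists before the implementation and encodes behavioral
# invariants: happy path, input tolerance, and required failure modes.
def spec_parse_port(parse_port):
    assert parse_port("8080") == 8080              # happy path
    assert parse_port(" 443 ") == 443              # tolerates whitespace
    for bad in ("0", "65536", "-1", "http", ""):   # must reject these
        try:
            parse_port(bad)
        except ValueError:
            continue
        raise AssertionError(f"expected ValueError for {bad!r}")

# An implementation written *to* the spec (what you'd hand the agent):
def parse_port(s: str) -> int:
    n = int(s.strip())  # raises ValueError for non-numeric input
    if not 1 <= n <= 65535:
        raise ValueError(f"port out of range: {n}")
    return n

spec_parse_port(parse_port)
print("spec satisfied")
```

An agent that "fixes" this by weakening the spec is immediately visible in review, whereas post-hoc tests offer no such anchor.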
sdevonoes 2 hours ago [-]
Is there anything about reviewing the generated code? Not by the author but by another human being.

Colleagues don't usually like to review AI-generated code. If they use AI to review it, that misses the point of doing the review. If they do the review manually (the old way), it becomes a bottleneck: we are now faster at producing code than we are at reviewing it.

fud101 1 hour ago [-]
Any word on patterns for security and deployment to prod?
pts_ 1 hour ago [-]
I really hate smelly statements like "this or that is cheap now". They reek of carelessness.
jamiemallers 1 hour ago [-]
[dead]
calmtrace 3 hours ago [-]
[dead]