I'm very glad they're not spamming the mailing list.
jeffbee 10 minutes ago [-]
That is both really useful and a great example of why they should have stopped writing code in C decades ago. So many kernel bugs have arisen from people adding early returns without thinking about the cleanup functions, a problem that many other language platforms handle automatically on scope exit.
monksy 28 minutes ago [-]
I think this is a great and interesting project. However, I hope that they're not doing this to submit patches to the kernel. It would be much better to layer in additional tests that exploit the bugs and defects, to verify their existence and their fixes.
(Also tests can be focused per defect.. which prevents overload)
From some of the changes I'm seeing, this looks like it's doing style and structure changes, which for a codebase this size is going to add drag to existing development. (I'm supportive of cleanups.. but doing them on an automated basis is a bad idea)
Style and structure is not the goal here, the reason people are interested in it is to find bugs.
Having said that, if it can save maintainers time it could be useful. It's worth slowing contribution down if it lets maintainers get more reviews done, since the kernel is bottlenecked much more on maintainer time than on contributor energy.
My experience with using the prototype is that it very rarely comments with "opinions"; it only identifies functional issues. So when you get false positives, it's usually of the form "the model doesn't understand the code" or "the model doesn't understand the context" rather than "I'm getting spammed with pointless advice about C programming preferences". This may be a subsystem-specific thing, as different areas of the codebase have different prompts. (It may also be that my coding style happens to align with its "preferences".)
rwmj 22 minutes ago [-]
No, it's reviewing patches posted on LKML and offering suggestions. The original patch corresponding to your link was this, which was (presumably!) written by a human:
Have you ever programmed with AI? It needs a lot of hand-holding for even simple things sometimes. It forgets basic inputs and does all kinds of brain-dead stuff it should know not to do.
>"good catch - thanks for pointing that out"
jamesnorden 12 minutes ago [-]
Well, if it doesn't find anything it's just a waste of time at best.
lame-robot-hoax 23 minutes ago [-]
Can you clarify how, at all, that’s relevant to the article?
asadm 23 minutes ago [-]
I think it's a skill.
__tidu 27 minutes ago [-]
Well, tbf, code review is probably the most useful part of "AI coding": if it catches even a single bug you missed, it's worth it. Plus, false positives would waste dev time but not pollute the kernel.
shevy-java 12 minutes ago [-]
Now they want to kill the Linux kernel. :(
We've already seen how bug bounty projects were closed by AI spam; I think it was curl? Or some other project I don't remember right now.
I think AI tools should be required, by law, to verify that what they report is actually a true bug rather than some hypothetical, hallucinated context-dependent not-quite-a-real-bug bug.
For an example of a review (picked pretty much at random) see: https://sashiko.dev/#/patchset/20260318151256.2590375-1-andr...
The original patch series corresponding to that is: https://lkml.org/lkml/2026/3/18/1600
Edit: Here's a simpler and better example of a review: https://sashiko.dev/#/patchset/20260318110848.2779003-1-liju...
I.e. https://sashiko.dev/#/message/20260318170604.10254-1-erdemhu...
https://lkml.org/lkml/2026/3/9/1631
>"good catch - thanks for pointing that out"
We've already seen how bug bounty projects were closed by AI spam; I think it was curl? Or some other project I don't remember right now.
I think AI tools should be required, by law, to verify that what they report is actually a true bug rather than some hypothetical, hallucinated context-dependent not-quite-a-real-bug bug.