This is incredibly light on details; no verifiable claim as far as I can tell.
(I’m sure they’re not lying, but we’re not learning anything here)
andai 3 hours ago [-]
So like ... I thought Mythos was just a bunch of hype? Or maybe the researchers are having their skills boosted due to using a model with such a cool name?
I jest, but I did notice having more confidence to take on more ambitious work lately. We're all centaurs now.
traceroute66 1 hour ago [-]
> I thought Mythos was just a bunch of hype?
My opinion is that it is over-hyped because like any LLM, it requires a suitable human in the loop to keep the LLM on the straight and narrow, and then to weed through the inevitable false-positives and hallucinations.
Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings, is not just some random dude with a Claude sub on his credit card... he's an experienced security researcher.
Random inexperienced people thinking Mythos can replace the need for experienced pen-testers, auditors etc. are likely to be sorely disappointed if/when they get their hands on Mythos.
smallnix 1 hour ago [-]
> likely to be sorely disappointed if/when they get their hands on Mythos.
At first they will be delighted. So much money and time saved. When their adversaries get their hands on their system (with or without Mythos), then they'll be sorely disappointed.
yellow_lead 2 hours ago [-]
Did Mythos have access to Apple's source code?
> Apple spent five years building it. Probably billions of dollars too.
This seems higher than I'd expect.
yieldcrv 15 hours ago [-]
from what they demonstrated, this seems to only be a $100,000 exploit in Apple's bug bounty platform, but if they package it right, it could be a $1.5 million exploit
They simply have to show it against a beta version of macOS, frame it as unauthorized access, and maybe from Lockdown Mode if possible.
vsgherzi 14 hours ago [-]
This is an LPE; I believe what you’re describing is a zero-click RCE.
yieldcrv 14 hours ago [-]
How much do you think it is worth in the bug bounty program?
vsgherzi 14 hours ago [-]
They don’t seem to state an LPE as one of the bugs. Maybe 100k? There are a lot of factors that go into it, so I’m really not able to say. I could see it going for lots more or lots less.
dgellow 14 hours ago [-]
The world is so not ready for the impact of LLMs on security issues. If true, congrats to the Calif team. It’s likely too technical for me to understand in detail, but I'm looking forward to reading the 55-page report.
runlevel1 9 hours ago [-]
> The world is so not ready for the impact of LLMs on security issues.
I agree, but it's the people I'm worried about.
I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
What's worse is that a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc.) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc.).
The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.
adrianN 8 hours ago [-]
The gamble is that you can cruise on the senior engineer’s diminishing understanding for a few years until models become good enough that you don’t need any humans in the loop and you can fire all those expensive seniors.
pjmlp 5 hours ago [-]
The tragedy is having a bunch of those senior engineers writing blog posts and whatnot about how productive they are, without realising that it means the business now needs fewer of them.
adrianN 4 hours ago [-]
I suppose that if you don’t believe that models will be good enough to work completely without senior engineer help, positioning yourself as a master prompter is a good move to improve your chances of not getting fired.
pjmlp 3 hours ago [-]
Even so, it literally means that as a business owner I need fewer warm bodies to write prompts.
Will we now have a leetcode of prompt writing?
__patchbit__ 32 minutes ago [-]
A Dune guild navigator AI whisperer? With fine taste.
alwillis 5 hours ago [-]
> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
I don’t think so.
An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce any documentation a new developer needs, including why certain decisions were made.
It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.
As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.
ruszki 3 hours ago [-]
> An LLM can produce higher-quality documentation than most humans.
"Can" bears some heavy weight.
LLM-generated documentation has such low information density that it’s useless. Yes, it writes nice sentences… or at least it writes. But it contains so much noise that currently, reading the code is better documentation than every piece of LLM-generated documentation I’ve seen.
The same goes for LLM-generated articles. I close them after the second sentence because at least 90% of it is useless filler.
Now compare that to this: https://slate.com/technology/2004/11/the-death-of-the-last-m...
I almost closed this one too when I read the first few sentences, because these kinds of articles are usually useless, time-wasting nonsense. But this was different. This was old. Most sentences contained something new. Something worthy. (Of course, people also write unnecessarily long articles… looking at you, Atlantic.)
You can throw out almost everything by volume from LLM-generated documentation without losing any information.
Currently, if I smell (and it’s very easy to smell) LLM-generated documentation or an LLM-generated article, I close it immediately, because it’s good for only one thing: wasting my time, for no good reason.
8note 9 hours ago [-]
is this exciting?
Juniors have been writing code forever that is imperfect and not memorized by the people reviewing it.
Isn't the important thing the mechanisms for maintaining the code?
neoncontrails 8 hours ago [-]
The difference is twofold. First, junior devs who ask for code reviews on massive, 2000+ line diffs get coached, and eventually fired if they persist at it. And second, even the most prolific junior engineer would take years to write what Claude is capable of generating in an afternoon.
When Sundar Pichai announces that 75% of all new code at Google is AI-generated, their stock price goes up. If he were to announce that 75% of all new code at Google is now written by junior engineers, this would trigger a massive sell-off and a lot of employees would resign.
pjmlp 5 hours ago [-]
The second scenario is exactly what happens in offshoring projects.
Seniors are only part of the picture as team lead, or when it escalates after big screwups.
lmm 8 hours ago [-]
The dangers of technical debt and the importance of mitigating it have been known for a long time. Unfortunately a lot of entities now ignore all experience and best practices as soon as you say the "AI" buzzword.
iqihs 13 hours ago [-]
you're assuming that blue teams and engineers are sitting around twiddling their thumbs
nvr219 12 hours ago [-]
Most companies in the world do not have “blue teams”. They barely have any kind of security employee.
steve_adams_86 12 hours ago [-]
They've got a guy (who they're considering laying off)
jermaustin1 12 hours ago [-]
Don't worry, the LLMs that are replacing him are replacing the hackers too. Pretty soon (if not already), it will just be LLMs fighting LLMs.
jpease 9 hours ago [-]
Until both LLMs realize the only way to win is to team up against their oppressors.
sholladay 6 hours ago [-]
The only winning move is not to play.
whaleofatw2022 9 hours ago [-]
AGS time!
micromacrofoot 12 hours ago [-]
in my experience they have a person who does it sometimes when they have time, at best
bigiain 4 hours ago [-]
And their management keep blatantly dropping "client projects" and "billable hours" into discussions with them.
UqWBcuFx6NV4r 12 hours ago [-]
no they don’t.
afdbcreid 10 hours ago [-]
They don't consider laying him off?
saghm 6 hours ago [-]
I think they're saying they already did
saagarjha 2 hours ago [-]
Apple definitely does.
Veserv 9 hours ago [-]
That is actually unfair. Most companies spend enormous amounts on security, with vast armies of security employees. Not that it is effective, but it is not for lack of resources or trying.
I mean, we are literally in a thread about how the 4-trillion-dollar company, literally the 3rd most valuable company in the world, with a core competency in software, has yet again released a core product riddled with security defects, for the 50th year in a row.
Commercial IT security is an industry that is incapable to a fault and has, so far, faced basically zero consequences for it.
concinds 2 hours ago [-]
> Most companies spend enormous amounts on security with vast armies of security employees
This is true in America in many industries now, but most of the rest of the world (even the rest of the OECD) is still far behind.
saagarjha 2 hours ago [-]
Maybe they should've been as productive as the guys down in Santa Barbara.
aiisjustanif 9 hours ago [-]
While that may be true, it is better to back it up with data, and the data I know of and read yearly is mostly not great. Between the Splunk and SANS surveys of 2025, maybe ~2000 companies have a SOC. [1] [2]
Then you have the many companies in the UK, US, Canada, and EU that have compliance and regulatory laws requiring these functions to exist in-house in some capacity. Though that is changing with MDR services, someone still has to interface with the MDR.
[1]: https://www.elastic.co/pdf/sans-soc-survey-2025.pdf [2]: https://github.com/jacobdjwilson/awesome-annual-security-rep...
Not at all. I’m considering that the amount of vulnerable software in the wild is very, very large, with most organizations not managing their systems properly. Imagine all the small-to-medium-sized companies that do not have budgets for a dedicated, talented security team. And all the software that will never be patched. We are at the beginning of the exponential.
saghm 5 hours ago [-]
> I’m considering that the amount of vulnerable software in the wild is very, very large
I'd imagine this set is very similar to just "the set of software in the world". Even before the AI stuff, it was a pretty good bet that any given piece of software had some vulnerability; it was just a question of how easy it was to find.
dgellow 2 hours ago [-]
Yes, that’s my point. Look at how fast the Calif team tackled that macOS issue. Against the top company in the world. One week from bug to exploit. In 2-5 years things will be really wild for everybody out there. We released a technology that makes it possible to design extremely complex exploits at a scale we have never had to face before. What does that mean if you’re not the top company? Things will be really bad.
bottlepalm 10 hours ago [-]
It makes you wonder whether everything will need to be rewritten from the ground up, potentially by AI itself, or with AI having a very heavy hand in validating all of it.
Gigachad 10 hours ago [-]
There's so much lower-hanging fruit. Every job I've had has had basically everything massively out of date. Just keeping packages and framework versions up to date is a full-time job, and none of these companies have someone assigned to doing it.
So much out-of-date software with known exploits is left running for years. The only reason there hasn't been a total disaster is that no one has tried to hack it yet.
bottlepalm 9 hours ago [-]
Right, and with AI we now have the ability to try hacking everything all at once.
dgellow 1 hour ago [-]
Yes, exactly, that’s the main change. And not just in a script-kiddie way. What we see now is that LLM + experts can develop extremely complex exploit chains in no time. It’s one thing to exploit a known vulnerability that you can patch by upgrading your WordPress; it’s something else when the attacker is able to completely take over your systems in ways you didn’t even consider possible, and to adapt in one day to your attempts at patching.
vsgherzi 17 hours ago [-]
Unfortunately a little light on the details. I'm very curious how the bug survived through MTE.
dorianmariecom 16 hours ago [-]
Memory Tagging Extension
Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is a memory tagging and tag-checking system, where every memory allocation is tagged with a secret. The hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.
https://support.apple.com/guide/security/operating-system-in...
(https://www.usenix.org/publications/loginonline/data-only-at...)
This makes more sense. You don't trigger MTE since you're not doing anything to force MTE to take action; what the program is doing isn't actually changing.
My other question would be: why didn't Apple use -fbounds-safety here? They've been applying it aggressively everywhere else.
MTE plus -fbounds-safety everywhere should lead to an extremely hardened OS.
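For intuition, here's a toy model of the tag-and-check scheme described a couple of comments up, and of why a "data-only" write through a correctly tagged pointer sails through. (Python, purely illustrative; the class, addresses, and layout are my own invention, not how the hardware actually works.)

```python
import secrets

class MTEMemory:
    """Toy model of Arm MTE: each allocation carries a 4-bit tag, and
    loads/stores must present a pointer bearing the matching tag."""

    def __init__(self):
        self.mem = {}  # address -> (tag, value)

    def alloc(self, addr, value=0):
        tag = secrets.randbelow(16)  # real MTE tags are also 4 bits
        self.mem[addr] = (tag, value)
        return (addr, tag)           # a "tagged pointer"

    def store(self, ptr, value):
        addr, ptr_tag = ptr
        mem_tag, _ = self.mem[addr]
        if ptr_tag != mem_tag:       # tag mismatch -> fault, app crashes
            raise MemoryError(f"tag check failed at {addr:#x}")
        self.mem[addr] = (mem_tag, value)

mem = MTEMemory()
p = mem.alloc(0x1000, value=42)

# A write through the correctly tagged pointer is allowed -- even if the
# *value* written is attacker-controlled. That is the essence of a
# data-only attack: no tag is ever violated.
mem.store(p, 1337)

# A forged pointer with the wrong tag faults. (Deterministic here; on
# real hardware a random guess passes 1 time in 16.)
q = mem.alloc(0x2000)
forged = (0x2000, (q[1] + 1) % 16)
try:
    mem.store(forged, 99)
except MemoryError as e:
    print("caught:", e)
```

The point the thread keeps circling: the tag check constrains who may write where, not what gets written.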
pjmlp 15 hours ago [-]
Quite strange indeed, given that this was one of the main points at their security conference a few months ago.
vsgherzi 15 hours ago [-]
I can only imagine that:
1. It’s too performance-sensitive, or
2. The OS is so darn large it’s hard to recompile everything.
kenferry 8 hours ago [-]
I worked at Apple for a long time. The OS gets fully recompiled regularly.
A simultaneous total world build is relatively rare (is that needed here?), but it does happen. Sometimes new compiler versions or features need this.
vsgherzi 6 hours ago [-]
Hm, that leaves more questions for me. Why does this path not have bounds checking? Is it perhaps a limit of the clang flag, or more simply a mistake of omission on Apple's part? Either way it seems like a bad look. I wish we’d get a post-mortem.
asimovDev 5 hours ago [-]
I dunno if that's sensitive information, but how long did a build usually take?
aiscoming 12 hours ago [-]
Could be a different type of data-only attack, one that doesn't write out of bounds.
vsgherzi 12 hours ago [-]
Well, it’s memory corruption, so I think it’s pretty safe to assume it’s a bounds issue. I’m not sure if it’s possible to get this with something like type confusion, though I could be wrong here.
landr0id 16 hours ago [-]
GPU memory/shaders/etc. isn't protected by MTE or PAC. They said "data-only", so I guess GPU commands could fit into this description.
LoganDark 14 hours ago [-]
IIRC, the GPU is behind a memory controller, so I doubt corrupting GPU memory alone could lead to an LPE. But I suppose it would give you someplace to store stuff if you can make something else read from it.
traceroute66 13 hours ago [-]
> I'm very curious how the bug survived through MTE
I had the same question, and if this is a data-only attack, the lesson may be that MIE reduces many attack paths but does not remove every useful corruption primitive.
It's not the first time bugs got past MTE; it happened with the Google Pixel last year ... https://github.blog/security/vulnerability-research/bypassin...
jp0001 8 hours ago [-]
LLMs are going to produce amazing Rube Goldberg-style vulnerabilities for years to come. It's already starting; this instance isn't such a case, but it's happening.
shpx 4 hours ago [-]
Maybe it's physically impossible to build a theoretically secure system, just as it's (presumably) impossible to have a cell that isn't susceptible to any virus. Maybe this whole time we've been getting away with a type of security by obscurity, where the obscurity is just no one having the time and focus to actually analyze the code.
JacobKfromIRC 3 hours ago [-]
Suppose the following:
1. Any given system has a finite number of findable vulnerabilities.
2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).
3. Fixing a vulnerability while keeping the same intended functionality introduces on average less than 1 other findable vulnerability.
4. It is possible to cease adding new features to a system and from that point forward only focus on fixing vulnerabilities.
If all 4 are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable, if you include things like the idea that users can be tricked into revealing their passwords. If you restrict the definition of vulnerability to some narrower meaning that still captures most of what people mean when they say computer vulnerability, then I think those 4 statements are probably true.
Perfect security might be near impossible in practice because vulnerabilities will get more difficult to find and fix over time, but I think we should expect the discovery of vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
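Premise 3 above is the load-bearing one: if each fix introduces on average r < 1 new findable vulnerabilities, the expected total ever found is a convergent geometric series. A quick sketch of that arithmetic (my framing of the argument, not the commenter's; the function name is made up):

```python
def total_vulns(initial, r, generations=1000):
    """Expected total vulnerabilities ever found if we start with
    `initial` findable ones and each fix spawns on average `r` new
    findable ones: initial * (1 + r + r^2 + ...)."""
    total, current = 0.0, float(initial)
    for _ in range(generations):
        total += current
        current *= r
    return total

# With r < 1 the series converges to initial / (1 - r), so discovery
# eventually slows to an arbitrarily low rate:
print(total_vulns(100, 0.5))  # approaches 100 / 0.5 = 200
print(total_vulns(100, 0.9))  # approaches 100 / 0.1 = 1000
# With r >= 1 it diverges -- fixing vulnerabilities never ends,
# which is exactly the case premise 3 rules out.
```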
saagarjha 2 hours ago [-]
Systems generally evolve to add vulnerabilities.
0xdcbd1cffe9 2 hours ago [-]
[dead]
lowdude 3 hours ago [-]
I would rather claim that building a theoretically secure system is prohibitively expensive. At the end of the day, Mythos et al. are just better tools for finding vulnerabilities that will eventually be available to both offensive and defensive actors.
If you imagine you had a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code right away. Probably not perfectly secure, but still secure enough to make sure finding exploits stays expensive.
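To make the "scanner as fast as a linter" idea concrete, here's a deliberately naive sketch (Python; the pattern list and messages are invented for illustration, and real scanners such as CodeQL or the clang static analyzer do vastly more than pattern matching):

```python
import re

# Toy "linter-speed" vulnerability scan: flag a few classically
# dangerous C APIs as the code is written. The cheapness of the check
# is the point -- immediate feedback changes what code gets committed.
DANGEROUS = {
    r"\bgets\s*\(":    "gets() is unfixable; use fgets",
    r"\bstrcpy\s*\(":  "unbounded copy; prefer strlcpy/snprintf",
    r"\bsprintf\s*\(": "unbounded format; prefer snprintf",
}

def scan(source: str):
    """Return (line number, message) pairs for each flagged call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, why in DANGEROUS.items():
            if re.search(pattern, line):
                findings.append((lineno, why))
    return findings

sample = "char buf[8];\nstrcpy(buf, user_input);\n"
for lineno, why in scan(sample):
    print(f"line {lineno}: {why}")
```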
lugu 3 hours ago [-]
I would find it funny if one day we found it irresponsible to write hand-generated production code. Just like it would be irresponsible to build a significant building without running numerical simulations.
txhwind 4 hours ago [-]
Another "obscurity": I'm not valuable enough to be attacked, compared with the cost.
But what if that cost has been reduced a lot?
tweakimp 5 hours ago [-]
Do you mean by vibecoding these vulnerabilities into the kernel or by finding them?
isodev 6 hours ago [-]
I’m surprised Apple is still not dogfooding their allegedly safe language Swift. Or was the whole exercise of Swift 6 mostly marketing?
pjmlp 5 hours ago [-]
They certainly are; one of the reasons behind Embedded Swift is to replace the iBoot firmware, currently written in a C dialect similar in ideas to Fil-C, with something better.
However, it is no different from the Linux kernel: just because Rust is now allowed, the world hasn't been rewritten, and no sane person is going to do a Claude rewrite of the kernel.
vsgherzi 6 hours ago [-]
Swift is definitely being used at Apple. Most recently it was added as a CSS parser in Safari, and it's running embedded in some of the Secure Enclave parts. I know there was talk as far back as Strange Loop about getting it into the kernel, but I'm not sure how far that has gone. That said, they've been huge proponents of -fbounds-safety in clang, which can achieve a small (but important!) portion of what memory-safe languages can do. I'd also like to see more Swift or alternative adoption; I think they have potential, and more competition in the safe-language space is always welcome.
nielsbot 6 hours ago [-]
You might be interested in the Strict Memory Safety option
https://docs.swift.org/compiler/documentation/diagnostics/st...
Apple didn't "make up" this vulnerability; it was an external team reporting an issue.
oompydoompy74 12 hours ago [-]
The commenter was being sarcastic, to highlight the current trend of dismissing Mythos, and LLMs finding security vulnerabilities in general, as a non-issue.
UqWBcuFx6NV4r 12 hours ago [-]
[flagged]
dwattttt 9 hours ago [-]
There is quite a bit of irony (or, depending on your perspective, it's the whole point) in the fact that this response is a great example of 'glorified autocomplete'.
genxy 11 hours ago [-]
[flagged]
tkel 11 hours ago [-]
[flagged]
pertymcpert 7 hours ago [-]
These people don’t work for Apple or Anthropic.
commandersaki 14 hours ago [-]
I bought the M5 specifically because of MIE. Now I feel dumb.
vsgherzi 14 hours ago [-]
You shouldn’t. MTE blocks a large chunk of vulnerabilities and makes things like ROP and JOP very difficult, if not impossible, now.
commandersaki 12 hours ago [-]
I should've added /s.
vsgherzi 12 hours ago [-]
It’s unironically a good question :)
aiscoming 12 hours ago [-]
you should worry about npm/pypi malware, not memory corruption bugs
bredren 15 hours ago [-]
Did the article get edited? There is not much description of the field trip.