Sibling comments point out (and I believe, corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
What they are doing by this is of course to make any kind of subversion a hell of a lot harder, and I welcome that. It serves as a strong signal that they want to protect my data. To me this definitely makes them the most trusted AI vendor at the moment, by far.
tw04 6 hours ago [-]
As soon as you start going down the rabbit hole of state sponsored supply chain alteration, you might as well just stop the conversation. There's literally NOTHING you can do to stop that specific attack vector.
History has shown, at least to date, Apple has been a good steward. They're as good a vendor to trust as anyone. Given a huge portion of their brand has been built on "we don't spy on you" - the second they do they lose all credibility, so they have a financial incentive to keep protecting your data.
sunnybeetroot 13 minutes ago [-]
Didn’t Edward Snowden reveal that Apple provides direct access to the NSA for mass surveillance?
> allows officials to collect material including search history, the content of emails, file transfers and live chats
> The program facilitates extensive, in-depth surveillance on live communications and stored information. The law allows for the targeting of any customers of participating firms who live outside the US, or those Americans whose communications include people outside the US.
> It was followed by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and finally Apple, which joined the program in 2012. The program is continuing to expand, with other providers due to come online.
https://www.theguardian.com/world/2013/jun/06/us-tech-giants...
Apple have name/address/credit-card/IMEI/IMSI tuples stored for every single Apple device. iMessage and FaceTime leak numbers, so they know who you talk to. They have real-time location data. They get constant pings when you do anything on your device. Their applications bypass firewalls and VPNs. If you don't opt out, they have full unencrypted device backups, chat logs, photos and files. They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network. Opting out of all tracking doesn't really do that. And even if you trust them despite all of this, they've repeatedly failed to protect users even from external threats. The endless parade of iMessage zero-click exploits was ridiculous and preventable, CKV only shipped this year and isn't even on by default, and so on.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
lurking_swe 39 minutes ago [-]
> They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network.
What kind of targeted advertising am I getting from Apple as a user of their products? Genuinely curious. I’ll wait.
The rest of your comment may be factually accurate but it isn’t relevant for “normal” users, only those hyper aware of their privacy. Don’t get me wrong, i appreciate knowing this detail but you need to also realize that there are degrees to privacy.
commandersaki 14 minutes ago [-]
> If you don't opt out, they have full unencrypted device backups, chat logs, photos and files.
Also full disk encryption is opt-in for macOS. But the answer isn't that Apple wants you to be insecure, they just probably want to make it easier for their users to recover data if they forget a login password or backup password they set years ago.
> real-time location data
Locations are end to end encrypted.
Tagbert 39 minutes ago [-]
They have not been punished because they have not abused their access to that data.
> There's literally NOTHING you can do to stop that specific attack vector.
E2E. Might not be applicable for remote execution of AI payloads, but it is applicable for most everything else, from messaging to storage.
Even if the client hardware and/or software is also an actor in your threat model, that can be eliminated, or at least mitigated, with at least one verifiably trusted piece of equipment. Open hardware is an alternative, and some states build their entire hardware stack to eliminate such threats. With even one piece of trusted equipment, mitigations such as an external network filter become possible.
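As a rough illustration of the E2E point: with an authenticated public-key box (this sketch uses the PyNaCl library; key exchange and nonce handling are simplified and the names are placeholders), whatever relays or stores the message only ever sees ciphertext.

    # pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; only public keys are ever exchanged.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts for Bob; this ciphertext is all any relay or storage
    # provider in the middle gets to see.
    sending_box = Box(alice_sk, bob_sk.public_key)
    ciphertext = sending_box.encrypt(b"meet at noon")

    # Bob decrypts with his private key and Alice's public key.
    receiving_box = Box(bob_sk, alice_sk.public_key)
    assert receiving_box.decrypt(ciphertext) == b"meet at noon"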
warkdarrior 4 hours ago [-]
E2E does not protect metadata, at least not without significant overheads and system redesigns. And metadata is as important as data in messaging and storage.
afh1 4 hours ago [-]
> And metadata is as important as data in messaging and storage.
Is it? I guess this really depends. For E2E storage (e.g. as offered by Proton with openpgpjs), what metadata would be of concern? File size? File type cannot be inferred, and file names could be encrypted if that's a threat in your model.
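File size is indeed the hard part to hide. A common mitigation is to pad plaintexts to fixed buckets before encryption, so the stored length only leaks a coarse size class rather than the exact size; a minimal sketch (the bucket scheme here is arbitrary):

    def pad_to_bucket(data: bytes, min_bucket: int = 4096) -> bytes:
        """Pad to the next power-of-two bucket so ciphertext length only
        reveals a coarse size class, not the exact file size."""
        bucket = min_bucket
        while bucket < len(data) + 1:   # +1 byte reserved for the marker
            bucket *= 2
        return data + b"\x80" + b"\x00" * (bucket - len(data) - 1)

    def unpad(padded: bytes) -> bytes:
        # The padding marker 0x80 is the last non-zero byte.
        return padded[: padded.rindex(b"\x80")]

    assert unpad(pad_to_bucket(b"secret report")) == b"secret report"
    assert len(pad_to_bucket(b"secret report")) == 4096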
mbauman 3 hours ago [-]
The most valuable "metadata" in this context is typically with whom you're communicating/collaborating and when and from where. It's so valuable it should just be called data.
fsflover 3 hours ago [-]
How is this relevant to the private cloud storage?
Jerrrrrrry 3 hours ago [-]
No point in storing data if it is never shared with anyone else.
Whom it is shared with can reveal the intent of the data.
fsflover 17 minutes ago [-]
Backups?
vlovich123 2 hours ago [-]
Strictly speaking there's homomorphic encryption. It's still horribly slow and expensive but it literally lets you run compute on untrusted hardware in a way that's mathematically provable.
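To make the principle concrete (though nowhere near what an LLM would need), even the additively homomorphic Paillier scheme lets an untrusted party do arithmetic on ciphertexts. A small sketch using the python-paillier (phe) library:

    # pip install phe
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # The untrusted server receives only ciphertexts...
    enc_a = public_key.encrypt(17)
    enc_b = public_key.encrypt(25)

    # ...and can add them, or scale by plaintext constants, without ever
    # being able to read the underlying values.
    enc_sum = enc_a + enc_b
    enc_scaled = enc_a * 3

    assert private_key.decrypt(enc_sum) == 42
    assert private_key.decrypt(enc_scaled) == 51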
commandersaki 12 minutes ago [-]
Yeah the impetus for PCC was that homomorphic encryption wasn't feasible and this was the best realistic alternative.
natch 3 hours ago [-]
As to the trust loss, we seem to be already past that. It seems to me they are now in the stage of faking it.
hulitu 3 hours ago [-]
> History has shown, at least to date, Apple has been a good steward.
*cough* HW backdoor in iPhone *cough*
evgen 25 minutes ago [-]
cough bullshit cough
Don't try to be subtle. If you are going to lie, go for a big lie.
Just make absolutely sure you trust your government when using an iDevice.
spondyl 42 minutes ago [-]
When it comes to China, it's not entirely fair to single out Apple here given that non-Chinese companies are not allowed to run their own compute in China directly.
It always has to be operated by a sponsor in the state who holds the encryption keys and does the actual deployments, etc.
The same applies to Azure/AWS/Google Cloud's China regions and any other compute services you might think of.
jayrot 4 hours ago [-]
>Just make absolutely sure you trust your government
This sentence stings right now. :-(
commandersaki 19 minutes ago [-]
> They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
But any such software must be publicly verifiable, otherwise it cannot be deemed secure. That's why they publish each version in a transparency log, which is verified by the client and (somewhat hand-wavily) by a public brains trust.
This is also just a tired take. The same thing could be said about passcodes on their mobile products or full disk encryption keys for the Mac line. There'd be massive loss of goodwill and legal liability if they subverted these technologies that they claim to make their devices secure.
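For a sense of what client-side verification against a transparency log means mechanically, here is a generic Merkle inclusion-proof check in the RFC 6962/9162 style; this is a sketch of the general technique, not Apple's actual PCC verification code.

    import hashlib

    def _leaf_hash(leaf: bytes) -> bytes:
        # RFC 6962 domain separation: 0x00 prefix for leaves
        return hashlib.sha256(b"\x00" + leaf).digest()

    def _node_hash(left: bytes, right: bytes) -> bytes:
        # 0x01 prefix for interior nodes
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                         proof: list[bytes], root: bytes) -> bool:
        """Recompute the tree head from a leaf and its audit path and
        compare it against the published root (RFC 9162, section 2.1.3.2)."""
        if index >= tree_size:
            return False
        fn, sn = index, tree_size - 1
        node = _leaf_hash(leaf)
        for p in proof:
            if sn == 0:
                return False          # proof longer than the tree is deep
            if fn % 2 == 1 or fn == sn:
                node = _node_hash(p, node)
                if fn % 2 == 0:
                    while fn % 2 == 0 and fn != 0:
                        fn, sn = fn >> 1, sn >> 1
            else:
                node = _node_hash(node, p)
            fn, sn = fn >> 1, sn >> 1
        return sn == 0 and node == root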
stavros 5 hours ago [-]
> that all that theater is still no protection against Apple themselves
There is such a thing as threat modeling. The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
hulitu 2 hours ago [-]
> The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
Well, to be honest, theater is a pretentious word in this context. A better word would be shitshow.
(I've never heard of a firewall that claims it filters _some_ packets, or an antivirus that claims that it protects against _some_ viruses)
stavros 2 hours ago [-]
Really? Please show me an antivirus that claims that it protects against all viruses. A firewall that filters all packets is a pair of scissors.
derefr 2 hours ago [-]
The "we've given this code to a third party to host and run" part can be a 100% effective stop to any Apple-internal shenanigans. It depends entirely on what the third party is legally obligated to do for them. (Or more specifically, what they're legally obligated to not do for them.)
A simple example of the sort of legal agreement I'm talking about, is a trust. A trust isn't just a legal entity that takes custody of some assets and doles them out to you on a set schedule; it's more specifically a legal entity established by legal contract, and executed by some particular law firm acting as its custodian, that obligates that law firm as executor to provide only a certain "API" for the contract's subjects/beneficiaries to interact with/manage those assets — a more restrictive one than they would have otherwise had a legal right to.
With trusts, this is done because that restrictive API (the "you can't withdraw the assets all at once" part especially) is what makes the trust a trust, legally; and therefore what makes the legal (mostly tax-related) benefits of trusts apply, instead of the trust just being a regular holding company.
But you don't need any particular legal impetus in order to create this kind of "hold onto it and don't listen to me if I ask for it back" contract. You can just... write a contract that has terms like that; and then ask a law firm to execute that contract for you.
Insofar as Apple have engaged with some law firm to in turn engage with a hosting company; where the hosting company has obligations to the law firm to provide a secure environment for the law firm to deploy software images, and to report accurate trusted-compute metrics to the law firm; and where the law firm is legally obligated to get any image-updates Apple hands over to them independently audited, and only accept "justifiable" changes (per some predefined contractual definition of "justifiable") — then I would say that this is a trustworthy arrangement. Just like a trust is a trust-worthy arrangement.
neongreen 9 minutes ago [-]
This actually sounds like a very neat idea. Do you know any services / software companies that operate like that?
patmorgan23 6 hours ago [-]
Yep. If you don't trust apple with your data, don't buy a device that runs apples operating system
yndoendo 5 hours ago [-]
That is good in theory. In reality, anyone you engage with who uses an Apple device has leaked your content / information to Apple. High confidence that Apple could easily build profiles on people who do not use their devices via this indirect action of having to communicate with Apple device owners.
That statement above also applies to Google. There is no way to prevent indirect data sharing with Apple or Google.
hnaccount_rng 5 hours ago [-]
Yes, if your threat model includes the provider of your operating system, then you cannot win. It's really that simple. You fundamentally need to trust your operating system because it can just lie to you.
fsflover 3 hours ago [-]
This is false. With FLOSS and reproducible builds, you can rely on the community for verification.
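As a reduced sketch of what that community verification boils down to: independent parties rebuild the artifact from source and compare digests against the published one. The file path and expected hash below are placeholders.

    import hashlib
    import sys

    # Digest published by the project and by other independent builders
    # (placeholder value for illustration).
    PUBLISHED_SHA256 = "aa" * 32

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256_of("out/system-image.img")   # the artifact you built yourself
    if local == PUBLISHED_SHA256:
        print("reproduced: local build matches the published image")
    else:
        sys.exit(f"MISMATCH: {local} != {PUBLISHED_SHA256}")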
hulitu 2 hours ago [-]
> You fundamentally need to trust your operating system because it can just lie to you
Trust us, we are liars. /s
afh1 4 hours ago [-]
Depending on your social circle such exposure is not so hard to avoid. Maybe you cannot avoid it entirely but it may be low enough that it doesn't matter. I have older relatives with basically zero online presence.
dialup_sounds 4 hours ago [-]
Define "content / information".
isodev 4 hours ago [-]
That really is not a valid argument, since Apple have grown to be "the phone".
Also, many are unaware of, or unable to determine, who or what will own their data before purchasing a device. One only accepts the privacy policy after one taps sign in... and is it really practical to expect people to do this by themselves when buying a phone? That's why regulation needs to step in and enforce that the right defaults are present.
mossTechnician 4 hours ago [-]
But if you don't trust Google with your data, you can buy a device that runs Google's operating system, from Google, and flash somebody else's operating system onto it.
Or, if you prefer, you can just look at Google's code and verify that the operating system you put on your phone is made with the code you looked at.
chadsix 6 hours ago [-]
Exactly. You can only trust yourself [1] and should self host.
[1] https://www.youtube.com/watch?v=g_JyDvBbZ6Q
That is an answer for an incredibly tiny fraction of the population. I'm not so much concerned about myself as about society in general, and self-hosting just is not a viable solution to the problem at hand.
chadsix 5 hours ago [-]
To be fair, it's much easier than one can imagine (try ollama on macOS for example). In the end, Apple wrote a lot of longwinded text, but the summary is "you have to trust us."
I don't trust Apple - in fact, even the people we trust the most have told us soft lies here and there. Trust is a concept like an integral - you can only get to "almost" and almost is 0.
So you can only trust yourself. Period.
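For what it's worth, the local-only route mentioned above ("try ollama on macOS") is a couple of lines against Ollama's HTTP API once the daemon is running; the model name is whatever you have pulled locally.

    # Ollama listens on localhost:11434 by default; nothing leaves the machine.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Summarize why E2E encryption matters.",
              "stream": False},
    )
    print(resp.json()["response"])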
commandersaki 9 minutes ago [-]
> "you have to trust us."
You have fundamentally misunderstood PCC.
killjoywashere 4 hours ago [-]
There are multiple threat models where you can't trust yourself.
Your future self definitely can't trust your past self. And vice versa. If your future self has a stroke tomorrow, did your past self remember to write a living will? And renew it regularly? Will your future self remember that password? What if the kid pukes on the carpet before your past self writes it down?
Your current self is not statistically reliable. Andrej Karpathy administered an imagenet challenge to himself, his brain as the machine: he got about 95%.
I'm sure there are other classes of self-failure.
martinsnow 3 hours ago [-]
Given the code quality of projects like Nextcloud, suggestions like this make the head and table transmogrify into magnets.
lukev 5 hours ago [-]
The odds that I make a mistake in my security configuration are much higher than the odds that Apple is maliciously backdooring themselves.
The PCC model doesn't guarantee they can't backdoor themselves, but it does make it more difficult for them.
dotancohen 5 hours ago [-]
I don't even trust myself, I know that I'm going to mess up at some point or another.
talldayo 4 hours ago [-]
Nobody promised you that real solutions would work for everyone. Performing CPR to save a life is something "an incredibly tiny fraction of the population" is trained on, but it does work when circumstances call for it.
It sucks, but what are you going to do for society? Tell them all to sell their iPhones, punk out the NSA like you're Snowden incarnate? Sometimes saving yourself is the only option, unfortunately.
remram 5 hours ago [-]
Can you trust the hardware?
blitzar 3 hours ago [-]
If you make your own silicon, can you trust that the sand hasn't been tampered with to breach your security?
killjoywashere 4 hours ago [-]
There's a niche industry that works on that problem: looking for evidence of tampering down to the semiconductor level.
>for them to run different software than they say they do.
They don't even need to do that. They don't need to do anything different than they say.
They already are saying only that the data is kept private from <insert very limited subset of relevant people here>.
That opens the door wide for them to share the data with anyone outside of that very limited subset. You just have to read what they say, and also read between the lines. They aren't going to say who they share with, apparently, but they are going to carefully craft what they say so that some people get misdirected.
isodev 4 hours ago [-]
Indeed, the attestation process, as described by the article, is geared more towards preventing unauthorized exfiltration of information or injection of malicious code. However, "authorized" activities are fully supported, where that means signed by Apple. So, ultimately, users need to trust that Apple is doing the right thing, just like with any other company. And yes, it means they can be forced (by law) not to do the right thing.
1vuio0pswjnm7 3 hours ago [-]
"Sibling comments point out (and I believe, corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They're still fully in control."
It stands to reason that that control is a prerequisite for "security".
Apple does not delegate its own "security" to someone else, a "steward". Hmmm.
Yet it expects computer users to delegate control to Apple.
Apple is not alone in this regard. It's common for "Big Tech", "security researchers" and HN commenters to advocate for the computer user to delegate control to someone else.
halJordan 5 hours ago [-]
It's not that they couldn't, it's that they couldn't without a watcher knowing. And frankly this tradeoff is not new, nor is it unacceptable in anything other than "Muh Apple".
lxgr 4 hours ago [-]
This is probably the best way to do cloud computation offloading, if one chooses to do it at all.
What's desperately missing on the client side is a switch to turn this off. It's really opaque, at the moment, which Apple Intelligence requests are processed locally and which are sent to the cloud.
The only sure way to know/prevent it a priori is to... enter flight mode, as far as I can tell?
Retroactively, there's a request log in the privacy section of System Preferences, but that's really convoluted to read (due to all of the cryptographic proofs that I have absolutely no tools to verify at the moment, and honestly have no interest in).
jagrsw 7 hours ago [-]
If Apple controls the root of trust, like the private keys in the CPU or security processor used to check the enclave (similar to how Intel and AMD do it with SEV-SNP and TDX), then technically, it's a "trust us" situation, since they likely use their own ARM silicon for that?
Harder to attack, sure, but no outside validation. Apple's not saying "we can't access your data," just "we're making it way harder for bad guys (and rogue employees) to get at it."
skylerwiernik 7 hours ago [-]
I don't think they do. Your phone cryptographically verifies that the software running on the servers is what it says it is, and you can't pull the keys out of the secure enclave. They also had independent auditors go over the whole thing and publish a report. If the chip is disconnected from the system it will dump its keys and essentially erase all data.
hnaccount_rng 5 hours ago [-]
But since they also control the phone's operating system they can just make it lie to you!
That doesn't make PCC useless, by the way. It clearly establishes that Apple misled customers if there is any intentionality in a breach, or that Apple was negligent if they do not immediately provide remedies on notification of a breach. But that's much more a "raising the cost" kind of thing than a technical exclusion. If you get Apple, as an organisation, to want to get at your data, and you use an iPhone, they absolutely can.
HeatrayEnjoyer 6 hours ago [-]
How do you know the root enclave key isn't retained somewhere before it is written? You're still trusting Apple.
Key extraction is difficult but not impossible.
jsheard 6 hours ago [-]
> Key extraction is difficult but not impossible.
Refer to the never-ending clown show that is Intel's SGX enclave for examples of this:
https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
Can you clarify what you mean by retained and written?
plagiarist 6 hours ago [-]
I don't understand how publishing cryptographic signatures of the software is a guarantee? How do they prove it isn't keeping a copy of the code to make signatures from but actually running a malicious binary?
dialup_sounds 5 hours ago [-]
The client will only talk to servers that can prove they're running the same software as the published signatures.
And the servers prove that by relying on a key stored in secure hardware. And that secure hardware is designed by Apple, who has a specific interest in convincing users of that attestation/proof. Do you see the conflict of interest now?
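Schematically, the client-side gate being debated here reduces to a policy check like the sketch below (heavily simplified: real remote attestation also verifies a signature chain back to the hardware vendor's root key and a nonce for freshness, which is exactly where the conflict-of-interest argument applies; all values are made-up placeholders).

    # Software measurements published in the vendor's transparency log for
    # audited releases (placeholder values).
    RELEASED_MEASUREMENTS = {
        bytes.fromhex("aa" * 32): "node-os 1.0.2",
        bytes.fromhex("bb" * 32): "node-os 1.0.3",
    }

    def willing_to_send(attested_measurement: bytes) -> bool:
        """Client policy: only send data to a node whose attested software
        measurement matches a published, auditable release."""
        return attested_measurement in RELEASED_MEASUREMENTS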
SheinhardtWigCo 4 hours ago [-]
It was always "trust us". They make the silicon, and you have no hope of meaningfully reverse engineering it. Plus, iOS and macOS have silent software update mechanisms, and no update transparency.
ant_li0n 7 hours ago [-]
Hey can you help me understand what you mean? There's an entry about "Hardware Root of Trust" in that document, but I don't see how that means Apple is avoiding stating, "we can't access your data" - the doc says it's not exportable.
https://security.apple.com/documentation/private-cloud-compu...
"Explain it like I'm a lowly web dev"
https://x.com/frogandtoadbook/status/1734575421792920018
every entity you hand data to other than yourself is a "trust us" situation
fsflover 3 hours ago [-]
Unless it's encrypted.
ozgune 7 hours ago [-]
+1 on your comment.
I think having a description of Apple's threat model would help.
I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.
dagmx 6 hours ago [-]
Their threat model is described in their white papers.
But essentially it is trying to get to the end result of “if someone commandeers the building with the servers, they still can’t compromise the data chain even with physical access”
bootsmann 6 hours ago [-]
They define their threat model in "Anticipating Attacks"
h1fra 5 hours ago [-]
Love this, but as an engineer I would hate to get a bug report in that prod environment: 100% "doesn't work on my machine" and 0% reproducibility.
pjmlp 3 hours ago [-]
Usually quite common when doing contract work, where externals have no access to anything besides a sandbox to play around with their contribution to the whole enterprise software jigsaw.
slashdave 4 hours ago [-]
That's a strange point of view. Clearly one shouldn't use private information for testing in any production environment.
ericlewis 3 hours ago [-]
As a person who works on this kinda stuff I know what they mean. It’s very hard to debug things totally blind.
curt15 4 hours ago [-]
For the experts out there, how does this compare with AWS Nitro?
bobbiechen 2 hours ago [-]
AWS Nitro (and Nitro Enclaves) are general computing platforms, so it's different. You'd need to write a PCC-like system/application on top of AWS Nitro Enclaves to make a direct comparison. A breakdown of those 5 core requirements from Apple:
1. Stateless computation on personal user data - a property of the application
2. Enforceable guarantees - a property of the application; Nitro Enclaves attestation helps here
3. No privileged runtime access - maps directly to the no administrative API access in the AWS Nitro System platform
4. Non-targetability - a property of the application
5. Verifiable transparency - a mix of the application and the platform; Nitro Enclaves attestation helps here
To be a little more concrete: (1 stateless) You could write an app that statelessly processes user data, and build it into a Nitro Enclave. This has a particular software measurement (PCR0) and can be code-signed (PCR8) and verified at runtime (2 enforceable) using Nitro Enclave Attestation. This also provides integrity protection. You get (3 no access) for "free" by running it in Nitro to begin with (from AWS - you also need to ensure there is no application-level admin access). You would need to design (4 non-targetable) as part of your application. For (5 transparency), you could provide your code to researchers as Apple is doing.
(I work with AWS Nitro Enclaves for various security/privacy use cases at Anjuna. Some of these resemble PCC and I hope we can share more details about the customer use cases eventually.)
Some sources:
- NCC Group Audit on the Nitro System: https://www.nccgroup.com/us/research-blog/public-report-aws-...
- Nitro Enclaves attestation process: https://github.com/aws/aws-nitro-enclaves-nsm-api/blob/main/...
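As a rough sketch of the application-side half of (2): once the COSE signature on a Nitro attestation document has been verified against the AWS Nitro root certificate (omitted here), the relying party compares the PCR measurements inside the CBOR payload to the values expected for its audited build. The field layout follows the attestation document format described in the aws-nitro-enclaves-nsm-api docs; the expected values are placeholders.

    # pip install cbor2
    import cbor2

    # Expected measurements for the enclave image you built and audited
    # (placeholder hex strings; Nitro PCRs are SHA-384, i.e. 48 bytes).
    EXPECTED_PCRS = {
        0: "aa" * 48,   # PCR0: enclave image file measurement
        8: "bb" * 48,   # PCR8: signing certificate measurement
    }

    def check_attestation_payload(cose_payload: bytes) -> bool:
        """cose_payload is the CBOR attestation document carried inside the
        COSE_Sign1 structure, after its signature has been verified."""
        doc = cbor2.loads(cose_payload)
        pcrs = doc["pcrs"]   # map of PCR index -> measurement bytes
        return all(pcrs[i].hex() == expected
                   for i, expected in EXPECTED_PCRS.items())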
>No privileged runtime access: PCC must not contain privileged interfaces that might enable Apple site reliability staff to bypass PCC privacy guarantees.
What about other staff and partners and other entities? Why do they always insert qualifiers?
Edit: Yeah, we know why. But my point is they should spell it out, not use wording that is on its face misleading or outright deceptive.
m3kw9 6 hours ago [-]
I will just use it; it's Apple, and all I need is to see the verifiable privacy thing and let the researchers let me know about red flags. You go on Copilot, it says your code is private? Good luck.
z3ncyberpunk 32 minutes ago [-]
Apple has been handing your data over to PRISM since 2012.
danparsonson 6 hours ago [-]
I've got a fully private LLM that's pretty good at coding built right into my head - I'll stick with that, thanks.
gigel82 5 hours ago [-]
I'm glad that more and more people start to see through the thick Apple BS (in these comments). I don't expect them to back down from this but I hope there is enough pushback that they'll be forced to add a big opt-out for all cloud compute, however "private" they make it out to be.
vtodekl 5 hours ago [-]
[dead]
jgalt212 6 hours ago [-]
[flagged]
wutwutwat 5 hours ago [-]
Comments like this are extremely common on any apple post related to photos and honestly it's pretty sus that you and many others will start complaining about a thing nobody even mentioned, because of the thing you're complaining/concerned/pissed about. That's pretty telling imo and nobody ever calls it out. I'm going to start calling it out.
MaKey 5 hours ago [-]
What exactly are you calling out?
dialup_sounds 5 hours ago [-]
They're saying the above user may be a pedophile on the basis that they brought up CSAM on an article that has nothing to do with it.
jgalt212 5 hours ago [-]
Indeed. What exactly is being called out? That someone is expressing concern at the impossibility of a vendor's claims?
nerdjon 5 hours ago [-]
That was never actually released so there is no "still".
Also worth mentioning that if that had shipped, it would have only taken effect if you uploaded images to iCloud.
niek_pas 6 hours ago [-]
They never actually went through with that, did they?
ZekeSulastin 5 hours ago [-]
They indeed shelved the plan {1}, and have also introduced iCloud Advanced Data Protection (their branding for end to end encryption) {2}.
There is still the opt-in Communication Safety {3} that tries to interdict sending or receiving media containing nudity if enabled, but Apple doesn’t get notified of any hits (and assuming I’m reading it right the parent doesn’t even get a notification unless the child sends one!).
1: https://archive.ph/x6z0K (WIRED article)
2: https://support.apple.com/en-us/102651 (Adv Data Protection)
3: https://support.apple.com/en-us/105069 (Comm Safety)
They cannot be trusted any more. These "Private Compute" schemes are blatant lies. Maybe even scams at this point.
Learn more — https://sneak.berlin/20201112/your-computer-isnt-yours/
The core of this article, if I understand it correctly, is that macOS pings Apple to make sure that apps you open are safe before opening them. This check contains some sort of unique string about the app being opened, and then there is a big leap to "this could be used by the government"
Is this the ideal situation? No, probably not. Should Apple do a better job of communicating that this is happening to users? Yes, probably so.
Does Apple already go overboard to explain their privacy settings during setup of a new device (the pages with the blue "handshake" icon)? Yes. Does Apple do a far better job of this than Google or Microsoft (in my opinion)? Yes.
I don't think anyone here is claiming that Apple is the best thing to ever happen to privacy, but when viewed via the lens of "the world we live in today", it's hard to see how Apple's privacy stance is a "scam". It seems to me to be one of the best or most reasonable stances for privacy among all large-cap businesses in the world.
max_ 4 hours ago [-]
Have you read the linked article?
jasongill 2 hours ago [-]
Yes, that's why I commented: the article's core complaint is that the OS's Gatekeeper feature does an OCSP certificate validation whenever an app is launched, with no way to disable it, and that this supposed calling home could leak data about your computer use over the wire.
However, it also has a LOT of speculation, with statements like "It seems this is part of Apple’s anti-malware (and perhaps anti-piracy)" and "allowing anyone on the network (which includes the US military intelligence community) to see what apps you’re launching" and "Your computer now serves a remote master, who has decided that they are entitled to spy on you."
However, without this feature (which seems pretty benign to me), wouldn't the average macOS user be actually exposed to more potential harm by being able to run untrusted or modified binaries without any warnings?
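For anyone curious what such a check transmits, an equivalent OCSP request can be reproduced with the Python cryptography library: it carries a hash of the issuer's name and key plus the serial number of the signing certificate being checked, not the app binary itself (though the responder still learns your IP address and timing). Certificate paths and the responder URL below are placeholders.

    # pip install cryptography requests
    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    with open("developer_cert.pem", "rb") as f:      # certificate being checked
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer_cert.pem", "rb") as f:         # its issuing CA
        issuer = x509.load_pem_x509_certificate(f.read())

    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request_der = builder.build().public_bytes(serialization.Encoding.DER)

    resp = requests.post(
        "http://ocsp.example.com",                   # placeholder responder URL
        data=request_der,
        headers={"Content-Type": "application/ocsp-request"},
    )
    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)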
_boffin_ 6 hours ago [-]
I really don't care at all about this, as the interactions I'd have would be the speech to text, which sends all transcripts to Apple without the ability to opt out.
lukev 1 hours ago [-]
Settings > Privacy and Security > Analytics and Improvements