On-silicon real-time AI compute governance from Nvidia, Intel, EQTY Labs (eqtylab.io)
bee_rider 2 days ago [-]
I don’t know what dialect this is written in, but can anybody translate it to engineer? What type of problem are they trying to solve and how are they going about it? (Is this DRM for AIs?)
ben_w 2 days ago [-]
Allow me:

"Are you running the AI that you thought you were running, or a rip-off clone that will sneakily insert adverts for Acme, your one-stop-shop for roadrunner-related explosives, traps, and fake walls, into 1% of outputs? Here's how you can be sure."

ramoz 2 days ago [-]
Allow me to help extract the general value:

(Your case is not the direct point, but the measures are part of strengthening the supply chain [1]. Other applications include strengthening privacy [2].)

Verifiability measures are designed to transform privacy and security promises from mere assurances into independently checkable, technical guarantees. _Generally achieving_: verification of claims (from governance/regulation to model provenance); cryptographic attestation ensuring code integrity; enforceable transparency through append-only logs and tooling; not blind trust but verifiable trust; and a structured environment for ongoing scrutiny and improvement.

[1] https://www.rand.org/pubs/research_reports/RRA2849-1.html

[2] https://security.apple.com/blog/private-cloud-compute/
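For a concrete feel of the "append-only logs" part, here is a minimal hash-chain sketch in Python. The event fields are made up for illustration and this is not EQTY's actual log format; real systems layer Merkle proofs and signatures on top.

    import hashlib
    import json

    def entry_hash(prev_hash: str, payload: dict) -> str:
        """Hash an entry together with the previous entry's hash (hash chaining)."""
        blob = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def append(log: list, payload: dict) -> None:
        prev = log[-1]["hash"] if log else "0" * 64
        log.append({"prev": prev, "payload": payload, "hash": entry_hash(prev, payload)})

    def verify(log: list) -> bool:
        """Recompute the chain; tampering with any earlier entry breaks every later hash."""
        prev = "0" * 64
        for e in log:
            if e["prev"] != prev or e["hash"] != entry_hash(prev, e["payload"]):
                return False
            prev = e["hash"]
        return True

    log = []
    append(log, {"event": "model-release", "model_sha256": "abc123...", "dataset": "corpus-v7"})
    append(log, {"event": "eval-run", "model_sha256": "abc123...", "benchmark": "mmlu"})
    assert verify(log)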

drak0n1c 2 days ago [-]
People are starting to worry about generative AI/LLM training data - copyright, bias, knowledge gaps, bad/outdated info. They are also worried about the black box of hardcoded "alignment" layers on top - which can and have introduced similar issues. Cryptographic proof of a model’s training data provenance and its compute would be reassuring to large enterprises, governments, and conscientious users who are hesitant to embrace AI.

You don't want to accidentally use part of a biased social media chat LLM to summarize legislation or estimate results of a business plan.

jasonsb 2 days ago [-]
Had to use bullshitremover dot com for this one. This is the translation:

> Blah blah, AI is the future, trust it, crypto, governance, auditing, safer AI.

aftbit 18 hours ago [-]
I got:

> Verifiable Compute is a new AI framework that uses hardware-based crypto to verify AI models and data. It lets companies audit and control their AI systems to make them more secure and compliant. Intel and Nvidia are supporting it.

lawlessone 2 days ago [-]
yeah is that what "Hedera Consensus Platform" is?

Wonder what happens if that fails.

ramoz 2 days ago [-]
You lose some benefits around decentralized trust & temporal anchoring, but not all. DLTs are established in software supply chains and are being adapted to the AI supply chain (see below). It's not indicative of a "crypto" play.

"Sigstore’s public signing ledgers form the foundation of trust for SLSA provenance" - https://storage.googleapis.com/gweb-research2023-media/pubto...

https://docs.sigstore.dev/logging/overview/

https://github.com/sigstore/rekor
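Roughly, "signing a provenance statement" boils down to something like the Python sketch below (using the pyca/cryptography package). The statement fields are invented for illustration; Sigstore's real flow uses OIDC-bound certificates and a Rekor inclusion proof rather than a raw keypair.

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    model_bytes = b"...model weights..."   # stand-in for the downloaded artifact

    # Publisher side: hash the artifact and sign a small provenance statement.
    key = Ed25519PrivateKey.generate()
    statement = json.dumps({
        "artifact_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "training_data": "licensed-corpus-v3",   # illustrative claim
        "builder": "example-ci-pipeline",
    }, sort_keys=True).encode()
    signature = key.sign(statement)

    # Consumer side: verify the signature, then check the artifact matches the claim.
    try:
        key.public_key().verify(signature, statement)
        claimed = json.loads(statement)["artifact_sha256"]
        print("ok" if hashlib.sha256(model_bytes).hexdigest() == claimed else "artifact mismatch")
    except InvalidSignature:
        print("statement tampered with")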

mistrial9 2 days ago [-]
software developer's rule #32 - "you are not Google"
th0ma5 2 days ago [-]
Some have speculated they want to command regulations so complex that only they can adhere to them.
JSR_FDED 2 days ago [-]
If the bank rejects your loan application they will be able, when challenged, to say “you were rejected by this particular model, trained on this particular data which was filtered for bias in this particular way”.

Similarly the tax authorities will be able to say why they chose to audit you.

The university will be able to say why you were or weren’t admitted.
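In code terms, that kind of answer implies the decision ships with a receipt along these lines. The structure and field names here are hypothetical, just to make the idea concrete:

    import hashlib
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionReceipt:
        """Hypothetical audit record returned alongside an automated decision."""
        model_sha256: str              # exact model weights that produced the decision
        training_manifest_sha256: str  # digest of the training-data provenance record
        bias_filter_policy: str        # which filtering/fairness policy version was applied
        inputs_sha256: str             # digest of the applicant data actually scored
        decision: str

    receipt = DecisionReceipt(
        model_sha256="e3b0c442...",
        training_manifest_sha256="9f86d081...",
        bias_filter_policy="fair-lending-policy-v2",
        inputs_sha256="2c26b46b...",
        decision="rejected",
    )
    # Hash the receipt and log it so the explanation can't be rewritten after the fact.
    receipt_digest = hashlib.sha256(
        json.dumps(asdict(receipt), sort_keys=True).encode()
    ).hexdigest()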

malwrar 2 days ago [-]
I’m trying to decide if I should be concerned about the safety of general-purpose computing with such technologies sneaking into our compute. Verifying compute workloads is one thing, but I can’t find information on what kind of regulatory compliance controls this addition enables. I assume it is mostly just operation counting and other audit logging discussed in AI safety whitepapers, but even that feels disturbing to me.

Also, bold claim: silicon fabrication scarcity is artificial and will be remedied shortly after Taiwan is invaded by China and the world suddenly realizes it needs to acquire (and can profit from acquiring) this capability. Regulatory approaches based on hardware factors will probably fail in the face of global competition on compute hardware.

ramoz 2 days ago [-]
Reads as compliance controls being embedded into the code, with integrated gates to halt execution or verify controls are met at runtime, providing receipts alongside computed outputs. This is generally oriented toward multi-party, confidential, sensitive computing domains. As AI threat models develop, general compliance checks during training, benchmarking, etc. become more relevant as security posture requires. (Toy sketch after the refs below.)

ref:

https://openai.com/index/reimagining-secure-infrastructure-f...

https://security.apple.com/blog/private-cloud-compute/

https://arxiv.org/html/2409.03720v2
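The "gates to halt execution" reading might look roughly like this toy sketch - not the actual framework; the pinned digest, control names, and run_model stub are all illustrative:

    import hashlib
    import json

    EXPECTED_MODEL_SHA256 = hashlib.sha256(b"approved-weights").hexdigest()  # pinned by the operator
    REQUIRED_CONTROLS = {"encrypted_storage", "no_data_egress"}

    def run_model(weights: bytes, prompt: str) -> str:
        return f"echo: {prompt}"          # stand-in for the real inference call

    def gated_inference(weights: bytes, attested_controls: set, prompt: str):
        """Halt unless the pinned model and required controls are attested; emit a receipt."""
        if hashlib.sha256(weights).hexdigest() != EXPECTED_MODEL_SHA256:
            raise RuntimeError("model does not match pinned digest")
        missing = REQUIRED_CONTROLS - attested_controls
        if missing:
            raise RuntimeError(f"required controls not attested: {sorted(missing)}")
        output = run_model(weights, prompt)
        receipt = {
            "model_sha256": EXPECTED_MODEL_SHA256,
            "controls": sorted(attested_controls),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        return output, json.dumps(receipt, sort_keys=True)

    out, rcpt = gated_inference(b"approved-weights",
                                {"encrypted_storage", "no_data_egress"}, "hello")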

malwrar 2 days ago [-]
Thanks for the reading.
drak0n1c 2 days ago [-]
Cryptographic proof that the model you’re using is the one you think it is, along with proof it was not trained on biased or copyrighted data, is the main feature here. Think TLS certs when visiting webpages, or SSH host keys when connecting to servers.
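The SSH analogy maps fairly directly onto trust-on-first-use pinning of a model digest, something like this illustrative sketch (known_models.json plays the role of ~/.ssh/known_hosts):

    import hashlib
    import json
    import os

    KNOWN_MODELS = "known_models.json"   # illustrative; the AI equivalent of ~/.ssh/known_hosts

    def check_model(name: str, weights: bytes) -> bool:
        """Pin the digest on first use; flag any later change, like an SSH host-key check."""
        digest = hashlib.sha256(weights).hexdigest()
        known = json.load(open(KNOWN_MODELS)) if os.path.exists(KNOWN_MODELS) else {}
        if name not in known:
            known[name] = digest         # trust on first use
            with open(KNOWN_MODELS, "w") as f:
                json.dump(known, f)
            return True
        return known[name] == digest     # False = not the model you think it is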
amelius 2 days ago [-]
Just give us untethered AI models that we can run on our own hardware.
lawlessone 2 days ago [-]
> Verifiable Compute represents a significant leap forward in ensuring that AI is explainable, accountable.

This is like saying the speedometer on a car prevents speeding.

drak0n1c 2 days ago [-]
Given the amount of worry about training data provenance and associated bias/copyright issues, such a cryptographic proof of training would certainly act as a "speedometer" that disincentivizes speeding. At least among models that large enterprises, governments, and conscientious users are deciding to use.
ramoz 2 days ago [-]
It’s a poor comparison.

Regardless, it’s about a trusted observation - in your metaphor, something to help you prove in court that you weren’t actually speeding.

Apple deploys verifiable compute in Private Cloud Compute to ensure transparency as a measure of trust, and surely as a method of prevention, direct or not (depending on whether they use verifiability measures as execution gates).

moffkalast 2 days ago [-]
No, this is like saying adding a speedometer to your car makes it, as an inanimate object, personally liable for going fast if you press on the accelerator.
vouaobrasil 2 days ago [-]
Verifiable compute doesn't do much good if the people doing the verifying and securing are making wild profits at the expense of the rest of us. This technology is more about making sure nothing horrible happens in enterprises rather than protecting people from AI, even if "safety" is claimed.
Animats 2 days ago [-]
Yes, it's DRM for AI models. The idea seems to be that approved hardware will only run signed models.

This doesn't end well. It's censorship on the user's machine. There will have to be multiple versions for different markets.

- MAGA (US red states)

- Woke (US blue states)

- Xi Thought (China)

- Monarchy criticism lockout (Thailand)

- Promotion of the gay lifestyle lockout (Russia)

ramoz 1 day ago [-]
> approved hardware will only run signed models

This is true. But this is an ability of the hardware owners. Intel and NVIDIA are not setting the rules - and there is a real commitment to that because it’s open source.

It's also confidential. Data, code, rules, ... all of these are processed together in secure enclaves. It's up to the hardware owner/users to determine that processing and to stamp/verify what they want.
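In enclave terms, that stamping/verifying amounts to the owner pinning an expected code measurement and refusing anything else. A schematic owner-side check (not a real TEE SDK call; the values are illustrative):

    import hashlib

    # Measurement the owner expects the enclave to report (code + config).
    EXPECTED_MEASUREMENT = hashlib.sha256(b"inference-server-v1.4 + policy-v2").hexdigest()

    def release_data_to(reported_measurement: str) -> bool:
        """Only hand data/keys to an enclave whose reported measurement matches."""
        return reported_measurement == EXPECTED_MEASUREMENT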

ramoz 1 day ago [-]
BTW it’s also a measure to ensure your own standards are met in remote execution - e.g. where you can ensure your data is processed privately, or your end of a contract is adhered to (something that we think resonates with an agentic/autonomous future).

"How can we ensure that the system enforces the rules that I want"

kfrzcode 2 days ago [-]
This misunderstands what Verifiable Compute actually does. VC isn't about restricting execution; it's about proving properties of computation. The key difference is that VC lets you verify what happened, not control what can happen.

Think SSH keys, not DRM. SSH lets you verify you're talking to the right server without restricting which servers exist. Similarly, VC lets you verify properties of AI models (like training data lineage or inference characteristics) without restricting which models can run.

The regional censorship concern isn't relevant here since VC doesn't enable content restriction. It's a mathematical verification tool, not an enforcement mechanism.

chii 2 days ago [-]
> lets you verify what happened

> It's a mathematical verification tool, not an enforcement mechanism.

enforcement comes from the rubber hose. The maths is what makes the enforcement possible.

Unless you are able to prevent attestations from being made with your machine (presumably against your will).

ramoz 1 day ago [-]
The attestations in this case are completely determined by you.
henning 2 days ago [-]
Grey on black text, cookie shit in the corner, their stupid menu overlaid over the text, their stupid announcement banner, giant quotes about the company larger than the actual press release. I fucking hate web design in 2024.
transfire 2 days ago [-]
Gearing up to put a hefty price on AGI. You can only run it if you have a very costly certificate which probably requires detailed security clearances as well.