Employees regularly paste company secrets into ChatGPT (theregister.com)
Citizen8396 122 days ago [-]
This is a more general problem: people will sign up for, install, and provide data to just about anything that promises to be useful.
ewa-szyszka 122 days ago [-]
Who needs corporate espionage when employees are literally Ctrl+C, Ctrl+V-ing company secrets into a publicly accessible chatbot? We've automated the data breach.
aitchnyu 122 days ago [-]
I noticed the Mac App Store shows imposters labeled "Powered by ChatGPT" when I search for the ChatGPT desktop app.
bdcravens 122 days ago [-]
In part, this was due to apps being created before OpenAI released their official apps.
aitchnyu 122 days ago [-]
I've been urging my friend to be the hero: set up Sonnet 4.5, Qwen3 235B, or DeepSeek R1/V3 on AWS Bedrock, let employees point their IDEs and chatbots at that endpoint, and don't let the data leave their cloud. They're priced the same as their public counterparts.
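A minimal sketch of what calling such an endpoint could look like with boto3's Converse API; the model ID and region below are illustrative assumptions, not anything actually deployed:

    # Minimal sketch, assuming AWS credentials and Bedrock model access are
    # already configured. The model ID and region are illustrative assumptions.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",  # assumed ID; check your region's catalog
        messages=[{"role": "user", "content": [{"text": "Summarize this internal design doc: ..."}]}],
        inferenceConfig={"maxTokens": 1024},
    )
    print(response["output"]["message"]["content"][0]["text"])

IDE plugins and internal chatbots then talk to this instead of the public APIs, so prompts stay inside the company's own AWS account.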
coredog64 122 days ago [-]
Unless something has changed recently, Bedrock has significant limits on input sizes that are frequently lower than those supported by the underlying model.
master_crab 122 days ago [-]
As of a couple of months ago you could use the 1 million token limit for Sonnet 4. Granted, it was a beta feature that you had to explicitly set (not sure if it's GA now).
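If it's still opt-in, it would presumably be passed through the request body; a rough sketch, where the beta flag value and model ID are from memory and should be treated as assumptions to verify against current AWS docs:

    # Sketch only: requesting the long-context beta through Bedrock's Converse
    # API. The anthropic_beta value and model ID are assumptions; verify them
    # against current AWS/Anthropic documentation.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    long_prompt = open("big_log_dump.txt").read()  # illustrative long input

    response = bedrock.converse(
        modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # assumed Sonnet 4 ID
        messages=[{"role": "user", "content": [{"text": long_prompt}]}],
        additionalModelRequestFields={"anthropic_beta": ["context-1m-2025-08-07"]},
    )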
HardwareLust 122 days ago [-]
Well yeah, how else are you supposed to use it to do your work for you?
s3r3nity 122 days ago [-]
With so many recent leadership hires and acqui-hires from Facebook's Growth Team, y'all are naive if you think OpenAI _isn't_ using this business data for its own ends, and/or doesn't intend to lean further in that direction.

Ex: if you’re a Statsig user, OpenAI now knows every feature you are releasing, content you produce, telemetry, etc.

tobias2014 121 days ago [-]
Meanwhile, companies exist that have built what are essentially layers in front of chatbots: they mask or filter sensitive data, forward the masked query, then unmask the answer when handing it back to the user (e.g. https://www.liminal.ai/).

Ideally you shouldn't paste sensitive information into the chat in the first place. But when such companies can guarantee certain kinds of compliance, it may be better to offer this than to let people use chatbots at work completely uncontrolled.
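A toy sketch of that mask-then-unmask flow; the patterns and the send_to_llm callable are placeholders invented for illustration, not how liminal.ai or any particular vendor actually works:

    # Toy masking layer in front of a chatbot: swap sensitive substrings for
    # placeholders before the query leaves the network, restore them in the
    # answer. Patterns and send_to_llm() are illustrative placeholders only.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def mask(text):
        replacements = {}
        for label, pattern in PATTERNS.items():
            for i, match in enumerate(pattern.findall(text)):
                token = f"<{label}_{i}>"
                replacements[token] = match
                text = text.replace(match, token)
        return text, replacements

    def unmask(text, replacements):
        for token, original in replacements.items():
            text = text.replace(token, original)
        return text

    def proxied_chat(user_query, send_to_llm):
        masked_query, replacements = mask(user_query)
        masked_answer = send_to_llm(masked_query)  # only placeholders leave the network
        return unmask(masked_answer, replacements)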

butlike 122 days ago [-]
On the one hand I hear time and time again: it's not the idea, it's the implementation that matters.

On the other hand, people freak out about uploading secrets to a tool/platform.

Are these secrets REALLY that much of a 'cornerstone' for the survival of the company, or is it maybe just a <little> wishful thinking from smaller companies convincing themselves they've made some sort of secret sauce?

RadiozRadioz 122 days ago [-]
The first paragraph of the article states:

> Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers

Yes, these are definitely high-value secrets that must not be leaked. Leaking them can sink a company through legal or reputational damage.

datadrivenangel 122 days ago [-]
I know of a CTO who did this right after his org rolled out rules against it... and when he asked, IT said it was fine...
jasonthorsness 122 days ago [-]
At some level this just puts a huge burden on OpenAI. Because ChatGPT is so widely used, if something leaks everyone might put the blame predominantly on OpenAI rather than all the employees using it (disclaimer in case my employer is reading; I don't paste secrets into ChatGPT :P).
msarrel 122 days ago [-]
No, I don't believe this. Every corporate employee I know places the security and privacy of corporate assets as paramount. I can't believe anyone would subvert security controls to make their jobs easier. In case you couldn't tell, that was sarcasm.
Bender 122 days ago [-]
That sounds like a management-friendly business opportunity. Sell corporate accounts that allow uploading DLP (data loss prevention) rules. Someone uploads your company secrets, ChatGPT makes a snarky reply to the person and sends the data to /dev/null. I could suggest even more dystopian measures, like ChatGPT using an HR API to automate off-boarding after repeated incidents. Or companies could get their data-science/big-data teams to write code in-house to do the same thing employees are trying to get ChatGPT to do for them.
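A crude sketch of the uploadable-rules idea, just to make it concrete; the rule format, example patterns, and incident counter are all invented for illustration:

    # Crude illustration: a company uploads regex DLP rules, and any prompt
    # that matches gets dropped with a snarky reply instead of being forwarded.
    # Rules, patterns, and incident handling are invented for illustration.
    import re

    COMPANY_RULES = [
        ("internal project codename", re.compile(r"\bproject\s+blue\s+heron\b", re.I)),
        ("AWS secret access key", re.compile(r"\b[A-Za-z0-9/+=]{40}\b")),
    ]

    def screen_prompt(prompt, user_id, incidents):
        for description, rule in COMPANY_RULES:
            if rule.search(prompt):
                incidents[user_id] = incidents.get(user_id, 0) + 1  # fodder for that HR API
                return None, f"Nice try. That looks like a {description}; sending it to /dev/null."
        return prompt, None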
craftkiller 122 days ago [-]
I think the more likely response is that companies simply need to pick their favorite LLM provider, establish a contract with that provider to keep their data private, and then block the other LLM providers via both firewall rules and company policy. Trying to catch it all with DLP rules is like trying to catch water with a colander.
Bender 122 days ago [-]
I could see this working if the LLM provider logs all employee queries and someone reviews them. Otherwise the DLP problem just moves to that dedicated provider: the PII and intellectual property still end up with a third party, and that's still a reportable incident, since it's legally a third-party provider. The mutually binding contract would have to be compatible with the B2B and other third-party contracts referenced in SOC 1/SOC 2 and related audits.
bwfan123 122 days ago [-]
so, I can have auto-completion of my API key?