Employees regularly paste company secrets into ChatGPT (theregister.com)
Citizen8396 4 hours ago [-]
This is a more general problem: people will sign up for, install, and provide data to just about anything that promises to be useful.
aitchnyu 4 hours ago [-]
I've been urging my friend to be the hero: set up Sonnet 4.5/Qwen3 235B/DeepSeek R1/V3 on AWS Bedrock, let employees point their IDEs and chatbots at that endpoint, and don't let the data leave the company's cloud. They are priced the same as their public counterparts.
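Something like this minimal sketch with boto3 (the region and model ID are placeholders; substitute whatever your account actually has enabled):

    import boto3

    # Talk to a model hosted in your own AWS account via Bedrock.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": "Summarize this design doc."}]}],
        inferenceConfig={"maxTokens": 1024},
    )

    print(response["output"]["message"]["content"][0]["text"])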
coredog64 3 hours ago [-]
Unless something has changed recently, Bedrock has significant limits on input sizes that are frequently lower than those supported by the underlying model.
ewa-szyszka 5 hours ago [-]
Who needs corporate espionage when employees are literally Ctrl+C, Ctrl+V-ing company secrets into a publicly accessible chatbot? We've automated the data breach.
aitchnyu 4 hours ago [-]
I noticed the Mac App Store shows imposters labeled "Powered by ChatGPT" when I search for the ChatGPT desktop app.
bdcravens 2 hours ago [-]
In part, this is because those apps were created before OpenAI released its official one.
jasonthorsness 56 minutes ago [-]
At some level this just puts a huge burden on OpenAI. Because ChatGPT is so widely used, if something leaks, everyone might put the blame predominantly on OpenAI rather than on all the employees using it (disclaimer, in case my employer is reading: I don't paste secrets into ChatGPT :P).
s3r3nity 3 hours ago [-]
With so many recent leadership hires / acqui-hires with Facebook Growth Team backgrounds, y'all are naive if you think OpenAI _isn't_ using this business data for its own ends… and/or intends to lean more heavily in this direction.

Ex: if you’re a Statsig user, OpenAI now knows every feature you are releasing, content you produce, telemetry, etc.

butlike 2 hours ago [-]
On the one hand I hear time and time again: it's not the idea, it's the implementation that matters.

On the other hand, people freak out about uploading secrets to a tool/platform.

Are these secrets REALLY that critical to the company's survival, or is it maybe just a little wishful thinking from smaller companies convincing themselves they've made some sort of secret sauce?

RadiozRadioz 1 hour ago [-]
The first paragraph of the article states

> Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers

Yes, these are definitely secrets of high value that must not be leaked; they can sink a company through litigation or reputational damage.

HardwareLust 5 hours ago [-]
Well yeah, how else are you supposed to use it to do your work for you?
bwfan123 56 minutes ago [-]
So, I can have auto-completion of my API key?
datadrivenangel 4 hours ago [-]
I know of a CTO who did this right after his org rolled out rules against it... and then he asked and IT said it was fine...
Bender 4 hours ago [-]
That sounds like a management-friendly business opportunity. Sell corporate accounts that allow uploading DLP (data loss prevention) rules: someone uploads your company secrets, ChatGPT makes a snarky reply to the person and sends the data to /dev/null. I could suggest even more dystopian measures, like ChatGPT using an HR API to automate off-boarding after repeated incidents. Or companies could get their data-science and big-data teams to write code in-house to do the same things employees are trying to get ChatGPT to do for them.
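For illustration, a minimal sketch of one such rule, assuming a gateway that can inspect prompts before they leave (a regex plus a Luhn check standing in for a real DLP ruleset):

    import re

    # Candidate card numbers: 13-19 digits, optionally separated by spaces/dashes.
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    def luhn_ok(digits: str) -> bool:
        # Standard Luhn checksum: double every second digit from the right.
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:
                d = d * 2 - 9 if d > 4 else d * 2
            total += d
        return total % 10 == 0

    def should_block(prompt: str) -> bool:
        for match in CARD_RE.finditer(prompt):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 19 and luhn_ok(digits):
                return True  # snarky reply to the user, data to /dev/null
        return False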
craftkiller 1 hour ago [-]
I think the more likely response is that companies simply need to pick their favorite LLM provider, establish a contract with that provider to keep their data private, and then block the other LLM providers via both firewall rules and company policy. Trying to catch it all with DLP rules is like trying to catch water with a colander.
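As a sketch of the firewall half, assuming an egress proxy you can hook a host check into (the domains below are illustrative, not a complete blocklist):

    # Only the contracted provider's endpoint gets out; known public
    # LLM endpoints are refused. Domains here are examples only.
    ALLOWED_HOSTS = {"bedrock-runtime.us-east-1.amazonaws.com"}
    BLOCKED_SUFFIXES = (".openai.com", ".anthropic.com", "api.mistral.ai")

    def allow_egress(host: str) -> bool:
        if host in ALLOWED_HOSTS:
            return True
        return not host.endswith(BLOCKED_SUFFIXES)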
Bender 1 hour ago [-]
I could see this working if the LLM provider logs all queries by the employees and someone reviews them. Otherwise the DLP problem just moves to that dedicated provider: PII and intellectual property now flow to the LLM vendor, and it's still a reportable incident, since legally they are still a third-party provider. The mutually binding contract would have to be compatible with the B2B and other third-party contracts covered in SOC 1/SOC 2 and related audits.