Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document a flaw like this without ever accessing user data you have no permission to touch. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly, and they date from a time when most access was either direct dial-up or internal. The meaning of "abuse" can be twisted to cover rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
viccis 2 minutes ago [-]
This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or reversibly encrypted (not as digests), because it will email your password to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here: does anyone know whether this is something that should be reported to Apple, whether Apple will do anything, or whether it's even in violation of any standards?
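If it helps anyone assessing this: the tell is that a service able to email you your original password must be storing something reversible. The standard alternative is a one-way digest, which can only be reset, never revealed. A minimal sketch in Python, standard library only (the function names and parameters are mine, purely illustrative, not from any real app):

    import hashlib
    import hmac
    import os

    def store_password(password: str) -> tuple[bytes, bytes]:
        # Derive a one-way digest; the plaintext is never stored,
        # so a "Forgot Password" flow can only reset, never reveal.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        # Constant-time comparison avoids leaking where the digests differ.
        return hmac.compare_digest(candidate, digest)

With storage like this, the only honest recovery flow is a reset link, which is exactly why "we'll email you your password" is a reliable red flag.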
vaylian 20 minutes ago [-]
> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
dwedge 4 minutes ago [-]
Despite being black and white, this site was impossible to read on my eink phone
anonymous908213 19 minutes ago [-]
This is an LLM-generated article, for anyone who might wish to save themselves the "15 min read" labelled at the top. It recounts an entirely plausible but possibly completely made-up narrative of incompetent IT, and contains no real substance.
circuit10 13 minutes ago [-]
How do you know? Some of the text has a slightly LLM-ish flavour to it (e.g. the numbered lists) but other than that I don’t see any solid evidence of that
Edit: I looked into it a bit and things seem to check out: this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don’t have solid proof that it’s not AI generated either, making accusations like this based on no evidence doesn’t seem good at all
a3w 18 minutes ago [-]
ai;dr then? Should be removed from hackernews even?
BizarroLand 8 minutes ago [-]
Proof?
toomuchtodo 13 minutes ago [-]
Can you share how you confirmed this is LLM generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.
I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.
(my experience is roughly a decade in cybersecurity and risk management, ymmv)
anonymous908213 8 minutes ago [-]
The headers alone are a huge giveaway. It spams repetitive sensational writing tropes like "No X, No Y, No Z" and "X. Not Y" numerous times. Incoherent usage of bold type all throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end that reads exactly like helpful LLM guidance. Many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed. Just giving a heads-up for people who prefer not to read generated articles.
Regarding your allergy, my best guess is that it is written with Claude, not ChatGPT, and they have different styles, so you may be sensitive to one but not the other.
refulgentis 13 minutes ago [-]
I'm very sensitive to this but disagree vehemently.
I saw one or two sigils (e.g. a little eager to jump to lists)
It certainly has real substance and detail.
It's not, like, generic LinkedIn post quality.
You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly" and react with "Jeez, what a waste of time, I've heard 1000 of these stories." (A sketch of that pattern follows at the end of this comment.)
I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.
But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.
Additionally, I don't think what people will interpret from what you wrote is necessarily what you meant. Note the other reply at this time: you're so confident and dismissive that they assume you're saying the article should be removed from HN.
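For anyone who hasn't run into this vulnerability class before, here is roughly what that tl;dr describes, as a minimal Python sketch. Everything in it is hypothetical (the names, the IDs, the default password); it shows the general anti-pattern, not the article's actual system:

    import itertools

    DEFAULT_PASSWORD = "changeme"       # identical for every new account
    _next_id = itertools.count(1000)    # auto-incrementing, so IDs are guessable

    accounts: dict[int, dict] = {}

    def register(email: str) -> int:
        # Sequential IDs end up exposed in URLs like /user/1001.
        user_id = next(_next_id)
        accounts[user_id] = {"email": email, "password": DEFAULT_PASSWORD}
        return user_id

    def login(user_id: int, password: str) -> bool:
        user = accounts.get(user_id)
        return user is not None and user["password"] == password

    # An attacker who registers once learns their own ID, then simply walks
    # the neighbours: login(my_id + 1, DEFAULT_PASSWORD), login(my_id + 2, ...),
    # reaching any account whose owner hasn't changed the default yet.

Neither piece is fatal on its own; the combination is what lets a stranger enumerate and log into every newly created account.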
xvxvx 23 minutes ago [-]
I’ve worked in I.T. for nearly 3 decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.
I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training, and experience tell me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
calvinmorrison 19 minutes ago [-]
> By even flagging the issue and the potential fallout, I’ve put my career at risk.
Simple as. Not your company? Not your problem. Notify, move on.
refulgentis 17 minutes ago [-]
> These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
I had a bit of a feral journey into tech: poor upbringing => self-taught college dropout waiting tables => founded an iPad point-of-sale startup in 2011 => sold it => Google from 2016 to 2023
It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.
Put more concretely, a couple of vignettes:
- Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."
- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
xvxvx 3 minutes ago [-]
I would get fired at Google within seconds then. I’m more than happy to shine a light on bullshit like that.
desireco42 20 minutes ago [-]
I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.
So you never report to the actual organization but to the security authority, like you did. They would be better equipped to deal with this, could validate how serious the issue is, and could assign a reward as well.
So you, as the researcher, report your finding and can't be sued or bullied by the organization that is offending in the first place.
ikmckenz 38 seconds ago [-]
That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.
refulgentis 22 minutes ago [-]
Wish they named them. Usually I don't recommend it. But the combination of:
A) in EU; GDPR will trump whatever BS they want to try
B) no confirmation affected users were notified
C) aggro threats
D) nonsensical threats, sourced to a Data Protection Officer w/ seemingly 0 scruples and little experience
Due to B), there's a strong responsibility rationale.
Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.
mzi 8 minutes ago [-]
Dan Europe has a flow as discussed in the article, and both the foundation and the regulated insurance branch are registered in Malta.