Hacker News | aaronwp's comments

Another S3 breach, same as all the rest, except this time it risked a ton of crypto assets


This is the first Stripe breach I've found where the vendor didn't even bother to set a host password. All of their money was at risk. It's a good thing I'm so handsome and honest. They also leaked a bunch of plaintext passwords in an especially galaxy-brained way.


Sega Europe left AWS S3 creds lying around in a server image on downloads.sega.com. I was able to use them to enumerate a bunch of storage, dig out more keys, and mock up a spear phishing attack against the Football Manager forums.

All the keys and services are secure and the breach is closed.
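The root cause described above -- long-lived AWS keys baked into a downloadable server image -- is exactly what offline secret scanning catches before anything ships. A minimal sketch of that idea (the AKIA/ASIA prefixes and the AKIAIOSFODNN7EXAMPLE test key are AWS's own documented examples; everything else here is a simplified illustration, not how vpnoverview actually worked):

```python
import re
from pathlib import Path

# AWS access key IDs are 20 uppercase alphanumerics; long-lived IAM user
# keys start with "AKIA", temporary STS keys with "ASIA".
ACCESS_KEY_RE = re.compile(r"\b((?:AKIA|ASIA)[0-9A-Z]{16})\b")

# Secret access keys are 40 characters of base64-ish text; this pattern is
# deliberately loose and will produce false positives that need review.
SECRET_KEY_RE = re.compile(r"\b([0-9A-Za-z/+=]{40})\b")

def scan_text(text):
    """Return (access_key_ids, secret_key_candidates) found in text."""
    return ACCESS_KEY_RE.findall(text), SECRET_KEY_RE.findall(text)

def scan_tree(root):
    """Scan every file under root, e.g. an unpacked server image."""
    hits = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        ids, secrets = scan_text(text)
        if ids or secrets:
            hits[str(path)] = {"key_ids": ids, "secret_candidates": secrets}
    return hits
```

Real scanners (trufflehog, gitleaks, and the like) add entropy checks and provider-specific validation on top of patterns like these, but even this much, run against a build artifact in CI, would have flagged the image before it was published.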


Is it common, now or historically, to follow up a notification of compromise with self-directed PoC and privilege escalation exercises on the resources of a company with which you're not under contract? My naïve take is that this was a series of well-intentioned but possibly criminal actions used to illustrate a lesson we could all be reminded of from time to time.

Also, the HackerOne page doesn't appear to be claimed by SEGA Sammy, so notices might dead-end there as well.


> Is it common, now or historically

Historically: yes.

Now: no.

> possibly criminal

Sans some sort of formal agreement (which platforms like HackerOne might facilitate), it's definitely criminal. (IMO at least not unethical, to be clear.)

Again, sans some sort of contract, either one-off or platform-based: if SEGA wanted a prosecution, they would almost certainly be able to convince a prosecutor to press charges. The prosecutor would certainly get a guilty verdict. (Or, much more likely, a guilty plea with a bit of prison time and stiff probation.)

This still happens from time to time in much more ambiguous situations. E.g., https://www.nytimes.com/2021/10/15/us/missouri-st-louis-post...

Fortunately, there's a bit of a gentleman's detente among reasonable white hats and reasonable companies. But if you venture much outside of the small set of companies who rely on and have technologists in senior leadership, the story changes fast.


That detente's boundaries may be somewhat vague and impossible to guarantee, but you can broad-brush paint yourself into a safer box with these four principles:

- Don't make humiliating changes to their content

- Don't mess with their userbase

- Don't leave undocumented backdoors

- Don't damage production

If you do your best to comply with those principles, then you can make a strong argument to a judge/jury that your behavior was without malice, which will notably improve your chances of survival if someone decides the usual detente isn't palatable.


I used to do this white hat hacking back in the day: modify a page on the web server, send a link to the admin with the exploit walkthrough.

It's a dangerous game to play now, though. You're basically betting that the company you tested your PoC on would rather avoid the negative PR of filing charges against you, and isn't run by a bunch of non-technical suits who just want to see you do 150 years in Sing Sing.


Yup, this was totally criminal in most jurisdictions. I don’t care if the person intended to help; this kind of vigilante hacking deserves to land you in prison.

You want a bounty? Talk to me before you break into my systems. Because once you do that without my permission, you have proven yourself completely unworthy of being trusted. Why should I believe that you have not installed a rootkit or other tech that you did not subsequently disclose?

You will need to be treated the same as any other criminal. If my insurance gets involved, that also probably means directly assisting with an attempt at criminal prosecution.

So, yeah, brilliant strategy. /s


Not sure why my comment got downvoted, but it very much feels like HN is defending this kind of behavior. This is why we can’t have nice things.


You can't have nice things because you aggressively criminalized the white hats, thus were never warned by them before a black hat took your nice things away.

> Why should I believe that you have not installed a rootkit or other tech that you did not subsequently disclose?

Because doing that and also disclosing your identity would be incredibly stupid?


> You can't have nice things because you aggressively criminalized the white hats

voakbasda even proposed giving a bounty. Is defacing a website and spear phishing the users (as is claimed higher up in the thread) needed for white hats to do their thing? I'm surprised that we aren't all in agreement that this is at least grey hat behavior.


It's unclear to me where the line gets drawn beyond which a zero-tolerance policy applies and maximum criminal penalties are pursued.

The whole world sucks: the companies who are slovenly with our data, the criminals who exploit that data when it is inevitably leaked, the grey hat hackers who "joyride to prove they found your keys" (to use the memorable metaphor from elsethread), the circumstances which make probing for vulnerabilities incredibly risky because one misstep gets you a prison sentence, and the resulting feast of vulnerabilities ripe for criminal exploitation.


Do you understand that, from the perspective of the person suffering an attack, there is absolutely zero difference between a good guy who breaks in without a contract, permission, or other sort of approval and an actual bad guy? The act of committing a crime actively destroys trust.

Come to me with a list of potential vulnerabilities that I can detect and investigate with an open source scanner, and we can talk. Come to me after you've already broken in, and you will never be granted the trust required to work on security systems.

I think this whole scenario is effectively perjury. Once someone has been proven to lie, everything associated with that lie needs to be vetted (or simply thrown out), because they have demonstrated that they cannot be trusted to tell the truth. Does anyone here think that perjury is moral or ethical? Is the scenario presented here really that different?


The "person suffering the attack" is not the only party who suffers from an attack — the individuals whose information gets leaked also suffer when a company hoards toxic data and it inevitably spills.

From the perspective of those individuals, there is a dramatic difference between black hats who exploit their data and grey hats who humiliate the toxic data hoarders.


Do you think those individuals will see the difference?

Also, I would argue there is no gray. A white hat that breaks the law cannot be trusted, because they become indistinguishable from a black hat that is pretending to be a white hat.

This all comes down to a matter of trust, and breaking the law does not build trust in anyone except other criminals. If anything, it erodes trust by demonstrating the willingness to skirt the rules when it suits you.

In this case and context, I see the use of "gray hat" as an attempt to whitewash black hat activities. Once you behave like a black hat, you always need to be treated like a black hat. Trust is like that, particularly when talking about security.


> Do you think those individuals will see the difference?

No more or less than an individual whose home was not robbed because a crime was prevented unbeknownst to them.

However, I believe that the toxic data hoarding companies collectively don't see any difference, and so they don't care if individuals suffer. The suffering of individuals when data is leaked is an externality, and it is only when forced to pay for that externality that companies would start to care.

In this regard, the black hats and the toxic data hoarders both contribute towards undermining the common good. Companies don't care if money disappears mysteriously from the bank accounts of individuals who happen to be their customers — companies just don't want to be embarrassed, as it isn't their money being stolen.

But the grey hats disrupt this state of affairs. They are truly antagonistic to the toxic data hoarders, because they humiliate them, rather than merely use them to steal from somebody else.

This status quo of companies operating unsafely, creating massive but dispersed and plausibly deniable harm, is perfectly legal. But should the public trust these companies? Should the public trust individuals who work hard at these companies to build toxic data stockpiles and cover up intrusions, rather than those who expose the harms these practices bring? Who are the "good guys"?


>You can't have nice things because you aggressively criminalized the white hats

This isn't how a white hat should behave. At the first issue, they should have stopped, reported, and waited. At the very least, a responsible disclosure, followed by a reasonable time, then maybe public disclosure-- or just move on. Continuing to dig and steal information because someone didn't reply is unacceptable.


I concede the point. It's incredibly frustrating because the harm done is minuscule in comparison to the unsafe business practices exposed. The toxic data hoarder will skate, the messenger will be shot, and the public will continue to be victimized by black hats exploiting toxic data stockpiles.


Honeypots etc. make this absolutely true.


How do you know there’s a breach without seeing it?


How would Sega know there are AWS API keys in a public S3 bucket without vpnoverview defacing their careers site? Sega could probably, y'know, look in the S3 bucket at the identified file which contained the keys.

All of the things found could have been investigated by Sega and replicated if vpnoverview just documented how they got access to the info.

You don't have to joyride in a car to show the owner that they dropped their keys.


> You don't have to joyride in a car to show the owner that they dropped their keys.

This is the most accurate analogy I've seen in months, thank you for sharing it!


In this case SEGA, due to their incompetence, lost a bunch of car keys owned by other people, despite claiming that they'll keep them safe (and having a legal obligation to do so under GDPR). So I don't see any problem with publicly exposing them.


A vulnerability or a breach?


Yes, if PII is involved it's common to run an audit like this. In addition to the access keys on the server image, Sega also accidentally published a database export containing PII. In order to write a comprehensive disclosure I have to investigate thoroughly.

And yeah, there's no branding or information on HackerOne. Even if this had been in scope, I would have thought twice about submitting anything. Our publishing standards match HackerOne's ethical disclosure standards.


Did Sega agree to this public disclosure?

Referring to the HackerOne standards, it appears your team violated a couple:

> Respect privacy. Make a good faith effort not to access or destroy another user's data.

> Do no harm. Act for the common good through the prompt reporting of all found vulnerabilities. Never willfully exploit others without their permission.


Publicly disclosing it seems to clearly fall under "Act for the common good through the prompt reporting of all found vulnerabilities", since SEGA's users are the real victims in this situation and have the right to know that SEGA is incapable of keeping their data safe.


This sounds similar to justification used by ransomware groups.


Only under the most carelessly superficial analysis.

Is there any limit to the vast, systemic negligence and enabled criminality that can be excused away into nothingness because the circumstances under which it was made public were problematic?

This isn't a criminal prosecution of the company that was irresponsible with user data. If the people who exposed the negligence screwed up, that doesn't mean we have to act as though the negligence never happened.

Demonizing the messenger while remaining silent about the message is a choice.


Mostly agree. It is unfortunate that the methods used by the messenger add distractions to the situation.

The point I am trying to make is the ends don't absolve the hacker from consequences. Ransomware operators often blame their victims for poor security and frame their actions as security-as-a-service.


> The point I am trying to make is the ends don't absolve the hacker from consequences.

I agree on this point. I see it as analogous to holding your allies to a standard that your adversaries are unwilling to uphold.

In this case I categorize both black hats and toxic data hoarding companies (including their techie apologist employees) as "adversaries" (though I don't assert you agree with my assessment).

> Ransomware operators often blame their victims for poor security and frame their actions as security-as-a-service.

Despicable victim blaming by the very party doing the victimizing.

I understand why advertising a VPN service can be seen as analogous, even if the scale of profiteering is not comparable.

The argument against toxic data hoarding is easier to make when untainted by exploitative profit motive.


> Even if this had been in scope, I would have thought twice about submitting anything.

Sorry, I don't understand. Why would you be hesitant to responsibly disclose it to HackerOne?


They didn't know about it beforehand, but even if you visit the page, it says that SEGA Sammy hasn't claimed it, so it appears unofficial.


Historically, definitely. Currently? Fairly common. What's both historically and currently uncommon, however, is having the sense not to identify yourself while doing it -- people do it anyway, for the h4x0r cred or whatever. Which is of course childishly idiotic, but makes my job a whole lot easier.

In my experience, even if you're not under any such contract, are going to report the compromise in complete good faith, and have done no damage, you are far better off doing so as anonymously as possible. Nobody likes to be embarrassed, and it's a lot simpler for a corporation, with a stock price and public image to think about, to pin the whole situation on those damn hackers than to own up to even the slightest degree of incompetence. (Typing at work in sort of a hurry, so please forgive grammatical issues.)


Should have just left it at that and collected the bug bounty; defacing a site for a proof of concept and telling everyone pretty much makes you ineligible for any white hat program. Can I get dibs on your flat while you're in the... camp?


> dig out more keys

I guess that if they leave keys lying around, it is likely there are more.
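For what it's worth, the "there are probably more" instinct is why responders triage leaked AWS identifiers by their four-character prefix before rotating anything: some are credentials that must be revoked immediately, others are mere resource IDs. A small sketch, assuming a subset of the unique-ID prefixes documented in the AWS IAM User Guide (the response advice strings are my own editorializing):

```python
# Unique-ID prefixes from the AWS IAM User Guide ("IAM identifiers").
# Only a subset is listed here; anything else is flagged as unknown.
PREFIXES = {
    "AKIA": "long-lived access key (rotate and revoke immediately)",
    "ASIA": "temporary STS key (expires on its own, but audit the session)",
    "AIDA": "IAM user unique ID (an identifier, not a credential)",
    "AROA": "IAM role unique ID (an identifier, not a credential)",
}

def triage(identifier):
    """Classify a leaked AWS identifier by its four-character prefix."""
    return PREFIXES.get(identifier[:4], "unknown prefix -- treat as sensitive")
```

The asymmetry matters in incidents like this one: an AKIA key found in a server image stays valid until someone deactivates it, whereas an ASIA key would have expired on its own hours later.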


> I was able to use them to enumerate a bunch of storage, dig out more keys

That's unethical and likely criminal without explicit testing authorization (which it appears you didn't have).

I wonder if there are any examples of "researchers" being sued/prosecuted for stunts like this.


This would be awesome as a blog post if you ever want to go into detail on how you executed each step.




stay tuned


if anything my terrible merges usually detract from it

