Hacker News | cedws's comments

I watched a talk from Bjarne Stroustrup at CppCon about safety, and it was pretty secondhand-embarrassing watching him try to pretend that C++ has always been safe and that safety mattered to them all along, before Rust came along.

Well, there has been a long campaign against manual memory management - well before Rust was a thing. And along with that, a push for less use of raw pointers, fewer index loops, etc. - all measures which, when adopted, significantly reduce memory safety hazards. Following the Core Guidelines also helps, as does using spans. Compiler warnings have improved, as has static analysis, in a long process preceding Rust.

Of course, this is not completely guaranteed safety - but safety has certainly mattered.


>Following the Core Guidelines also helps

Yes, this is what Stroustrup said, and it makes me laugh. IIRC he phrased it with more of a 'we had safety before Rust' attitude. It also misses the point: safety shouldn't be opt-in or require memorising a rulebook. If safety is that easy in C++, why is everyone still sticking their hand in the shredder?


You're "moving the goal posts" of this thread. Safety has mattered - in C++ and in other languages as well, e.g. with MISRA C.

As for the Core Guidelines - most of them are not about safety; and they are not meant to be memorized, but are a resource to consult when relevant, and something to base static analysis on.


Because they don’t work with it. It’s as simple as that. I don’t trust people who don’t work with a terminal these days; the further they get from a terminal, the less grounded their views are. They rely on hearsay and CEO hype. To make matters worse, they say whatever they think will earn them a bonus/promotion, which leads to a cascade of BS down the chain.

I seriously doubt Satya Nadella is sitting down for hours a day to use Copilot to draft detailed documents. He's being fed fantastical stories by his lackeys telling him what he wants to hear.


I've seen this misconception so many times in open source projects - commits just bumping the version in go.mod to 'get the latest performance and security improvements.' Like no, that's not how it works - you just made your code compile with fewer compiler versions for no reason.

I think the directive could have been named better though, maybe something like min_version.
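For context, the directive in question sits in the module file itself (the module path below is hypothetical; note that since Go 1.21 the directive also participates in toolchain selection, but it still declares a minimum, not an optimization target):

```go
// go.mod
module example.com/mymod // hypothetical module path

// Declares the minimum Go language version this module requires.
// Bumping it does not pull in "performance and security improvements";
// it only forbids building with older toolchains.
go 1.21
```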


Are you able to share if there's a general trend behind the outages? Do you often hit capacity, or do you budget to have headroom?

Yes, the general trend is the unprecedented growth that we've seen. Typically one would have some time in advance to re-engineer the systems to support the increase in traffic and users. But we're dealing with very compressed timelines, and while most of the time we're able to fix the issues beforehand, sometimes we have to do them in production. Sorry for that.

The US under Trump is behaving exactly like a country with intentions of damaging the Western order and antagonising enemies to open new front lines. I think writing off Trump's actions as stupid is wrong, he's malicious.

Also making new enemies among their own allies. That can't just be a side effect, given how efficiently he is doing it.

GitHub, npm, PyPi, and other package registries should consider exposing a firehose to allow people to do realtime security analysis of events. There are definitely scanners that would have caught this attack immediately, they just need a way to be informed of updates.

PyPI does exactly that, and it's been very effective. Security partners can scan packages and use the invite-only API to report them: https://blog.pypi.org/posts/2024-03-06-malware-reporting-evo...

PyPI is pretty best-in-class here and I think that they should be seen as the example for others to pursue.

The client side tooling needs work, but that's a major effort in and of itself.


It is not effective if a simple base64 encode is enough to bypass it. If Claude can trivially find that it is malicious, then PyPI is being negligent.
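To be fair, a single base64 layer is also cheap for a scanner to peel off. A minimal sketch of that idea (the keyword list, regex threshold, and recursion depth here are illustrative assumptions, not any real scanner's rules - production scanners use far richer rule sets such as YARA signatures):

```python
import base64
import re

# Naive keyword heuristic purely for illustration.
SUSPICIOUS = ("eval(", "exec(", "subprocess", "urllib.request")

# Long runs of base64-alphabet characters are decode candidates.
B64_RE = re.compile(r"[A-Za-z0-9+/=]{24,}")

def flags(source: str, depth: int = 3) -> bool:
    """Return True if source, or any base64 layer inside it, looks suspicious."""
    if any(keyword in source for keyword in SUSPICIOUS):
        return True
    if depth == 0:
        return False
    for blob in B64_RE.findall(source):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8", "ignore")
        except Exception:
            continue  # not actually valid base64; skip
        if flags(decoded, depth - 1):
            return True
    return False
```

The recursion handles attackers who stack multiple encode layers, which is exactly the cat-and-mouse dynamic discussed below.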

The package in question was live for 46 minutes. It generally takes longer than that for security partners to scan and flag packages.

PyPI doesn't block package uploads awaiting security scanning - that would be a bad idea for a number of reasons, most notably (in my opinion) that it would be making promises that PyPI couldn't keep and lull people into a false sense of security.


It should not let people download unscanned dependencies without a warning asking the user to override and use a potentially insecure package. If a security fix is critical enough to need to bypass this delay (spoiler: realistically it is not actually that bad for a security fix to be delayed), they can work with the PyPI security team to do a quicker manual review of the change.

The whole point is that this would give a false sense of security. Scanned dependencies aren't secure, they're just scanned by some tools which might catch some issues. If you care about security, you need to run those same scans on your side, perhaps with many more rules enabled, perhaps with multiple tools. PyPI, understandably, does NOT want to take any steps to make it seem like they promise their repo doesn't contain any malware. They make various best effort attempts to keep it that way, but the responsibility ultimately falls on you, not on them.

sadly I still worry about that. An install fails once, you hard-code the --force flag in all your CI/CD jobs, and we are back in the same place again. I am not sure what the answer is, though.

Adding a hardcoded flag is not the same as asking the user if they want potential malware. If CI/CD is broken they should revert the change to pinned dependencies instead of trying to install a bleeding edge version of a new dependency that hasn't been scanned yet.

I don't understand why this would be an issue. Firstly, you could just pin your dependencies, but even if you don't, couldn't the default behaviour be to just install the newest scanned version?

What happens then if the security scanners say something is safe and it turns out not to be?

I don't think PyPI should be in the business of saying if a piece of software is safe to install or not.


Then it will be downloadable, and it's up to your own security scanners to catch it. If you find it, it should be reported to PyPI, and the scanner should be improved to catch that kind of bypass the next time it comes around. In such a world I don't think PyPI is acting negligently.

That's really not very different from what we have right now. PyPI works with scanners which catch a whole lot of malware and are getting better all the time.

I think PyPI suggesting that software is safe would be a step down from this, because it makes promises that PyPI can't keep and would encourage a false sense of security.


It's less about suggesting that it's safe, and more about avoiding pushing out arbitrary code to thousands of people without checking if you are pushing out malicious code to all of those people. It is the responsible thing to do.

>That's really not very different from what we have right now.

What I'm advocating for is different enough to have stopped this malware from being pushed out to a bunch of people which at the very least would raise the bar of pulling off such an attack.


I realize this is controversial (and many Python folks would call it antithetical). But I keep wondering if requiring a small payment for registering and updating packages would help. The money could go to maintaining PyPI as well as automated AI analysis. Folks who really couldn't afford it could apply for sponsorship.

Very much not speaking for the PSF here, but my personal opinion on why that wouldn't work is that Python is a global language and collecting fees on a global basis is inherently difficult - and we don't want to discriminate against people in countries where the payment infrastructure is hard to support.

PyPI has paid organization accounts now which are beginning to form a meaningful revenue stream: https://docs.pypi.org/organization-accounts/pricing-and-paym...

Plus a small fee wouldn't deter malware authors, who would likely have easy access to stolen credit cards - which would expose PyPI to the chargebacks and fraudulent transactions world as well!


I don't think people want to pay for that.

If pypi charges money, python libraries will suddenly have a lot of "you can 'uv add git+https://github.com/project/library'" instead of 'uv add library'.

I also don't think it would stop this attack, where a token was stolen.

If someone's generating PyPI package releases from CI, they're going to register a credit card on their account and make it so CI can automatically charge it - and when the CI token is stolen, it can push an update on the real package owner's dime, not the attacker's. So it's not a deterrent.

Also, the iOS app store is an okay counter example. It charges $100/year for a developer account, but still has its share of malware (certainly more than the totally free debian software repository).


TBH there isn't much difference in pulling directly from GH.

Though I do like your Apple counterexample.


Not speaking on behalf of PSF, but to me, it looks like a no-go, as some packages are maintained, legitimately, by people from sanctioned countries, with no way to pay any amount outside their country.

I don't see how this would help in the least, what kind of criminal would be dissuaded by paying a small fee to set an elaborate scheme such as this in motion? This is not a spamming attack where the sheer volume would be costly. It doesn't even help to get a credit card on file, since they can use stolen CC numbers.

It's far more likely that hobbyists will be hurt than someone that can just write off the cost as a small expense for their criminal scheme.


I suspect that for a nation-state type threat actor, this wouldn’t be much of a deterrent. Any type of reputation system like this would work to a point until motivated threat actors find a way to game it.

Would you happen to know where the latency comes from between upload and scanning? Would more resources for more security scanner runners to consume the scanner queue faster solve this? Trying to understand if there are inherent process limitations or if a donation for this compute would solve this gap.

(software supply chain security is a component of my work)


He said, "PyPI doesn't block upload on scanning"; that's part of where the latency comes from. The other part is simply the sheer mass of uploads, and that there's no money in doing it super quickly.

I agree that's a bad idea to do so since security scanning is inherently a cat and mouse game.

Let's hypothetically say pypi did block upload on passing a security scan. The attacker now simply creates their own pypi test package ahead of time, uploads sample malicious payloads with additional layers of obfuscation until one passes the scan, and then uses that payload in the real attack.

Pypi would also probably open source any security scanning code it adds as part of upload (as it should), so the attacker could even just do it locally.


I suppose my argument is that pypi could offer the option to block downloads to package owners until a security scan is complete (if scanning will always take ~45-60 minutes), and if money is a problem, money can solve the scanning latency. Our org scans all packages ingested into artifact storage and requires dependency pinning, and would continue to do so, but more options (when cheap) are sometimes better imho. Also, not everyone has enterprise resources for managing this risk. I agree it is "cat and mouse" or "whack-a-mole", and always will be (ie building and maintaining systems of risk mitigation and reduction). We don't not do security scanning simply because adversaries are always improving, right? We collectively slow attackers down, when possible.

("slow is smooth, smooth is fast")


I don't know that myself but Mike Fiedler is the person to reach out to, he runs security for PyPI and is very responsive. security@pypi.org

Thanks, TIL.

So I've been thinking about this a lot since it happened. I've already added dependency cooldowns https://nesbitt.io/2026/03/04/package-managers-need-to-cool-... to every part of our monorepo. The obvious next thought is "am I just dumping the responsibility onto the next person along"? But as you point out it just needs to give automated scanners enough time to pick up on obvious signs like the .pth file in this case.

It is in a sense dumping responsibility, but there’s a legion of security companies out there scanning for attacks all the time now to prove their products. They’re kind of doing a public service and you’re giving them a chance to catch attacks first. This is why I think dep cooldowns are great.
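The cooldown idea can be sketched against the shape of PyPI's JSON metadata (`https://pypi.org/pypi/<project>/json`); treat the field name and the 7-day window below as assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=7)  # arbitrary example window

def eligible_versions(releases, now=None):
    """Return versions whose earliest file upload is older than COOLDOWN.

    `releases` mirrors the "releases" mapping of PyPI's JSON API:
    {version: [{"upload_time_iso_8601": "2026-02-01T00:00:00Z", ...}, ...]}
    """
    now = now or datetime.now(timezone.utc)
    ok = []
    for version, files in releases.items():
        if not files:
            continue  # version with no uploaded files; nothing to install
        uploaded = min(
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for f in files
        )
        if now - uploaded >= COOLDOWN:
            ok.append(version)
    return ok
```

A resolver using this would pick the newest version from the eligible list, which is what gives scanners their head start.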

npm has a feed of package changes you can poll if you're interested.

GitHub has a firehose of events and there's a public BigQuery dataset built from that, with some lag.
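Consuming such a feed can be sketched as a poller over a CouchDB-style `_changes` response (the shape npm's replication feed historically exposed; treat the exact URL and schema as assumptions):

```python
import json

def new_package_events(changes_json, last_seq):
    """Extract (seq, package_name) pairs newer than last_seq from a
    CouchDB-style _changes response body."""
    body = json.loads(changes_json)
    events = []
    for row in body.get("results", []):
        if row["seq"] > last_seq:
            events.append((row["seq"], row["id"]))
    return events
```

A real consumer would persist `last_seq` between polls and hand each new package name to its scanners.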


I feel like they should be legally responsible for providing scanning infrastructure for this sort of thing. The potential economic damage can be catastrophic. I don't think this is the end of the litellm story either, given that 47k+ people were infected.

It's not about expectation of work (well, there's some entitled people sure.)

It's about throwing away the effort the reporter put into filing the issue. Stale bots disincentivise good-quality issues, make them less discoverable, and create the burden of having to collate discussion across N previously raised issues about the same thing.

Bug reports and FRs are also a form of work. They might have a selfish motive, but they're still raised with the intention of enriching the software in some way.


It's like playing The Witness. Somebody should set LLMs loose on that.

Or more appropriately - The Talos Principle.

I don't think you can put them into buckets like that. All addiction is driven in pursuit of a reward. The magnitude of the reward can be estimated with brain scans and such, but to my understanding it isn't universal in all humans.

Can we definitely say gambling addiction is less serious than alcohol addiction when there's individuals who find the former harder to quit than the latter?


Wasn't Zuckerberg caught red handed in emails signing off on this? When is he going to be facing consequences?

Corporate liability isolation has become absurd. People who make decisions that harm people should be held to account for those decisions even if they structured their decision making apparatus in a legal way that makes it look like they're just following the orders of the shareholders.

Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.


Consequences are for poor people.
