Hacker News | jedisct1's comments

Swival found many more vulnerabilities without Mythos https://github.com/swival/security-audits

Finding vulnerabilities everywhere doesn't require any special skills anymore, nor Mythos.

See https://github.com/Swival/security-audits/ for examples: automated security audits produced with nothing but the swival.dev /audit command, including audits of large code bases such as the entire OpenBSD base system.



`tokio`, and Rust `futures` in general, are perfectly fine for typical applications.

But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
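To make the cancellation point concrete: in Rust's async model, cancelling a future simply means dropping it. No code inside the future observes the cancellation; the only hook you get is `Drop`, which is where any cleanup (closing a connection, releasing a server-side session slot) has to live. Here is a minimal, hand-rolled sketch of that behavior; the `Session` type and the no-op waker are illustrative only, not part of tokio or futures:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicU32, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// How many sessions were dropped before completing.
static CANCELLED: AtomicU32 = AtomicU32::new(0);

// A toy "session" future: it needs three polls to finish.
struct Session {
    polls: u32,
    done: bool,
}

impl Future for Session {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        self.polls += 1;
        if self.polls >= 3 {
            self.done = true;
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

impl Drop for Session {
    fn drop(&mut self) {
        if !self.done {
            // The only hook cancellation gives us: Drop on a
            // not-yet-completed future.
            CANCELLED.fetch_add(1, Ordering::SeqCst);
        }
    }
}

// Minimal no-op waker so we can poll by hand, without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Poll once, then "cancel" by dropping the future mid-flight.
    let mut fut = Box::pin(Session { polls: 0, done: false });
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    drop(fut);
    assert_eq!(CANCELLED.load(Ordering::SeqCst), 1);

    // Drive another one to completion: Drop sees done == true and
    // does not count it as cancelled.
    let mut fut = Box::pin(Session { polls: 0, done: false });
    while fut.as_mut().poll(&mut cx).is_pending() {}
    drop(fut);
    assert_eq!(CANCELLED.load(Ordering::SeqCst), 1);
}
```

This drop-based model is fine for simple cases, but it is exactly why things like proactively cancelling or reusing sessions get awkward: you cannot run async cleanup from `Drop`, so graceful teardown needs extra machinery layered on top.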

For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.

It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.


And of course, everything was carefully reviewed by a human.

The Rust ecosystem is also a moving target.

Virtually all crates are still at version 0.x and introduce constant breaking changes: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/

If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model so. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.

LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.


This hasn’t been true for some months. Claude has gotten better about using the latest versions of crates, and when it does encounter a breaking change relative to what it expects, it is usually quick to find the change in the docs or the crate's source code.

What you are talking about used to be a pain point, but is now pretty much gone.

Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.


I’m even more worried after reading this: https://news.ycombinator.com/item?id=48016880

So Bun is going to become a fully vibe-coded codebase, with important details lost in translation.

I’ve been a huge supporter of Bun, but now I’d be extremely reluctant to deploy it in production.

It’s also a bit disappointing to see Jared change his mind so quickly. He’s an incredible developer with deep knowledge of how to write clean, maintainable, efficient code. But now it feels like his talent is being sidelined, and Claude has been given full control over the codebase.

Claude Code itself seems to be built that way: they keep piling on new features every day, but it has become this big, bloated Frankenstein slug.

Bun used to be a small, elegant, clean codebase. Now I’m worried it may turn into an unreliable mess.


Ironically, there are plenty of evals showing that it’s not actually that great. Even with Anthropic models, other harnesses are more efficient, both in terms of the number of problems solved and token usage.

Significant regressions also seem to be introduced from time to time after releases.

The UX is great, and if you want a kitchen sink packed with tons of features (even though you’ll probably only end up using a fraction of them), it’s fine.

But if you want something that performs well, you’re better off using something like Opencode or Swival.dev


Try https://swival.dev - it works perfectly with DeepSeek and Qwen.

> They have custom versions of Claude running on their own servers internally.

This is the important point.

Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.

But if they are running the models on their own servers, why not!


Was it even publicly known that Anthropic offered this capability? I wasn't aware on-prem Claude was a thing.


If you're Apple (or even Apple-sized), you can get a bunch of things others can't.


Bedrock? If you’ve got the cash they’ll deploy it.


Yes, it was known. The US government is also running its own copies in FedRAMP data centers (for now).

