Hacker News | staticassertion's comments


Sure, nobody’s saying it’s an inscrutable mystery, but if your goal is to inform a wide audience, it’s considered good form to expand all but the most common acronyms. It’ll even get you more internet points than petty smugness.

I think sysadmins should learn the term LPE (local privilege escalation), tbh

It was already known to attackers (or basically anyone watching) weeks ago, when the patch hit the kernel, but it wasn't communicated by upstream as a vulnerability (because Linus and Greg do not believe that vulnerabilities are conceptually relevant to the kernel).

Will this continue even when the prophesied Mythos Vulnocalypse hits the kernel?

This stance doesn't seem sustainable any more to me.


The response from Greg was that Mythos proved upstream was right all along and that they'll continue to do things the same way. That's my recollection, at least - I'm pretty sure it was something like that, though it could have been even worse and I'm misremembering.

The stance was never sustainable, hence the constant availability of Linux LPEs. The solution is to treat your kernel as impossible to secure. Notably, gVisor users are not impacted by this CVE. Seccomp also kills this CVE.


How about SELinux, like on Android?

To even get the su binary on Android you have to patch the OS, so this exploit can't work on Android: there is no su binary to target.

Update: Just tried it on Termux and as expected even creating an AF_ALG socket requires root access.


The specific exploit payload for the PoC relies on a su binary. The vulnerability itself is agnostic, and other, non-su paths will exist.

Of course, but it does not matter as the entire AF_ALG module is forbidden by SELinux anyway (on Android).

That's fine, and a very separate reason why it would not be exploitable - also assuming the module is not simply compiled in, since then loading it would be irrelevant.

I assume that wouldn't help here, but I could easily be wrong. (Assuming you're asking whether SELinux would block this exploit.)

SELinux in enforcing mode did not mitigate the exploit when I tested it today on Fedora CoreOS :(

iframe sandboxing is wildly underleveraged. I think it's because it doesn't work well with "modern" app development - you need the ability to slice bits and pieces out yourself.

I've just been using plain TypeScript/HTML, and it's so easy to say "yeah, all of that rendered content goes into an iframe" - I've got all of d3 entirely sandboxed away with a strict CSP and no origin.

I do hope that iframe sandboxing grows some new primitives. It's still quite hacky - null origins suck and I want a virtual/sandbox origin primitive as well as better messaging primitives.
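For concreteness, here's a minimal sketch of the pattern above. The helper name and exact CSP are my own illustrative choices, not a standard API: untrusted markup is wrapped in a srcdoc document that ships a strict CSP, for use in a sandboxed iframe *without* allow-same-origin, so the content runs with an opaque (null) origin.

```typescript
// Build a srcdoc document carrying a strict CSP for untrusted content.
// default-src 'none' blocks all network fetches from inside the frame;
// inline script stays allowed so a bundled renderer (e.g. d3) can run.
function buildSandboxSrcdoc(untrustedHtml: string): string {
  return [
    "<!doctype html>",
    '<meta http-equiv="Content-Security-Policy"',
    "      content=\"default-src 'none'; script-src 'unsafe-inline'\">",
    `<body>${untrustedHtml}</body>`,
  ].join("\n");
}

// DOM side (commented so the sketch stays self-contained):
//   const frame = document.createElement("iframe");
//   frame.sandbox.add("allow-scripts");      // note: no "allow-same-origin"
//   frame.srcdoc = buildSandboxSrcdoc(html); // content gets an opaque origin
//   document.body.appendChild(frame);
```

Because allow-same-origin is omitted, even a fully malicious payload inside the frame can't read cookies, localStorage, or the embedding page's DOM.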


I think the reason it's underleveraged is that there's so little useful documentation about it - particularly about its support in different browsers.

For something like this that's security critical I'd really like to see each of the browser vendors publishing detailed, trustworthy documentation about their implementations.

The technology itself is very widely deployed due to banner ads, so it's at least thoroughly exercised.


It's multi-faceted. Docs are part of it, but also, no one cares about security, and they won't do literally anything to improve it if there's a papercut involved.

Right now, if I want to render untrusted content while using React, I have to escape from React to leverage this, via https://react.dev/reference/react-dom/server/renderToString

And using null origins has tons of UX problems - virtual / sandbox origins would solve this. https://gist.github.com/ddworken/309363b5d140bcc5ff6b39fa4a8...

There's just a lot more work to do before I expect to see this. It would solve so many problems though. I personally put d3, markdown rendering, etc, all into iframe sandboxes, which means the entire library could be malicious and it won't matter. But it requires way more effort than I'd like.
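A hedged sketch of what the parent-side messaging looks like today (the message shape here is an assumption for illustration, not a standard): since a sandboxed frame without allow-same-origin reports event.origin as the literal string "null", the parent has to authenticate by source window and validate the payload shape itself.

```typescript
// Expected message from the sandboxed renderer (illustrative shape).
type FrameMessage = { kind: "rendered"; height: number };

// Runtime validation: data arriving via postMessage is untrusted input,
// so check its shape before acting on it.
function isFrameMessage(data: unknown): data is FrameMessage {
  if (typeof data !== "object" || data === null) return false;
  const msg = data as { kind?: unknown; height?: unknown };
  return msg.kind === "rendered" && typeof msg.height === "number";
}

// Listener on the parent page (commented so the sketch stays self-contained):
//   window.addEventListener("message", (event) => {
//     if (event.source !== frame.contentWindow) return; // not our sandbox
//     if (event.origin !== "null") return;              // opaque origin
//     if (!isFrameMessage(event.data)) return;
//     frame.style.height = `${event.data.height}px`;
//   });
```

The event.source check is what carries the weight here: with an opaque origin, the origin string alone can't distinguish your sandbox from any other sandboxed frame.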


Typically you do things like this to either work in restricted envs (distroless) or to evade detection logic. It's not about bypassing a boundary, it's about getting things done in the env you have available.

I've had to remove any of the "knowledge" about me from any agent I use. "As a security engineer, blah blah blah" or "as a Rust developer, blah blah blah" - even though my questions have nothing to do with those topics, and they're a huge distraction.

Yeah, I've disabled memory in everything I use. It's super distracting to have it infer connections between conversations where there are none. It's also kind of sleazy-feeling - manipulative, in the sense that it thinks it knows what I'm into, so it's going to weave that into the conversation.

If we didn't have evidence that these things cause something like psychosis in some people, it'd seem innocent. But, since the sycophancy combines with the long-term relationships some people think they're having with matrix math to trigger serious mental health problems, it feels more sinister.

Anyway, having a long-term memory makes them dumber and more easily confused. I don't have any use for a dumb agent.


In my experience, you can tell them "Don't stop working on this until complete" and they'll go for an hour or more.

That's pretty much how every bounty works... obviously it's going to be at their discretion for an incomplete attempt.

It's unusual to have to sign an NDA for a rejected bounty.

No it isn't. Confidentiality terms are the norm.

This assumes that the tokens it outputs are a good description of the tool's behavior. That's not necessarily true though. For example, the LLM may be trained such that a lot of its input data is "LLMs often hallucinate", so the LLM may be biased to say "I hallucinated that" even if there's some more structural issue.

I think there's something here to consider, but it's sort of like assuming that the LLM has reasons for doing things when it only has weights for which tokens are produced - that's the sum of its reasoning.

Maybe it's the case that LLM tokens do correlate with truth values, or that this approach actually provides value, but there's probably good reason to be skeptical, given that we'd need to posit some causative link between token outputs and reasoning about prior behaviors.


There's one extra process that takes up a tiny bit of CPU and memory. For that, you get an immutable host, simple configuration, a minimal SBOM, a distributable set of your dependencies, x-platform for dev, etc.


Yes but NixOS does all of these things already, without the process overhead


Even the minimal SBOM part? It's hard to be more minimal than a busybox binary.


That’s fair - NixOS avoids the direct overhead from Docker itself, but if you’re basing on an Alpine image or something, that would probably be more minimal/smaller.


Nix wraps your process in namespaces and seccomp?


Not by default, but tools like agent-sandbox.nix (bwrap, seccomp) or nixpak (just bwrap, but more popular) can provide those capabilities if you want, behind a fairly simple interface.


This is why there's an endless cycle of shitty SaaS with slow APIs and high downtime. People keep thinking that scale is something you can just add later.


What's a more reasonable general approach then?

Let's say you're a team of 1-3 technical people building something as an MVP, but don't necessarily want to throw everything away and rewrite or re-architect if it gets traction.

What are your day 1 decisions that let you scale later without over-engineering early?

I'm not disagreeing with you btw. I genuinely don't know a "right" answer here.


I don't think there's a right answer; you need to sit down and try to think through these problems upfront. What will scaling look like? What decisions will you regret? Make the guesses you can, but don't ignore scale or performance.


I'd argue on the contrary that it's the last decades' over-engineering bender that's coming home to roost. Now too many things have too many moving parts to keep stable.

