Hacker News | silverstream's comments

Cloudflare Tunnel is solid for quick demos. One thing though — if you're planning the "bring your own keys" version, don't just throw them in a settings page. I went down that road and ended up with keys sitting in localStorage where any XSS could grab them. What worked better for me was having the backend hold the keys and issuing short-lived session tokens to the frontend. More moving parts but way less surface area if something goes wrong.
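A minimal sketch of that shape, in case it helps anyone (names hypothetical, sessions in-memory; a real version would persist sessions and actually proxy the upstream call):

```typescript
// Toy sketch: the backend holds the provider key; the browser only ever
// sees a short-lived opaque session token, so XSS can grab nothing worse
// than a token that expires on its own.
import { randomBytes } from "node:crypto";

// Held server-side only; never serialized into any response.
const PROVIDER_API_KEY = process.env.PROVIDER_API_KEY ?? "sk-backend-only";

const sessions = new Map<string, { expiresAt: number }>();

// Called after login; the frontend stores only this token.
export function issueSessionToken(ttlMs = 15 * 60 * 1000): string {
  const token = randomBytes(32).toString("hex");
  sessions.set(token, { expiresAt: Date.now() + ttlMs });
  return token;
}

// The backend proxy checks this before calling upstream with the real key.
export function authorize(token: string): boolean {
  const session = sessions.get(token);
  if (!session || session.expiresAt < Date.now()) {
    sessions.delete(token); // expired tokens are dropped eagerly
    return false;
  }
  return true;
}
```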


Stellar advice! I will totally keep that in mind. Thanks!


File-level sandboxing is table stakes at this point — the harder problem is credentials and network. An agent inside sandbox-exec still has your AWS keys, GitHub token, whatever's in the environment. I've been running a setup where a local daemon issues scoped short-lived JWTs to agent processes instead of passing raw credentials through, so a confused agent can't escalate beyond what you explicitly granted. Works well for API access. But like you said, nothing at the filesystem level stops an agent from spinning up 50 EC2 instances on your account.
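Not my actual daemon, but the scoped-token idea can be sketched with an HMAC-signed claim set (names and scope strings are hypothetical; a real setup would use a proper JWT library, per-agent keys, and constant-time MAC comparison):

```typescript
import { createHmac } from "node:crypto";

// Known only to the local daemon and the gateway that fronts the real APIs.
const SIGNING_KEY = "daemon-local-secret";

type Claims = { scopes: string[]; exp: number };

const mac = (body: string) =>
  createHmac("sha256", SIGNING_KEY).update(body).digest("base64url");

// The daemon hands the agent a token scoped to exactly what was granted,
// instead of the raw AWS/GitHub credential.
export function issueScopedToken(scopes: string[], ttlSec = 300): string {
  const body = Buffer.from(
    JSON.stringify({ scopes, exp: Math.floor(Date.now() / 1000) + ttlSec })
  ).toString("base64url");
  return `${body}.${mac(body)}`;
}

// The gateway verifies the signature, the expiry, and that the requested
// scope was actually granted.
export function allows(token: string, scope: string): boolean {
  const [body, sig] = token.split(".");
  if (!body || sig !== mac(body)) return false;
  const claims: Claims = JSON.parse(Buffer.from(body, "base64url").toString());
  return claims.exp > Date.now() / 1000 && claims.scopes.includes(scope);
}
```

The point is that a confused agent holding this token can only do what the claim set says, and only for a few minutes.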


> An agent inside sandbox-exec still has your AWS keys, GitHub token, whatever's in the environment.

That's not the case with Agent Safehouse - you can give your agent access to select ~/.dotfiles and env, but by default it gets nothing (outside of CWD)


Completely agree. As soon as I had OpenClaw working, I realized that giving it access to anything was a complete nonstarter after all the stories about agents going off the rails due to context limitations [1]. I've been building a self-hosted, open-source tool that tries to address this by using an LLM to police the agent's activity. Having the inmates run the asylum (one LLM policing another) seemed like an odd idea, but I've been surprised by how effective it's been. You can check it out here if you're curious: https://github.com/clawvisor/clawvisor

[1] https://www.tomshardware.com/tech-industry/artificial-intell...


Every post from this two-day-old account starts with about eight words and then an em-dash. And it happens to self-identify as a startup building infra for OpenClaw.


Node.js basically tried this — every package gets its own copy of every dependency in node_modules. Worked great until you had 400MB of duplicated lodash copies and the memes started.

pnpm fixed it exactly the way you describe though: content-addressable store with hardlinks. Every package version exists once on disk, projects just link to it. So the "dedup at filesystem level" approach does work, it just took the ecosystem a decade of pain to get there.
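The mechanism is easy to demo in miniature (toy code, hypothetical names; pnpm's real store keys on package contents and handles whole directory trees, not single files):

```typescript
import { createHash } from "node:crypto";
import { linkSync, mkdirSync, mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

export const root = mkdtempSync(join(tmpdir(), "cas-demo-"));
const storeDir = join(root, "store");
mkdirSync(storeDir);

// Store each blob once, addressed by its content hash.
export function addToStore(content: string): string {
  const path = join(
    storeDir,
    createHash("sha256").update(content).digest("hex")
  );
  try {
    writeFileSync(path, content, { flag: "wx" }); // "wx": fail if it exists
  } catch {
    /* already in the store: that's the dedup */
  }
  return path;
}

// Projects get a hardlink, not a copy: same inode, zero duplicated bytes.
export function linkIntoProject(storePath: string, projectFile: string): void {
  linkSync(storePath, projectFile);
}
```

Two projects linking the same blob end up sharing one inode, which is exactly why the 400MB-of-lodash problem goes away.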


nix has a cache too but only if the packages are reproducible.

Much harder to get reproducibility with C++ than with JavaScript, to say the least.


Honestly the guard overhead is a non-issue in practice — it's one atomic check after first init. The real problem with the static data member approach is initialization order across translation units. If singleton A touches singleton B during startup you get fun segfaults that only show up in release builds with a different link order.

I ended up using std::call_once for those cases. More boilerplate but at least you're not debugging init order at 2am.
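For reference, the shape I mean looks roughly like this (class name hypothetical). The once_flag is constant-initialized and the pointer is zero-initialized, so neither has dynamic initialization order of its own; construction happens explicitly on first use, where you control what it touches:

```cpp
#include <mutex>

class Logger {
public:
    static Logger& instance() {
        // Thread-safe lazy init; no cross-TU static init order to debug.
        std::call_once(init_flag_, [] { instance_ = new Logger(); });
        return *instance_;
    }
    int level() const { return level_; }

private:
    Logger() : level_(1) {}
    static std::once_flag init_flag_;
    static Logger* instance_;
    int level_;
};

// Constant-initialized (constexpr ctor) and zero-initialized respectively:
// both are safe before any dynamic initialization runs.
std::once_flag Logger::init_flag_;
Logger* Logger::instance_ = nullptr;
```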


> it's one atomic check after first init

And that's slow :P [0] If you don't need to access it from multiple threads, cutting that out can mean a huge difference in a hot path.

[0] https://stackoverflow.com/questions/51846894/what-is-the-per...


Came here to say the same thing. Static is OK as long as the object has no dependencies, but as soon as it does you're asking for trouble. Second the call_once approach. Another option is an explicit initialization-order system that sets up dependencies in the right order, but that's more complex and only works for binaries you control.


AI. Probably clawdbot.


Same experience here with a pnpm workspace monorepo. The baseUrl removal was the only real friction — we were using it as a path alias root and had to move everything to subpath imports.

The moduleResolution: node deprecation is the one I'd flag for anyone not paying attention yet. Switching to nodenext forced us to add .js extensions to all relative imports, which was a bigger migration than expected.

The compilation speed improvement is real though. Noticeably faster on incremental builds.
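For anyone staring down the same change, the tsconfig side is small (option names as of TS 5.x; worth checking the release notes for your version):

```jsonc
// tsconfig.json
{
  "compilerOptions": {
    "module": "nodenext",
    "moduleResolution": "nodenext"
  }
}
```

The migration cost is all in the imports: every relative specifier needs the runtime extension, e.g. `import { helper } from "./util.js"` even though the source file is `util.ts`.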


enableScripts: false is a great default, but in a pnpm workspace monorepo it needs some tuning — a few packages legitimately rely on postinstall (esbuild, sharp, etc. downloading platform binaries).

What worked for us was whitelisting just those in onlyBuiltDependencies. Everything else stays locked down.
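Concretely, the allowlist looks something like this in the workspace root (package names are examples; newer pnpm versions also accept the same field in pnpm-workspace.yaml):

```jsonc
// package.json (workspace root)
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```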

The age gate is a nice extra layer. I do wonder how well it holds up for fast-moving deps where you actually want the latest patch though.


This also compounds with npm's postinstall defaults. In this attack chain, the prompt injection triggers npm install on a fork, and postinstall scripts run with the user's full permissions without any audit prompt.

So you end up with GHA's over-privileged credentials handing off to npm's over-privileged install hooks.

I've started running --ignore-scripts by default and only whitelisting packages that genuinely need postinstall. It's a bit annoying, but the alternative is trusting every transitive dependency not to do something during install.
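If anyone wants the same default, it's one line of config:

```ini
# .npmrc (per-project or ~/.npmrc)
ignore-scripts=true
```

Then run builds explicitly for the packages you trust, e.g. `npm rebuild sharp` (passing `--ignore-scripts=false` if the config above would otherwise suppress it).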

