The current trend of running autonomous, LLM-generated scripts directly on a host machine is wild to me. I looked into things like Firecracker and gVisor, but they felt way too heavy for a daily dev workflow.
I wanted something safer than running agents directly on my machine, so I recently put together a tool called enclv.
It basically just throws your untrusted code into a locked-down, disposable Docker container.
The main thing I cared about was network egress. The container runs on an internal-only Docker network, and all traffic is forced through a proxy. You provide a YAML config with allowed domains (e.g., api.anthropic.com), and the proxy drops everything else. Raw IP connections are blocked so it can't bypass DNS.
Other than that: the rootfs is read-only, it can only write to a specific output volume (which gets hashed at the end), and secrets are injected via tmpfs instead of standard env vars.
It auto-detects what you're running based on your files:
requirements.txt / pyproject.toml -> drops it in a python:3.12-slim image.
package.json -> drops it in node:20-slim.
If you drop a Dockerfile in the directory, it uses that, so you can run Go, Rust, or whatever else. It still injects the proxy and runtime hardening regardless of the base image.
(Full disclosure: if it doesn't recognize the project, it silently falls back to Python right now, which is a bit janky. I'll add Go/Ruby detection and a warning on fallback if there's enough interest.)
Before the security folks roast me:
I documented the limitations heavily in the README, but to be clear here:
This is a seatbelt for dev workflows, not a zero-day malware sandbox. Docker shares the host kernel, so a real kernel exploit will escape it.
I'm not doing TLS interception. If a malicious script wants to exfiltrate a secret, it can still hide it in the JSON payload sent to an allowed domain (like OpenAI's API) and I can't stop it.
There's no disk quota on the output volume yet, so a script could just fill your drive.
It's just an npm install to use it. You can point it at a local dir or a git repo.
Curious to hear if the proxy logic makes sense to you folks.