You say 'yes' but you seem to be answering a different question. Docker Desktop only makes me choose a maximum RAM; it dynamically scales actual RAM usage. I don't need fully automatic scaling like that, but the ability to vertically scale RAM for an existing instance is really important, particularly given the cost of RAM these days.
Fly.io Sprites is the most similar to us of the bunch. They do hardware virtualization as well, have comparable start times, and are full Linux. What we call snapshots they call checkpoints.
The big pros of Sprites over us are their advanced networking stack and the Fly.io ecosystem. The big con is that Sprites are incredibly bare-bones: they don't have any templating utilities. I've also heard that Sprites sometimes become unavailable for extended periods of time.
The big pros of Freestyle over Sprites are fork, advanced templating, and, IMO, a better debugging experience because of our structure.
Thanks for the thoughtful response. I'm predominantly a self-hoster, but I think your product makes a lot of sense for a wide variety of users and businesses. I'm excited to try out freestyle!
Freestyle and other providers will likely provide a better debugging experience, but that's something you can probably get past for a lot of workloads.
The time to consider Freestyle (or any provider) is when load spikes and you need to create hundreds of VMs in short bursts, or when you want some of the more complex features a given provider has built out (forks, GPUs, network boundaries, etc.).
I also highly recommend hosting any sandboxes outside your normal VPC if you self-host. Sandboxes are the biggest possible attack surface, and it's a feature of ours that we're not in your cloud; if we mess up security, your app is still fine.
Obviously your service/approach is different from exe's, and more like Sprites, but like you said, more targeted/opinionated toward AI coding/sandboxing tasks. Interesting space for sure!
I built yoloAI, a single Go binary that runs anywhere on macOS or Linux and sandboxes your agents in disposable containers or VMs, nested or not.
Your agent never has access to your secrets or even your workdir (only a copy, and only what you specify), and you pull the changes back with a diff/apply workflow, reviewing any changes before they land. You also control network access.
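yoloAI's actual implementation isn't shown here, but the review step of a diff/apply workflow like this can be sketched with Python's stdlib `difflib`: generate a unified diff of the agent's copy against your original, inspect it, and only then apply. The function name and file-path labels are illustrative assumptions, not yoloAI's API.

```python
import difflib

def review_diff(original: str, modified: str, path: str) -> str:
    """Return a unified diff of the agent's modified copy against
    the original file, for human review before anything is applied."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        modified.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))

# Example: the agent changed line "b" to "c" in its sandboxed copy.
print(review_diff("a\nb\n", "a\nc\n", "notes.txt"))
```

In practice the same idea is what `git diff` / `git apply` give you for free; the point is that nothing lands in your working tree without passing through this review step.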
Still WIP, but the core works — three rootfs tiers (minimal Ubuntu, headless Chromium with CDP, Docker-in-VM), OCI image support (pull any Docker image), automatic thermal management (idle VMs pause then snapshot to disk, wake transparently on next API call), per-user bridge networking with L2 isolation, named checkpoints, persistent volumes, and preview URLs with auto-wake.
Fair warning: the website is too technical and the docs are mostly AI-generated; both are being actively reworked. But I've been running it daily on a Hetzner server for my AI agents' browser automation and for deploy previews.
I'd love any feedback if you want to try it yourself.
We do auto-suspend depending on your configured timeout. We'll pause your VM, and when you come back the processes will be in the exact same state as when you left.
But your pricing page suggests that this is not available without a subscription: in the on-demand pricing section, "Persistent Snapshots" and "Persistent VMs" have an 'x'.
We do not allow long term persistence for the free tier.
This is purely a defense mechanism: I don't want to guarantee storing the data of an entire VM forever for non-paying users. We do have persistence options for them, like sticky persistence, but it doesn't come with the reliability of long-term persistent storage.
Which sounds like a great idea, except that it uses NFS instead of FUSE (note that macFUSE now has an FSKit backend, so FUSE seems like the best solution for both macOS and Linux).
And zerobrew, like the original Homebrew, is compatible with Linux.
It appears that Nanobrew is not.
I care about the lightweight efficiency of these new native-code variants much more when I want to use brew on some little Linux container, VM, or CI runner than I do for my macOS development machine.
Most of what I (and in my experience many people) want a voice assistant for is setting and ending timers... which for me happens mostly in the kitchen, while I'm simultaneously holding a hot pan or hand-tossing a salad or paper-towelling off some raw chicken. In none of those cases would I want a ring anywhere near my hands, let alone a smart ring. (Nor, in half of those cases, is it convenient/hygienic to use my oven timer.)
That being said, we could solve for fully 50% of in-home voice-assistant use-cases just by developing an extremely domain-specific voice assistant that has an extremely small (ideally burned-into-a-DSP) voice model that only knows how to recognize commands to manage kitchen timers. If such a device existed, and was cheap enough that you could assume anyone who wanted this functionality would just buy one, then this would make truly hands-free activation of a "real" voice-assistant much less necessary, as there'd be far fewer user-stories that would really "need" that. The rest of those user-stories really mostly could work with some kind of ring / belt buckle / shirt comm badge / etc.
The new board hasn't come yet, but a friend gave me a great idea: power the mic from a GPIO, which cuts its power completely when the ESP is off.
Hopefully the new boards will be here soon, but another issue is that I don't have anything that can measure microamp-level current draw, so any testing takes days of waiting for the battery to run down :(
I do think these clones are the issue, though. They had an LED I couldn't turn off, so they'd literally shine forever. They don't seem engineered for low quiescent current, so fingers crossed with the new ones.
Is it worth removing the LED from the board? It won't help with any other design decisions that draw excess current, but maybe that's the only one, or the largest?
I did remove it :( It's still pretty bad. I ordered some Xiaos that do explicitly say 14 uA sleep current, but they seem to have gotten lost in the mail!
Just nitpicking: unlockable bootloaders. The bootloader is locked by default. But you can unlock it without needing Google.
Additionally, Pixels support a Linux VM and have a desktop mode (I'm running GrapheneOS; these features may still need to be enabled through the developer settings).
Well akshually.... the bootloader is initially not unlockable. You must connect the phone to the internet. Within a few minutes a background process will reach out to Google servers to check whether it was purchased outright or with a payment plan. It will only enable the bootloader unlocking toggle after this step. Phones bought with a carrier contract won't be unlockable until paid off.
In those initial few minutes (/ before you connect it to the interwebs), the bootloader unlock option in the developer settings & fastboot will be disabled.
Thanks for the correction! Though I recently set up and unlocked a new Pixel and I don't recall that step. Maybe SKUs for European countries are an exception, since such payment plans don't really exist there? Or maybe I forgot.
I recently turned my unused Google Pixel 8 into a server for my personal site and various side projects. It's super satisfying to spin things up in a couple hours, point a cloudflare tunnel at it, and share it with the world.
Yikes, I wrote that? I hate it when people write cryptic replies like that.
What I meant was 'yes': Google Workspace CLI appears to be quite similar to 'gogcli', the CLI written for OpenClaw. Both provide CLI access to a broad range of Google services, for both Workspace and regular Gmail accounts.
GAM, on the other hand, is an admin tool, and strictly for Google Workspace accounts.
I felt that way too, until I noticed how different their schemes are for discovering these files, e.g. Claude will pick up context files in parent folders, and Codex doesn’t.
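That parent-folder discovery can be sketched as a simple upward walk from the working directory to the filesystem root, collecting a context file at each level. This is a hypothetical Python illustration, assuming a filename like `CLAUDE.md`; the tools' real precedence and merging rules may differ.

```python
from pathlib import Path

def find_context_files(start: Path, name: str = "CLAUDE.md") -> list[Path]:
    """Walk from `start` up through every ancestor directory,
    collecting context files nearest-first (Claude-style discovery).
    A Codex-style scheme would check only `start` itself."""
    found = []
    for directory in [start, *start.parents]:
        candidate = directory / name
        if candidate.is_file():
            found.append(candidate)
    return found
```

The practical difference: with the upward walk, a repo-root context file still applies when the agent is launched from a deep subdirectory; with a single-directory scheme it silently doesn't.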
Maybe it's better that they maintain different names, to prevent people from assuming that they work the same way.