osigurdson's comments

"Level 8" isn't really a level, it is more like a problem type: language translation. Perhaps it can be extended to something a bit broader but the pre-requisite is you need to have a working reference implementation and high quality test suite.

I think the solution has to be end to end tests. Maybe first run by humans, then maybe agents can learn and replicate. I can't see why unit tests really help other than for the LLM to reason about its own code a little more.

So more stuff happens with this approach but how do you know what it generates is correct?

Exactly. This is real-world pushback on the "software is solved" narrative from AI labs. Also, most orgs for some reason try to copy Amazon more than other big tech firms. "At our org, we disagree and commit" - yeah, you made that one up yourself. Anyway, this is going to have a lot of impact in my view.

Perhaps I misunderstand your comment, but when you run docker / podman on Windows, you are using WSL / Hyper-V.

Running docker on Windows runs docker in WSL (which is a Hyper-V guest). The project offers running docker images as WSL instances (which are Hyper-V guests), no docker involved.

You said - "A bit more overhead since they now run as a VM instead of a container"

To which osigurdson seemed to be noting that WSL2 itself is a VM (meaning if you launch 1 or 100 WSL2 instances, a single Linux VM will be spun up), and when you run docker, it runs using exactly that same VM (optionally, though it's the default now to just use WSL2's backend).

Can you clarify what you meant by "A bit more overhead"? Running a container via docker or directly via WSL2 will use the same underlying VM, and there will only be that one VM regardless of the number of WSL2 or docker instances.


For a single container the difference depends on your exact setup. I typically run docker in the same WSL instance as my other wsl stuff, so starting a single docker container adds 0 additional VMs, while starting the container as a wsl instance will add one VM. If you use the "docker for windows" package you may be adding a VM just for running docker, depending on your setup.

Once you start the second container the difference becomes more obvious: running $N containers in docker uses one VM with one Linux kernel, no matter how many containers you add. Running $N containers as separate WSL instances runs $N VMs and $N Linux kernels. That's the "bit more overhead" I was referring to.


"Running $N containers as separate WSL instances runs $N VMs and $N linux kernels."

But it doesn't, and this is what I'm disagreeing with.

If you instantiate WSL2, it launches a Linux VM. A single Linux VM. If you then run Docker with WSL2 integration (the default and hugely recommended), it uses that Linux VM as its VM as well, so you're still at 1 VM.

If you run 100 WSL2 instances, they will all use that single Linux VM, each doing namespacing for isolation with their own filesystems. If you run 100 Docker instances, they will all use that single Linux VM.

If you run 100 WSL instances, and 100 Docker instances (assuming, again, WSL2 integration which is the default), they will all be using that single Linux VM.


Every time you run `wsl --install <distro>` you are creating a new VM. Every time you run `wsl --unregister <instance name>` you are removing a VM.

These two operations are at the heart of OP's app.


No, this is wrong.

WSL2 distributions share the same Linux kernel. They only get their own root filesystem with a Linux userland (/bin, /usr, /lib, etc.) and some WSL config metadata. This is then stored as a virtual disk image (which is probably where your belief comes from). But the kernel runs in a single utility VM. The distros share that kernel instance and are separated via namespaces only.

This makes running multiple WSL2 distributions in parallel very performant btw, as there is no world switch.
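A quick sanity check of the shared-kernel claim on a Windows host with two WSL2 distros installed (the distro names here are assumptions):

```shell
# Both commands report the identical kernel build and boot time,
# because every distro shares the one utility VM's kernel.
wsl -d Ubuntu -- uname -a
wsl -d Debian -- uname -a
```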


I stand corrected. It makes sense that it is a chroot/rootfs rather than fully independent VMs.

re: side-by-side running, I always get socket and/or port problems when doing that. Without having looked into it at all, I figure it is NAT collisions.


This is not true - it's actually all the same VM if you check hcsdiag.

WSL2 distros only use Linux namespaces, same as docker, and the WSL2 --system distro can see PIDs from all running WSL2 distros.


Okay, are you guys using wrong terminology? Are you talking about a filesystem image as a "VM", because that would not be correct. A "Virtual HD" (VHD) is not a VM. Do you think a WSL2 distro instance is a "VM" because it's persistent? Did you know Docker containers can be persistent as well, people just choose as a best practice not to use it like that? Does that make them magical VMs?

WSL2 has a single VM running as the process vmmem (or vmmemWSL in the newest versions). This is the single Linux kernel that will be shared by every WSL2 instance. It will also be the instance that hosts Docker containers as well. I mean, I'm just echoing myself and apologies for the repetition, but this is just easily confirmed, utter factual truth.

A rootfs is not a VM. WSL2 and containers have different nuances in their filesystems and isolation (WSL2 instances are aware of other instances by design, and WSL2 persists rootfs changes while docker as a best practice does not), but they're more alike than different. And they share little in common with something like VirtualBox or vmware, which actually isolate every such system as a VM.

Again, sigh: if you install 10 WSL2 "distros", you will have 10 rootfs overlays on your drive, and 10 persistence VHD files. When you run them, the VM they all run under is the shared vmmem instance, which is the single utility Linux VM.


Is there a way to visualize this on a running system or some documentation that describes it? I'm not familiar with the plumbing here but did try to find some documentation.

I've done a lot of in-depth WSL2 work, so I'm just aware of its particulars, but to cite some documentation:

https://learn.microsoft.com/en-us/windows/wsl/about

"WSL 2 uses virtualization technology to run a Linux kernel inside of a lightweight utility virtual machine (VM). Linux distributions run as isolated containers inside of the WSL 2 managed VM. Linux distributions running via WSL 2 will share the same network namespace, device tree (other than /dev/pts), CPU/Kernel/Memory/Swap, /init binary, but have their own PID namespace, Mount namespace, User namespace, Cgroup namespace, and init process."

(under "What is WSL2"?)

https://www.docker.com/blog/new-docker-desktop-wsl2-backend/

"WSL 2 runs all distros in the same utility VM, sharing the same Kernel."

If you run multiple distros take a look at the process manager and find the single vmmem or vmmemWSL (newer versions have the latter). That single instance is all of the instances, and all of the docker containers you might be running as well, each with namespace isolation (with WSL2 having intentional bridging between them for convenience). Visualise it by doing something intensive in any of them and seeing the single process react, because that's the single utility VM responsible for all of them. Further while starting up the first WSL2 instance or Docker container is expensive, requiring the initialisation of all of the resources for the utility VM and the memory to support it, subsequent iterations are much less expensive.
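If you want to poke at this on a live Windows host, a couple of commands make the single utility VM visible (output shapes are illustrative, and `hcsdiag` needs an elevated prompt):

```shell
wsl --list --running    # several distros may be running...
hcsdiag list            # ...but only one WSL utility VM appears
# In Task Manager there is a single vmmem / vmmemWSL entry whose memory
# and CPU move when any distro or docker container does work.
```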


Thanks, it wasn't out of doubt that I asked, but it seemed having a reference to point at would help resolve the contention. The Docker blog post covered a lot more detail, even about WSL2, which was really informative and I hadn't seen.

I wonder exactly how much work "container" is doing in that Microsoft blog post's description, because it doesn't seem like it's the same kind of environment as a runc or containerd container?

I also wasn't quite sure how much detail to infer from the behavior of vmmemWSL or vmcompute.exe, because my casual understanding is that there's some adaptation layer that handles mapping Linux calls to Windows calls. It seems reasonable to allow for process mapping or accounting shenanigans for any number of good reasons.


>there's some adaptation layer that handles mapping Linux calls to Windows calls

This was how WSL1 functioned. It used a shim layer, and honestly it was pretty neat for a lot of the basic stuff. It fell apart if you were doing more complex/advanced stuff, however, as there were many missing cases and exceptions.

WSL2 instead uses that utility VM, with a couple of special Microsoft kernel drivers to interact with the host system.


The linked project includes a very different way to launch docker containers.

Just FYI you can also run a windows docker natively in windows without using WSL or any linux component. But that's more of a niche usecase.
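For the curious, that native path (a Windows container with process isolation) looks roughly like this, assuming Docker Desktop has been switched to Windows containers mode and the image tag is still current:

```shell
docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```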

I haven't heard anyone talk about Windows containers in years, and that's actually a good thing. Let's just pretend they never existed.

I'm not sure if this is what you mean but in some ways it would be nice to have tighter coupling with a registry. Docker build is kind of like a multiplexer - pull from here or there and build locally, then tag and push somewhere else. Most of the time all pulls are from public registries, push to a single private one and the local image is never used at all.

It seems overly orthogonal for the typical use case but perhaps just not enough of an annoyance for anyone to change it.
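The multiplexer flow above, as a typical sketch (the private registry name is made up):

```shell
# pulls base layers from public registries as a side effect of the build
docker build -t myapp:dev .
# retag for the private registry and push; the local tag often never runs
docker tag myapp:dev registry.internal.example/team/myapp:1.0
docker push registry.internal.example/team/myapp:1.0
```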


In some situations, yes, in others no. For instance, if you want to control memory or cpu, using a container makes sense (unless you want to use cgroups directly). Also, if you're running Kubernetes, a container is needed.

You have to differentiate between container images and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.

E.g. systemd exposes a lot of resource control as well as sandboxing options, to the point that I would argue that systemd services can be very similar to "traditional" runtime containers, without any image involved.
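As a sketch of what that can look like with no container runtime involved — a unit file applying cgroup limits and sandboxing (the service name and values are illustrative):

```ini
# /etc/systemd/system/mytool.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/mytool
MemoryMax=512M          # cgroup v2 memory limit
CPUQuota=50%            # cgroup v2 CPU limit
PrivateTmp=yes          # sandboxing: private /tmp
ProtectSystem=strict    # most of the filesystem becomes read-only
```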


Well, I did mention "or use cgroups" above.

And what I've said is that there are more options. You don't have to use cgroups directly, there are other tools abstracting over them (e.g. systemd) that aren't also container runtimes.
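For one-off processes, systemd-run applies the same cgroup knobs transiently, without writing a unit file (the limit values and script name are illustrative):

```shell
systemd-run --scope -p MemoryMax=256M -p CPUQuota=25% ./build.sh
```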

Macs were already much better value than other laptops in my opinion. I just wish it were easier to run Linux on them.

Hi all. I'll maybe try to see if I can get this added to the list of search tools above but here is a tool that allows you to search (text and semantic / tags) as well as chat. Some people found this useful in previous months so posting again.

Basic search: https://nthesis.ai/public/hn-who-is-hiring

Chat: Click the "Chat" button or use the following link (https://nthesis.ai/public/b28f0eb9-f5ac-4152-9a4c-152253d698...)


Right now, I'd say the best language for AI is the one that you can review the fastest and that rarely changes. Go is fairly readable imo and rarely changes, so it is probably a good contender. But I can't see any reason for anyone to learn it if they don't feel like it. Same goes for other "language X is good for AI" type posts.
