I am going to look at Zig after 1.0 is released. The current state is that you are playing catch-up with the language if you have any reasonably sized project in Zig. A new release might mean that you need to rewrite a significant portion of your code.
The design philosophy of mainstream Linux distros is not like OpenBSD's.
Linux distros go to market as maximally capable, maximally interoperable, and maximally available for whatever the users want to do. So there is a lot of "shovelware" that is unnecessarily installed with your base system. A lot of services are enabled that you don't need. A lot of kernel modules are loaded or ready to spring into action as soon as you connect hardware that the kernel recognizes.
All this maximizing also increases the system's attack surface, whether local or over the network. The resources, time, and effort needed to update the system and maintain all those packages increase as well. The TCO is high.
With OpenBSD, the base system is hardened and the code is audited with security in mind. They only install or enable essential functions. So it's up to the user to dig in, customize it, and add in features that are needed.
The good news is that you can do some after-market hardening. Uninstall software that you're not using, and disable non-essential services. Tune your kernel for special-purpose, or general-purpose, but not every-purpose.
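For example, on a systemd-based distro the basics look something like this (the unit and module names are only illustrations; audit what your own system actually has enabled first):

    systemctl list-unit-files --state=enabled     # see what starts at boot
    sudo systemctl disable --now avahi-daemon.service cups.service
    # stop the kernel from auto-loading modules you never use
    echo "blacklist firewire-core" | sudo tee /etc/modprobe.d/blacklist-local.conf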
There are now special distros for containers and VMs with minimal system builds. They are designed to be as small and lightweight as possible. That is a good start in the right direction.
Thanks for the explanation. I am wondering whether it is possible, or makes sense, to have a modular Linux that does not have these attack surfaces enabled by default. Alpine is my default solution for most Linux use cases (except when I need GPU support).
Not "by default", but still Gentoo. My USE= is several lines worth of -this -that -all-the-things. I got rid of wayland, pipewire, pulseaudio, avahi and a shitload of other stuff I don't need.
PulseAudio applications can still produce (but not record) audio through apulse and my handcrafted asoundrc.
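For reference, the asoundrc side of that is tiny; a minimal sketch assuming the stock dmix/hw:0 setup (yours will differ):

    # ~/.asoundrc
    pcm.!default {
        type plug
        slave.pcm "dmix"
    }
    ctl.!default {
        type hw
        card 0
    }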
We need a truly distributed point-to-point internet asap. Politicians are going to do everything to limit free speech and free ideas in the name of protecting children, while they already have all the powers they need to investigate and stop child abuse.
Did you intend to link to Meshtastic as an example of how not to achieve your goals? Because it definitely isn't capable of scaling up to anything like the whole internet, and the project struggles to agree on any goals they want to reliably achieve.
There are so many caveats and limitations that bringing it up in this context is downright dishonest. The most you could fairly say is that some of the philosophy driving some of the meshtastic developers is what you want to see applied to the development of an internet-scale network (which in reality would have less technology in common with meshtastic than with the current internet).
Alas, it is the great contradiction. Federated technologies are brilliant for peer-to-peer, but many struggle to scale because the designed redundancy tends to crush their efficiency.
Really depends on the context. Email works because of its limits. Remove those limits and weaknesses start to appear.
So a mesh isn't made up of point-to-point connections? I'm pretty sure if you have several of them, they start to look like a mesh (and every security site's banner).
Sure, but I can't communicate with you in a point-to-point fashion; in a mesh network I am hoping that I have possibly hundreds of disinterested nodes between us. But then: are those nodes coordinating on censorship? Are some of the nodes recording your metadata? Are the nodes incentivized to carry the quantity of traffic you require?
Really the "fix" the ultimate goal has to be direct point to point.
What a series of disasters happening at Anthropic nowadays. I am not even sure what is going on with Opus 4.7; I had to switch back to 4.6, and 4.6 was already a downgrade (anecdotal + the GitHub thread with the harness changes).
I am cancelling my subscription, as it is impossible to justify these degradations and keep paying for a subpar service, especially now that we have at least 3 other models that are as good as Opus, and there is the pi project, which is undoubtedly the best harness.
The problem in most of those cases is not specifically AI. Many of the issues you cited are related to Anthropic specifically and many could have been avoided with better testing.
Yes, I am assuming the AI/LLM of choice you've implemented in your software engineering org is Claude because as far as I can tell there aren't really alternatives that come close to its quality in software.
Excuse me for being blunt but you would assume ai bros run a place like this, and ai bros can manage tech as much as crypto bros can manage monetary systems.
You realize the thread you are commenting on is not about scaling or downtime, but about a billing bug that Anthropic refused to fix until it became a Streisand effect?
If you're happy to continue paying a company that has demonstrated it will steal your money, admit it, and refuse to return it, more power to you. The AI industry is moving fast enough that there will be plenty of players to pick up customers who don't want to be robbed.
I'm reading the same thread as you and seeing the same complaints, yes. Personally, I'm willing to give the benefit of the doubt to a company that has demonstrated it will stand up for human rights principles at the expense of its bottom line, vs immediately jumping to a "they are stealing money in plain sight" conclusion from a bizarre bug that was not widely known or reported.
But that's just me. Vote with your dollars; I've voted with mine.
I assume Anthropic just realized that their business model is not profitable and they started to do some crazy stuff to dial down cost on their end without transparency. Customer support is not a priority because it is just cost. The changes in March and the new Opus 4.7 slop are probably the side effects of this. This is my speculation, no evidence yet.
Zed and Vim are the only editors I use on a daily basis. Zed is fast and renders nicely. I do not need to configure it much: a few extra plugins, but most things work out of the box.
As anecdotal evidence, I code way more now with agents because I have an entity with a vast amount of knowledge about pretty much everything, and I have the creativity to use that well.
But you already knew how to code before LLM coding agents; juniors will jump straight into using agents without learning to code by hand, hence the premise of the article.
You are screwed either way. If you don't update, your container has a ton of known security issues; if you do, the container is not reproducible. Reproducibility is neat, with some useful security benefits, but it is a non-goal if the container is more than a month old - a day might even be a better max age.
So if I have a Docker container which needs a handful of packages, how would you handle it?
I'm handling it by using a slim Debian or Ubuntu image, then using apt to install these packages with the necessary dependencies.
For everything easy, like one basic binary, I use the most minimal image, but as soon as it gets just a little bit annoying to set it up and keep it maintained, I start using apt and a nightly build of the image.
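Roughly something like this (the package names are placeholders for whatever the service actually needs):

    FROM debian:bookworm-slim
    RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates curl \
     && rm -rf /var/lib/apt/lists/*
    COPY app /usr/local/bin/app
    CMD ["app"]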
IMO: package manager outside the container. You just want the packages inside the container; the manager can sit outside and install packages into the container.
For the package management, it depends on the package manager, but most have some mechanism for installing into a root other than the currently running system.
Even without explicit support in the package manager, you could also roll your own solution by running the package manager in a chroot environment, which would then need to be seeded with the package manager's own dependencies, of course (and use user-mode qemu to run pre- and post-installation scripts within the chroot in the case of cross-architecture builds).
Whether this yields a minimal container when pointed at a repository intended to be used to deploy a full OS is another question, but using a package manager to build a root filesystem offline isn't hard to pull off.
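dnf is one package manager with such a mechanism (--installroot); a sketch, with the release version and package set as placeholders:

    mkdir -p /tmp/rootfs
    sudo dnf --installroot=/tmp/rootfs --releasever=40 \
        --setopt=install_weak_deps=False install -y bash coreutils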
As for how to do this in the context of building an OCI container, tools like Buildah[1] exist to support container workflows beyond the conventional Dockerfile approach, providing straightforward command line tools to create containers, work with layers, mount and unmount container filesystems, etc.
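A rough sketch of that Buildah flow (run under `buildah unshare` if rootless; the package and image name are illustrative):

    ctr=$(buildah from scratch)
    mnt=$(buildah mount "$ctr")
    dnf --installroot="$mnt" --releasever=40 install -y httpd
    buildah umount "$ctr"
    buildah config --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
    buildah commit "$ctr" my-minimal-httpd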
There have got to be a million ways to do this by now. Some of the more principled approaches are tools like Nix (https://xeiaso.net/talks/2024/nix-docker-build/) and Bazel (https://github.com/bazel-contrib/rules_oci). But if you want to use an existing package manager like apt, you can pick it apart. Apt calls dpkg, and dpkg extracts files and runs post-install scripts. Only the post-install script needs to run inside the container.
I may be a little out of touch here, because the last time I did this, we used a wholly custom package manager.
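To make the dpkg part concrete: you can download a .deb and unpack it into the image root yourself; curl here is just an example, and any maintainer scripts still need separate handling:

    apt-get download curl
    dpkg-deb -x curl_*.deb rootfs/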
Docker recommends using multi-stage builds, e.g. the stage-one image has the package manager, while the stage-two image omits it completely, leaving only the installed software.
The same way you may require something like cmake as a build dependency but not have it be part of the resulting binary: separate build-time and run-time dependencies so you only distribute the relevant ones.
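An illustrative multi-stage Dockerfile along those lines, where cmake and the compiler exist only in the build stage ("myapp" is a placeholder):

    FROM debian:bookworm AS build
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake
    COPY . /src
    RUN cmake -S /src -B /build && cmake --build /build

    FROM debian:bookworm-slim
    COPY --from=build /build/myapp /usr/local/bin/myapp
    CMD ["myapp"]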
Your question feels insane to me for production environments. Why aren't you doing a version cutoff of your packages and either pulling them from some network/local cache or baking them into your images?
I don't just run a Java Spring Boot application. I run other things on my production system.
It doesn't matter much where I pull them from, though; I only do this with packages which have plenty of dependencies and where I don't want to assemble my own minimal image.
Friend, considering the supply chain attacks going on these days, automatically updating everything, immediately, probably isn't the perfect move either.
A weird tradeoff, but an increasingly important one to keep in mind nonetheless. Like I said, updating immediately isn't a perfect answer. But neither is waiting. I hope you're having this discussion, at least.
That local cache is often implemented as a drop-in replacement for the upstream package repository, and packages are still installed with the same package manager (yum, apt, pip, npm).
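Concretely, that often just means the image's apt sources point at the internal mirror (the hostname here is made up):

    deb http://apt-mirror.internal/debian bookworm main
    deb http://apt-mirror.internal/debian-security bookworm-security main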
Minimal might or might not be your goal. A large container sometimes is correct - at that point you have to ask whether using the same container twice, so you only need to download it once and then install the one missing part, makes more sense.
If you are on GitHub/GitLab, Renovate bot is a good option for automating dependency updates via PRs while still maintaining pinned versions in your source.
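A minimal renovate.json for that kind of setup could look like this (which presets you extend beyond config:recommended is up to you):

    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:recommended"],
      "rangeStrategy": "pin"
    }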
I know it's an anti-pattern, but what is the alternative if you need to install some software? Pulling its tagged source code and gcc, and compiling everything?
> the old snapshot has security holes attackers know how to exploit.
So does running `docker build` when the `RUN apt update` line gets a cache hit, except the latter is silent.
The problem solved by pinning to the snapshot is not to magically be secure, it's knowing what a given image is made of so you can trivially assert which ones are safe and which ones aren't.
In both cases you have to rebuild an image anyway so updating the snapshot is just a step that makes it explicit in code instead of implicit.
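One concrete way to make the snapshot explicit in the Dockerfile, using Debian's snapshot archive as an example (the timestamp is arbitrary):

    RUN echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
          > /etc/apt/sources.list \
     && apt-get update \
     && apt-get install -y --no-install-recommends curl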
Where does the apt update connect to? If it is an up-to-date package repo, you get fixes. However, there are lots of reasons it might not be. You'd better know, if this is your plan.
You get fixes that were current at docker build time, but I think GP is referring to fixes that appear in the apt repo after your docker container is deployed.
If you've pulled in a dependency from outside the base image, there will be no new base image version to alert you to an update of that external dependency. Unless your container regularly runs something like apt update && apt list --upgradable, you will be unaware of security fixes newly available from apt.
Run “nix flake update”. Commit the lockfile. Build a docker image from that; the software you need is almost certainly there, and there’s a handy docker helper.
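A minimal sketch of that helper (dockerTools), with the flake plumbing omitted and an illustrative package set:

    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildLayeredImage {
      name = "my-service";
      contents = [ pkgs.curl pkgs.cacert ];
      config.Cmd = [ "${pkgs.curl}/bin/curl" "--version" ];
    }

Build it with nix-build, then `docker load < result`.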
Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.
That's been an issue for years from my impression of the state of NixOS. There are other problems too, like a lot of open source packages doing straight binary downloads instead of actually building the software.
Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)
It’s a variety of different software that just isn’t updated very often.
I don’t mind being somewhat behind, but it seems like there are a lot of packages that don’t get regular updates. It’s okay to have packages that aren’t updated, but those packages should be clearly distinguishable.
I don't really see how that's different from a normal binary install of a reproducible package. Especially with the lacking quality of a lot of Nix packages.
I disagree with that as a hard rule and with the opinion that it's an anti-pattern. Reproducible containers are fine, but not always necessary. There's enough times when I do want to run apt-get in a container and don't care about reproducibility.
The problem is distros often remove older versions from the repo as soon as the new version is available. Granted there is an archive that you can pull from.
It is to solve exactly these issues that I am using StableBuild.
It is a managed service that keeps a cached copy of your dependencies at a specific time.
You can pin your dependencies within a Dockerfile and have reproducible docker images.
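The general shape is something like this (the mirror URL is a placeholder, not a real StableBuild endpoint, and the version string is only illustrative):

    FROM debian:bookworm-slim
    RUN echo 'deb https://pinned-mirror.example/debian bookworm main' > /etc/apt/sources.list \
     && apt-get update \
     && apt-get install -y curl=7.88.1-10+deb12u5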