Hacker News | stephen_cagle's comments

You've almost buffer overrun Goodhart's Law into the https://en.wikipedia.org/wiki/McNamara_fallacy . :]

I believe private equity ownership represents this in an aggressive form. The "2 and 20" fee structure that PE usually mandates as part of their purchase agreement means that they are highly, highly incentivized to maximize short-term "wins" over long-term survival.

I think Chesterton and Taleb also had pretty reasonable things to say about understanding a system before you make changes and fragile/anti-fragile systems as well.


https://www.levels.fyi/companies/meta/salaries/software-engi...

I feel like if you are an L4+ at Meta, you are doing fine even on a single salary. Let's say you lose 40% to state and federal taxes and 20% to housing (renting); you still have $120k+ for the rest. This is plenty for most people in the world.

You are right that times are hard, but these are hardly trying times for people with this much compensation.


40% to taxes is a very high estimate for L4-L5, even in places like California or NY.

Still not great if you want a family, because daycare/private school are expensive, and housing in a good school district (so you don't have to pay for private school) is even more expensive.

Remind me how much a house costs in Menlo Park proper. Like I said, renting isn't the standard middle-class experience for adults. Americans own their homes.

I'm blown away by the idea of not using Chris Tucker for Ruby Rhod. It is like imagining anyone but Hugh Jackman as Wolverine. They are basically perfect castings.


Last month I "panic bought" a $999 Mac Mini (32G) so I could run small models, image generation, and voice synthesis on it. I don't think I regret it yet, despite the fact that you can get a 16G model for $599, which is honestly a much better price per gig.

I think it is interesting that, at least thus far, Apple has chosen not to raise the prices of their computers despite the price of RAM presumably going up by multiples.

Tipping point for me: It will be a pretty kickass media server for at least a decade.


Didn't they eliminate the highest tier Mac Pro and raise the price of the one under it?


Writing (unassisted) is probably the first step towards your own independent thoughts.

I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

I think a diversity of opinion is important for society. I'm worried that LLMs are going to group-think us into thinking the same way, believing the same things, reacting the same way.

I wonder if future children will need to be taught how to purposely form their own opinions, being so used to always asking others before even considering things on their own. The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.

https://stephencagle.dev/posts-output/2025-10-14-you-should-...


> I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

The scene you mentioned (amazing movie and holds up to this day) with the Major and Togusa:

https://youtube.com/watch?v=VQUBYaAgyKI

While I frequently use a similar argument, "We need someone 'untainted' to provide a different point of view", my honest opinion is somewhat more nuanced. These models tend to gravitate towards some level of writing competence, based on how good we are at filtering pre-training data and creating supervised data for fine-tuning. However, that level is still far below my current professional writing, and I find it dreadful to read compared to good writing. Plenty of my students cannot "see" this, as they are still below the level of current LLMs, and I caution them against relying too heavily on LLMs for writing, since they would then never learn good writing or "reach above" LLM-level writing. Instead, they must read widely and reflect. I also always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually; in doing so, they consider why I disagree with their current writing and hopefully learn to become better writers.


Bitterpilled. Wow, the audio mixing on that clip is great. I miss art like this. I'm afraid that nothing will recapture the way I felt watching GitS the first time.


There are so many pieces of media that I wish I could fully scrub from my memory so I could experience them a second time.


You just invented a category for a list! Going to have fun thinking of mine.


Agree. Also, deference to consensus has always been a thing. "Best practices" is a thing at all levels of school and work. So it's very much a human thing; AI just drastically compresses the timeline.

Importantly, it's not wrong. I say this as someone who seems to have the contrarian gene. I am worried too that the status quo is now instant and all-consuming for anyone anywhere. But there's still hope, in that AI compresses ramp-up speed for anyone who would have had the capacity to branch out anyway. So that's good.


I think LLM writing is probably a short-term fad. It doesn't provide any value and no one likes reading it. That said, anywhere value can be extracted by posting writing will be completely destroyed by LLMs as people try to grift their way in.

Either we find some way to filter out AI slop or the internet just stops being used to post and consume content.




It's similar to the "workslop" problem where you can generate reports and documents rapidly, but the real work has shifted to the receiver who has to review and correct mistakes. In open source this has moved to the PR review being the actual work while generating the code and submitting it is worthless.

Obviously this is nonsensical long term. Why would I want to receive your LLM output when I could get the same output myself?


I think the most interesting idea here is the idea of people purposely keeping secrets in order to maintain advantages.

Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear on whether they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.

Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.

In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever that you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.

But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the cost of execution has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous one. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.

And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.


Does anyone have a breakdown from the case itself about which particular features of these social media apps make them cross the threshold into the "addictive" classification?

- Infinite Scrolling?

- Play Next Video Automatically?

- Shorts?

- Matching to your peer group?

- Variable Reward?

- Social Reciprocity?

- Notifications?

- Gamification (Streaks)?

Was the case won on the argument that it is the aggregate of these things (and many more, I am sure)? The power imbalance between the user and the company? Was there some particular subset of them that they rested their argument on? I'm just genuinely curious how you can win a very challenging case like this without inadvertently lassoing in so many other industries that your arguments seem ludicrous.


I'm somewhat skeptical of this "enter the trades" movement. Actually, I am more skeptical of that statement than I am of LLMs replacing white-collar work in general. I think parts of coding are being replaced quickly because they are the parts that don't require discernment. The trades likely contain just as many automatable parts, and just as many parts requiring discernment, as white-collar work. At this moment in history, the automatable parts of the knowledge-based world are being automated. People think the physical world is somehow different, but with world models (along the full spectrum of what that means) the physical world will be just as trainable as the knowledge-based world.

tldr; Just like knowledge work, most trade work is probably mostly repeated (i.e. very trainable) tasks with a small amount of taste and discernment applied. The repeated parts will be trainable; the discernment may be trainable. I don't think the physical world is necessarily any safer than the knowledge world.


The difference is the physical aspect of the trades. The design for wiring can be (and already has been) automated, but you physically need an electrician on site to pull the wires. So I can see a hollowing out of the engineers, but not the actual electricians.

That being said, the absolute focus on trades from the fed right now just reeks of the wild pendulum swing. It used to be 'go to college to get a good job'; then we had too many college grads. In ten years we'll have a glut of people trained in the trades with no prospects.

It just keeps swinging back and forth and somehow Joe Regularworker keeps losing.


Indeed. If you squint a little, it kind of looks like the machines are trying to shift to a world where we are just meat puppets to do the tricky stuff there aren't robotics for (yet). :(


Cory Doctorow's "The Reverse-Centaur’s Guide to Criticizing AI" [1] agrees with you:

"<...> a reverse centaur is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine."

[1] https://doctorow.medium.com/https-pluralistic-net-2025-12-05...


Or humans are just the "sex organs" that work to bring about the artificial life-forms that come next.


Have you seen what the Unitree G1 can already do? I see the writing on the wall for going onsite and pulling wires.


Yeah, things change. What do you propose to do about that? The only people who lose are the ones who can't accept that they may need to change careers to make more money.


Robots are expensive; software is not. I can instantly duplicate software a million times and run it in parallel; I can't just produce a million robots. The physical world is always harder.

Even if we get robots that can, say, build roads start to finish, there is still a HUGE gap between that and it actually being used. There is a hard floor, too. Robots are made of physical things, physical things have scarcity, and there's no way around that to our knowledge. Even if you can build the robot for 1 cent, the material cost will still exist.


> Robots are expensive

People are not, though, and all the folks who are no longer necessary in knowledge work are available for physical work.


Dark thoughts... Imagine a future where most human beings are just overseen by an LLM while wearing AR work glasses. Barely aware of what (physical) work we are doing as we overlay our hands within the projections of our AR glasses. Every task is decomposed into a set of small physical steps; you don't even think about what you are trying to actually accomplish, just follow the steps one at a time. I wonder if an entire fast food restaurant could be run in this fashion? No managers, no shift supervisors, just a skeleton crew doing one step of a task at a time.


Why have fast food restaurants at all at that point? Just have everyone eat the same mass-produced, nutritionally-optimized substance, and use the AR vision to superimpose pretty pictures over that food. Varied meals are for the rich.


Hasn't the US already minimized the cost of all the construction work that is "the parts that don't require discernment" down to minimum-wage, who-cares-if-they're-documented-or-not day workers?


Seems the answer is no; the average wage is about $25/hr, depending on region.


Cool, I can make that working at Walmart many places nowadays.


The average for Walmart is $18.25.


Have you heard of any good projects for running isolated containers on NixOS that are cheaply derived from your own NixOS config? Because that is what I want. I want a computer where I can basically install every non-stock app in its own little world, where it thinks "huh, that is interesting, I seem to be the only app installed on this system".

Basically, I want to be able to run completely unverified code off of the internet on my local machine and know that the worst thing it can possibly do is trash its own container.

I feel like NixOS is one path toward getting to that future.



There is also https://microvm-nix.github.io/microvm.nix/ if you want increased isolation.


I can recommend MicroVM.nix, since it allows for multiple VM runtimes like QEMU, Firecracker, etc.

There's also nixos-shell for ad-hoc virtual machines: https://github.com/mic92/nixos-shell


Can you do those ad-hoc though? I was looking into this too. I feel like it requires a system config change, apply, and then you need to do container start + machinectl login to actually get a shell.

That's definitely what I want... most of the time.


Yes, NixOS containers can be run in:

* declarative mode, where your guest config is defined within your host config, or

* imperative mode, where your guest NixOS config is defined in a separate file. You can choose to reuse config between host and guest config files, of course.

It sounds like you want imperative containers. Here's the docs: https://nixos.org/manual/nixos/stable/#sec-imperative-contai...
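To make that concrete, here is a rough sketch of the imperative workflow using the `nixos-container` CLI (commands per the NixOS manual; `myapp` and `./myapp.nix` are hypothetical names, and everything runs as root on a NixOS host):

```shell
# Create a container whose guest NixOS config lives in a separate file
sudo nixos-container create myapp --config-file ./myapp.nix

# Start it and get a root shell inside
sudo nixos-container start myapp
sudo nixos-container root-login myapp

# Tear it down when done
sudo nixos-container destroy myapp
```

No host rebuild is required; the container can be created, updated, and destroyed ad hoc.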


Oh I totally missed that!


sounds like you want qubes os https://www.qubes-os.org/


> I want a computer where I can basically install every non stock app in its own little world, where it thinks "huh, that is interesting, I seem to be the only app installed on this system".

NixOS containers are the most convenient way to do this, but those will map the entire global nix store into your container. So while only one app would be in your PATH, all other programs are still accessible in principle. From a threat-modelling perspective, this isn't usually a deal-breaker though.

There's also dockerTools, which lets you build bespoke docker/podman images from a set of nix packages. Those will have a fully self-contained and minimal set of files, at the expense of copying those files into the container image instead of just mapping them as a volume.
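A minimal sketch of the dockerTools approach (assuming a recent nixpkgs, where `copyToRoot` superseded the older `contents` argument; `hello` stands in for whatever package you actually want isolated):

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildImage {
  name = "hello-only";
  tag = "latest";

  # Only the listed packages end up in the image root
  copyToRoot = pkgs.buildEnv {
    name = "image-root";
    paths = [ pkgs.hello ];
    pathsToLink = [ "/bin" ];
  };

  config.Cmd = [ "/bin/hello" ];
}
```

Build with `nix-build` and load the result with `docker load < result` (or `podman load`).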


https://spectrum-os.org/ is trying to marry QubesOS (everything runs inside a VM) with Nix. It's still very much in development, though.


If containers are safe enough for your use case, then just use NixOS containers; they take just a few more lines to set up in a regular NixOS config.

If that isn't enough, there's microvm.nix, which is pretty much the same in difficulty/complexity but runs everything inside a very slim and lightweight VM with stronger isolation than a container.


Sounds like Ghaf might be what you're after: https://ghaf.tii.ae/ghaf/overview


Depends whether you consider rootless Docker "cheap". I tried running ZeroClaw in a Nix-derived Docker image (spoiler: it was a bad idea to use ZeroClaw at all, since the harness is very buggy), and there is still the potential for container-escape zero-days, but that's the best I've found. Also, Nix's own containerization is not as hermetic as Docker's; they warn about that in the docs.


That's hard, given that most apps have dependencies and often share them.

It will always look like curl or bash or something is available.

What's wrong with another user account for such isolation?

They can be isolated with namespaces and cgroups. Docker and Nix are just wrappers around a lot of OS functionality, with their own semantics attempting to describe how their abstraction works.

Every OS already ships with tools for controlling user access to memory, disk, CPU, and network.
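For illustration, the "raw OS primitives" approach can be sketched with util-linux and systemd tools (flags are real, but this is a hand-rolled illustration, not a hardened sandbox; `./untrusted-app` is a hypothetical binary):

```shell
# Run a shell in fresh user/PID/mount namespaces, unprivileged
unshare --user --map-root-user --pid --fork --mount-proc bash

# Constrain resources with a transient cgroup scope via systemd
systemd-run --user --scope -p MemoryMax=512M -p CPUQuota=50% ./untrusted-app
```

Network isolation (`--net`) and filesystem scoping would still be needed for real containment, which is roughly the plumbing Docker and Nix containers wrap for you.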

Nix is just another chef, ansible, cfengine, apt, pacman

Building one's own distro isn't hard anymore. If you want ultimate control, have a bot read the LFS documentation and build it to your needs.

Nothing is more powerful than the raw git log and source. Nix and everything else are layers of indirection we don't need.


> Nix is just another chef, ansible, cfengine, apt, pacman

No, because Nix code is actually composable. These other tools aren't.


Not only is it composable, but it is generalizable. So yes there is also chef, ansible, apt, uv, nodeenv, etc... or there is just nix. It is able to be the "one tool" to rule them all, often with better reproducibility guarantees.

