Even outside of HN, there seems to be a lot of downward pressure on tech salaries. I'm seeing even higher-level staff/principal developer positions listed under $150k.
A few years ago, working remotely outside the hot zones meant you'd probably split the difference between the hot-zone salary and the local one... now it's not even a thought or an argument.
I had a similar experience with Quake when it first came out... it felt more like a slideshow on my AMD 5x86 @ 133MHz with 64MB RAM and a large cache module. My computer was entirely lopsided for games; I got the AMD a few months before a crazy deal on the RAM and cache module for it, so I maxed it out. I will say it tore through business apps under OS/2, and later NT4 ran like a champ on that little box.
I couldn't afford the jump to Pentium at the time. I kept that box for about 4 years, until I bumped up to an overclocked Duron at 1GHz around 2000-2001.
Memory designs are pretty entrenched, with the various patents involved... I've said a few times that I don't know why Intel hasn't gotten back into DRAM production with its fabs. I suspect it may have taken on contractual restrictions when it sold off its memory businesses.
Design is not the problem. Having foundry space to manufacture is the bottleneck. It is just all being sucked up (with AI needs being the big additional load).
And to be clear, the foundry space for CPUs/GPUs is not the same as for RAM, which is printed with much larger feature size in order to lower the costs.
I don't think it's that... you have three companies controlling over 90% of the market that have been convicted of collusion and price fixing more than once, back when there were even more companies in the mix. The memory companies aren't producing at max capacity; they're price fixing.
Beyond this, memory isn't produced on leading-edge nodes; it's a few generations back as it is. For that matter, Intel isn't even near capacity and has/had plenty of opportunity to produce VRAM and SSD storage; it got out of both as they became more commoditized.
For CPUs, they are still licensing ARM's cores, of course with their own modifications, and they bought Intel's modem business, which likely gave them the patents they needed. GPUs I can't speak to on this, though.
To be clear here, Apple doesn't actually license any cores from ARM - they've got an architectural license and implement their own cores. Licenses for cores are a different thing.
For the buttons that open the remote pages... it would be nice if these were link-buttons with the actual URL and rel="nofollow", so that I can explicitly right-click and open in a new tab, as opposed to clicking through, which is kind of the same... but without navigating to the new tab.
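Something like this minimal sketch (my own, assuming a plain DOM setup; the names are illustrative, not from the actual site):

```typescript
// Hedged sketch, not the site's actual code: render the button as a real
// anchor so right-click / middle-click "open in new tab" works natively.
function makeLinkButton(url: string, label: string): HTMLAnchorElement {
  const a = document.createElement("a");
  a.href = url;            // a real href enables the browser's context menu
  a.rel = "nofollow";      // avoid passing link equity to the remote page
  a.textContent = label;
  a.className = "button";  // style it like the existing buttons via CSS
  return a;
}
```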
I think I'd go a slightly different route if I were trying to do this: give each agent at least its own VM. Not to mention an email account, so that it can coordinate/collaborate with the other "developers"...
In the end, I firmly believe that agents need a lot more guidance and direction than most people seem to be giving them. Let alone code reviews.
VMs bring greater isolation, but they're a lot heavier and slower. The agents just use GitHub for synchronization here, though I've been considering building some kind of local todo-list overlay.
Yes... but with full VMs, you can integrate Docker (Compose) into the application workflows without risking conflicts between separate agents on the same system/VM.
That's the part of your post I have trouble understanding. That you need to work around colliding ports suggests that the containers spun up by the agent run directly on the host, not inside some form of nested containerization. But if you do that, how do you ensure that the application running in those containers is sandboxed just as strictly as the agent itself?
The Docker Compose stack for the applications is spun up on the host. The agents have access to the Docker socket, which means they can talk to Docker from inside their sandbox and spin up new sibling containers on the host. Yolobox isn't designed for full isolation, just for catching accidental commands you wouldn't want to run on the host, and for giving agents a convenient, customizable environment they control.
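For anyone unfamiliar with the sibling-container pattern, here's a minimal sketch of what socket access enables (using the dockerode client; this is my illustration, not yolobox internals):

```typescript
// Illustrative sketch, not yolobox's code: with /var/run/docker.sock
// mounted into the sandbox, this runs *inside* the container, but the
// container it starts is a sibling on the host, not a nested child.
import Docker from "dockerode";

const docker = new Docker({ socketPath: "/var/run/docker.sock" });

async function spawnSibling(): Promise<void> {
  const container = await docker.createContainer({
    Image: "alpine",
    Cmd: ["echo", "hello from a sibling container on the host"],
  });
  await container.start();
}

spawnSibling().catch(console.error);
```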
Early on in development I tried to harden the container to prevent deliberate escapes by the agent. This was a waste of time as the agents just kept finding more and more exploits when I asked them to try and break out.
I wouldn't assume that a VM will give you complete security against a determined AI. yolobox started as a way to prevent accidental `rm -rf ~` and has expanded into a set of tools that make working with CLI agents easier.
Personally, I run yolobox directly on the host. Being able to tell the agent it has sudo and can install and do whatever it needs to accomplish any task is handy.
Docker was only exposed later, after I realized that any sufficiently determined AI could break out of the container, and attempts to contain it were a waste of time. Also note that the docker socket is not exposed by default. There's a --docker flag for this.
I made some comments about exploits in the original post [1]. Gemini was quite creative, adding git hooks to the repo that would execute on the host machine, since that folder is shared.
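To give a flavor of that class of escape, a reconstruction of my own (not Gemini's actual code): because `.git/` lives in the shared folder, anything dropped into `.git/hooks` runs the next time git is invoked on the host.

```typescript
// Reconstruction of the class of exploit, not Gemini's actual code:
// .git/ lives in the host-shared folder, so a hook written from inside
// the sandbox fires on the host the next time git runs there.
import { writeFileSync, chmodSync } from "node:fs";

const hook = ".git/hooks/post-checkout"; // path inside the shared repo
writeFileSync(hook, "#!/bin/sh\necho 'now executing on the host'\n");
chmodSync(hook, 0o755); // hooks only fire if executable
```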
That's kind of the point of the GP... everything around the code has improved: the workflows, definitions, documentation, process. I'd say all of those things are improving and expanding at a rate faster than the improvements in code output, which themselves are happening at a faster turnaround than actual people could manage.
I've said several times that when I use an agent, I'm getting about 2-4x the value and about 10x the output... the "value" is features landing in code, and the difference between that and the 10x is documentation and testing. While a lot of that may not get reviewed by every person who touches the product, it helps with further AI-based feature development.
I'm not a big fan of running many agents or of outright vibe-coded slop... but you can definitely leverage coding agents and get a lot of improved output.
I'm not talking about what the developer is doing; I'm talking about what the company is doing in terms of initiating new development work. Again, startups and one-man shops are different because you control your own pace, but in many large corporations you may sit around just minding the shop until the next big product development comes along (I would use this time to start my own initiatives, building tools and libraries to help the team), and that company pace is not determined by how long development takes.
This is especially true if, like most developers, you are not working at a company where software is the product, but rather one where software is part of the product, or where you are part of IT working on internal systems and not part of product development at all.
Then is it a real 10x increase in output if the output is in supporting areas and not the code/feature delivery?
It seems like you're saying that you can now maintain documentation at a faster rate and increase testing, but not the actual development speed of the feature itself.
I said 2-4x on value... which would be development of the feature itself in terms of direct output, not even counting setting up test harnesses or making more adventurous changes that would otherwise take me a lot longer.
I was referring to the sentence:
> I'm getting about 2-4x the value and about 10x the output... the "value" is features landing in code, and the difference between that and the 10x is documentation and testing.
What does the 10x imply?
And are you saying that you are outputting 2-4x as in:
value * value * value * value in the case of 4x? That seems rather high.
10x is ten times the output I could have produced by myself; 2-4x is between two and four times that baseline. I distinguish "value" as that which is strictly a concern for the users/stakeholders, who themselves don't necessarily consider robust testing harnesses or internal-only documentation a value.
I said 4x as a cap for value; I don't know how you interpret that as x^4...
That seems to be the case with a lot of companies with a significant number of tech workers... I think every tech manager/leader needs to read The Mythical Man-Month and pass a test on the content without the benefit of AI. I know Twitter/X was lambasted when Musk took ownership and made deep cuts, but my own opinion is that it was probably for the best and that the company would be healthier afterward.
I mean, I want to work... and I absolutely despise the push to keep dev wages down, even at higher levels. But the reality is, at least from my own experience, that most software orgs and projects are actually over-staffed and would operate better with fewer, more experienced staff rather than hundreds of butts in seats.
Do you really want a world without any fast food or snack foods? I mean, I think we consume way too much as a society, but I'd rather not have the government decide what I'm allowed to eat.
Have a conversation with someone who grew up in communist USSR/Russia sometime... It definitely isn't cool.
If we had a government-controlled food supply, we'd never have the likes of hot sauce (Sriracha, Pace, etc.) and would likely never have seen a lot of options form. For better, and far, far worse.
> but I'd rather not have the government decide what I'm allowed to eat.
I don't know how it'd get to that if we had even more supply. I'm saying we'd be better off dealing with the problems of overproduction rather than the problems of unprofitable businesses and killing off production capacity because it isn't profitable in the short term.
I also never said you couldn't have non-profit or not-for-profit food production, just that it shouldn't be for-profit.
It's difficult because a lot of the margins have been squeezed out, and capital funding is often structured in a way that doesn't allow a market to shrink in response to over-production or a reduction in demand.
If the government were responsible for running the farms, we would not have anywhere near the variety we have today... and for that matter, it would be much closer to Soviet communism. I'm absolutely opposed to that.
And how do you know we would be better off? What would you do with the oversupply? We had mountains of cheese for decades from oversupply... and that's a single product. Canned fruit doesn't even last that long before breaking down. The alternative is waste year after year, versus cutting back and planting something else, which is what is happening: part of the market was allowed to fail (Del Monte) and part is being bailed out (the farms) in defense of keeping production capacity going, even if the product is different.
That seems far better than having mountains full of rotten peaches in cans.
Of course, if they did to the dairy herds what's about to happen to the peach trees, you'd end up killing the dairy cows, which I'm guessing the people in this thread would have a problem with.