dasil003's comments | Hacker News

This is very naive and reductive thinking. Experiments have a cost; you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you'd better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.

More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.


I think you and 7e are both right. Being able to iterate some N orders of magnitude quicker is a big deal. This doesn’t eliminate design and UX. Rather, it merges it with high iteration speed to produce a form of “play”.

“Play” is what produced at least two (likely more) generations of attentive (and therefore competent) programmers. The hype around LLMs is painful, yes, but attentive human minds will ultimately bust through it.


I agree with you. So far what I see is that AI amplifies an individual's output in many domains, but the value of that is 100% contingent on their judgment. It changes the economics of many tasks, but fundamentally it can't really help you if you don't actually know what you want—which describes a shocking number of people in the corporate world, where most people are there for a paycheck, and perhaps to pursue some social marker of "success".

I'm under no illusions about the goals of AI company execs, who need to justify their valuations (and expenses!) by capturing a huge chunk of global employment value, or about the CEOs of many big companies whose financials are getting squeezed for all sorts of reasons and who are all too happy to jump on the efficiency narrative of AI to justify layoffs that would have been necessary anyway. Also, AI will keep getting better, and it will certainly move up the food chain—it's already replaced a lot of what I did, and I assume capabilities will continue improving for a while even after model capabilities plateau, as we improve harnessing, tooling and practice.

So yeah, it can replace a lot of what we do, but I'm not running scared, because every step of the way I've seen that software people are the ones who actually get the most out of LLMs. Sure, it can write all the code, so the job changes, but even as our workflows completely change, it's giving us more of an edge (if we're open to it) than it gives anyone non-technical. At this stage it still feels empowering on an individual level.

Now I do worry about the consolidation of power and wealth in a tech oligarchy, but that's an issue we need to deal with at a societal and government policy level. Essentially, I can see AI as having radically different outcome potential based on how it's governed. In one way it can be very empowering to small teams, and reduce coordination costs, and increase competition by allowing smaller groups of people to make more scalable companies. But it could also lead to unprecedented concentration of wealth and power if a small set of AI companies are allowed to capture all the economic gains. I don't think there are any easy answers, but I do feel hopeful that we can figure something out as a society—it certainly seems to be creating some unified sentiment across political lines that have been so polarized and divisive over the last decade.


The problem for our jobs is that it amplifies output by 1000x. However, I do agree that developers with experience are needed to actually harness these tools. I've been able to do wonders with them, but I can't see a junior dev doing 10% of the work that I can with them.

It's a strategy problem, and the current version of the US is spectacularly bad at strategy.

Once upon a time the US had visionaries steering DARPA and making useful bets on the future.

Now strategy is defined by stonks-go-up, quarterly returns, democracy bad, and CEO narcissism, and that's a potently catastrophic combination.


I get that working in the corporate world is often alienating, and also that one might have to accept a bullshit job in order to put food on the table, so on some level I say do what you gotta do.

But beyond that, man this is such a depressing way to live. I've worked a lot of jobs I don't like and put in varying degrees of effort based on how I feel about the people and the situation. But generally one value I live by is that if I'm paid to do something I'm going to try to do a good job at it. That doesn't mean burning myself out or going above and beyond for a boss or company that doesn't deserve it. But for my own integrity and self-worth I have to at least put in a baseline professional effort. If I can't stomach even doing that then it's a clear sign I need to be planning my exit, anything less is disrespectful to my basic self.


Kudos on the analysis, the conclusion is right. I would go further and say even if the metric was completely fair and unbiased it would still not tell you anything useful, and any manager or executive that used it as part of any kind of headcount decision (firings, layoffs, which team to grow, etc) is a moron who probably should be facing tough questions from their own management chain.

AI is a tool, everything it does is attributable to the person who prompted it. Anything else is no different from the long-understood fallacy of counting lines of code.


so, what you're saying is that we can definitely expect this metric to be used to make hiring / firing decisions then


I agree with you, the "replacing people" narrative is not only wrong, it's inflammatory and brand suicide for these AI companies, who don't seem to realize (or just don't care about) the kind of buzz saw of public opinion they're walking straight towards.

That said, looking at the way things work in big companies, AI has definitely made it so one senior engineer with decent opinions can outperform a mediocre PM plus four engineers who just do what they're told.


This seems like dramatically overstating the mistake. Yeah it was expensive, and yes this could easily have been foreseen, but that's really small potatoes compared to mistakes I've seen. I mean, I've seen promos off shit that never even fully worked beyond pilot scale and had to be rolled back because it was fundamentally flawed on a purely technical level.


Buy me a beer and I can tell you some very poignant stories. The best ones are where there is a legitimate abstraction that could be great, assuming A) everyone who had to interact with the abstraction had the expertise to use it, B) the details of the product requirements conformed to the high level technical vision, now and forever, and C) migrating from the current state to the new system could be done in a bounded amount of time.

My view is over-engineering comes from the innate desire of engineers to understand and master complexity. But all software is a liability, every decision a tradeoff that prunes future possibilities. So really you want to make things as simple as possible to solve the problem at hand as that will give you more optionality on how to evolve later.


When you’re out in the infinite empty of space many light years from any livable environment, you damn well better know how your warp drive works to be able to fix it, and that is what Star Trek portrayed.


This is such a middlebrow dismissal. Like yeah, people are speculating based on personal experience and knowledge, so what? If you have a different viewpoint, or something specific you'd like to see data on, call it out and engage in the discussion. Don't just be the "data or GTFO" guy, because that's a super bland and pointless take.


"Oh yeah, you think you know something about entrepreneurs? Name every business."


This article is not bad overall, but it does over-index on making software development costs and tradeoffs legible. Of course leadership does need to make decisions, and so the quest for better data and better cost modeling will continue, and rightly so, Goodhart's law notwithstanding.

I do like this bit though:

> A large codebase also carries maintenance costs that grow over time as the system becomes more complex, more interconnected, and more difficult to change safely. Every engineer added to maintain it increases coordination costs, introduces new dependencies, and adds to the organizational weight that slows decision-making. The asset and the liability exist simultaneously, and for most of the past twenty years, the financial environment masked the liability side of that equation.

And the insight that LLMs are exposing this reality is absolutely true. The funny thing is they are exposing it by accelerating both good and bad engineering practices. Teams with good engineering judgment will move faster than ever with fewer people, and teams with bad engineering judgment will bury themselves in technical debt so fast the wheels will come off.

For me, running an engineering org is primarily about talent acquisition and empowering those ICs with judgment to move quickly. How well systems and teams scale depends on the domain, the product, and how they allow you to decouple things. With the right talent and empowerment there are often creative ways to make product and system tradeoffs and iterate quickly to change the shape of ROI. Any mapping to financial metrics is a hugely lossy operation that can't account for such changes. It might work in mature companies that are ossified and in the second half of their lifecycle, but in growing companies I think it's fundamentally misguided and would amount to empowering the wrong people.

