Hacker News | thwarted's comments

> It turns out that if you're writing an essay or a youtube script you don't have to tell me that you're going to explain something to me before you explain it to me.

I do if I'm looking to pad the essay or video to make it longer.


This position that no wrongdoing or illegal action can be discouraged, either because someone has to eat or because it's "regular people" who bear the consequences through whoever they chose to manage their investments, is getting old. Accountability has been diluted so much that no one is accountable. What about the people who are harmed, the victims: your grandparents, teachers, all sorts of regular people? Nothing is going to get better if we're constantly looking for the most appropriate person to place blame on. Maybe people should be paying more attention to the things they invest in/own.

Most people have no idea what they're invested in. Most are invested in mutual funds through their work or 401k. My point isn't that we shouldn't hold people accountable. My point is that going after owners/shareholders is not the solution we want because it hurts people who have nothing to do with what happened. We need to go after executives.

Once people are impacted, maybe they'll start paying attention to what they're investing in.

> We have the laws. What they’re doing is illegal. … It’s an enforcement problem.

Because some people think laws only ever exist to restrain as a show of power over others and something is only illegal if you get caught.

And some people just want to be contrarian and acting against the law is the ultimate punching-up.

Some laws are just a good idea, and provide benefit, or even just expectation/predictability, to everyone.


Japan puts these U-turn fences up. The sign near the U-turn says "get off your bike and walk it".

https://pasteboard.co/IlXSlFUOgULW.png

I'm not sure how I feel about them. I like that they made a way to get you off your bike. I dislike that the path seems plenty wide enough to accommodate bikes and would make a useful bike path, or half a bike path, but they want it to be a 100% pedestrian path, even though it's not remotely crowded.


> We pay huge communication/synchronization costs to eke out mild speed ups on projects by adding teams of people.

Something Brooks wrote about 50 years ago, and the industry has never fully acknowledged. Throw more bodies at it, be they human bodies or bot agent bodies.


The point of The Mythical Man-Month is not that more people are necessarily worse for a project; it's that adding them at the last minute doesn't work, because they take a while to get up to speed and existing project members are distracted while trying to help them.

It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes).
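Brooks's quadratic-channels observation is the usual way to quantify why the larger team is less efficient per person: with n people there are n(n-1)/2 pairwise communication channels, so coordination overhead grows quadratically while hands grow linearly. A quick sketch:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n team members (Brooks's n(n-1)/2)."""
    return n * (n - 1) // 2

# Coordination overhead grows much faster than headcount.
for n in (3, 10, 50):
    print(n, channels(n))  # 3 -> 3, 10 -> 45, 50 -> 1225
```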


Interesting point. And from the agent’s point of view, it’s always joining at the last minute, and doesn’t stick around longer than its context window. There’s a lesson in there maybe…

The context window is the onboarding period. Every invocation is a new hire reading the codebase for the first time.

This is why architecture legibility keeps getting more important. Clean interfaces, small modules, good naming. Not because the human needs it (they already know the codebase) but because the agent has to reconstruct understanding from scratch every single time.

Brooks was right that the conceptual structure is the hard part. We just never had to make it this explicit before.


A small difference is that AGENTS.md gets added every time, so the evolution of that is essentially your agent's equivalent of team experience.

But there is an order of magnitude difference between coordinating AI agents and coordinating humans: the AIs are so much faster and more consistent than humans that you can (as Steve Yegge [0] and Nicholas Carlini [1] showed) have them build a massive project from scratch in a matter of hours and days rather than months and years. The coordination cost is so much lower that it's just a different ball game.

[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

[1] https://www.anthropic.com/engineering/building-c-compiler


Then why aren’t we seeing orders of magnitude more software being produced?

Didn't we have a post the other day saying that the number of "Show HN" posts is skyrocketing?

https://news.ycombinator.com/item?id=47045804


I think we are. There's definitely been an uptick in "show HN" type posts with quite impressively complex apps that one person developed in a few weeks.

From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick to add extra views to a frontend, but struggles a lot more in making wide reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly.

But given I've developed 2 pretty functional full stack applications in the last 3 months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before.


I think the proportion of new software that is novel has absolutely plummeted after the advent of AI. In my experience, generative AI will easily reproduce code for which there are a multitude of examples on GitHub, like TODO CRUD React Apps. And many business problems can be solved with TODO CRUD React Apps (just look at Excel’s success), but not every business problem can be solved by TODO CRUD React Apps.

As an analogy: imagine if someone was bragging about using Gen AI to pump out romantasy smut novels that were spicy enough to get off to. Would you think they’re capable of producing the next Grapes of Wrath?


> I think the proportion of new software that is novel has absolutely plummeted after the advent of AI.

We were not awash in novel software before AI either (say, back in 2019).

I can only assume what you're really trying to say is "AI bad".


> I can only assume what you're really trying to say is "AI bad".

Lmao rarely have I seen a strawman so clearly announced.

No, if you want me to be more precise, it’s that generative AI is limited by its (1) its training corpus and (2) the amount of effort put into its prompts. I don’t think either of those statements are controversial to even the biggest AI bull, but let me know otherwise.

Following from (1) and (2), it takes little effort for even the most milquetoast vibe coder to produce something that there are millions of training examples of and that requires very little specification in terms of prompting: TODO CRUD React Apps.

That’s not to say that it’s impossible to create compelling content or code with AI, it’s just why go through the effort if all you need is a TODO CRUD React App to e.g. make a grocery list or create a “Show HN”.


This question remains the 900-pound gorilla of this discussion.

Claude Code released just over a year ago, agentic coding came into its own maybe in May or June of last year. Maybe give it a minute?

It’s been a minute and a half, and I don’t see evidence that you can task an agent swarm to produce useful software without your input or review. I’ve seen a few experiments that failed, and I’ve seen manic garbage, but not yet anything useful outside the agent operator’s imagination.

Agent swarms are what, a couple of months old? What are you even talking about? Yes, people/humans still drive this stuff, but if you think there isn't useful software out there that can be handily implemented with current-gen agents with very little or no review, then I don't know what to tell you, apart from "you're mistaken". And I say that as someone who uses three tools heavily but otherwise has no stake in them. The copium in this space is real. Everyone is special and irreplaceable, until another step change pushes them out.

The next thing after agent swarms will be swarm colonies, and people will go "it's been a month since agentic swarm colonies, give it a month or two". People have been moving the goalposts like that for a couple of years now, and it's starting to grow stale. This is like self-driving cars, which were going to be working in 2016 and replace 80% of drivers by 2017, all over again. People are falling for hype instead of admitting that while it appears somewhat useful, nobody has any clue whether it's 97% useful or just 3% useful, but so far it's looking like the latter.

I generally agree, but counterpoint: Waymo is successfully running robocabs in many cities today.

When does it come to Mumbai?

They're launching in London this year. So... 2035?

I would love to see this in Mumbai or Dhaka or something like that, just like thrown in there. Can it move 2 meters without stopping?

Don't get me wrong, I like Waymo, but 2035 is probably realistic for cities in developing countries.


The whole point is that an agent swarm doesn’t need a month, supposedly.

We're talking about whether the human users have caught up with usage of tech, not the speed of the tech itself.

Why do you assume there isn't?

Enterprise (+API) usage of LLMs has continued to grow exponentially.


I work for one of those enterprises with lots of people trying out AI (thankfully leadership is actually sane, no mandates that you have to use it, just giving devs access to experiment with the tools and see what happens). Lots of people trying it out in earnest, lots of newsletters about new techniques and all that kinda stuff. Lots of people too, so there's all sorts of opinions from very excited to completely indifferent.

Precisely 0 projects are making it out any faster or (IMO more importantly) better. We have a PR review bot clogging up our PRs with fucking useless comments, rewriting the PR descriptions in obnoxious ways, that basically everyone hates and is getting shut off soon. From an actual productivity POV, people are just using it for a quick demo or proof of concept here and there before actually building the proper thing manually as before. And we have all the latest and greatest techniques, all the AGENTS.mds and tool calling and MCP integrations and unlimited access to every model we care to have access to and all the other bullshit that OpenAI et al are trying to shove on people.

It's not for a lack of trying, plenty of people are trying to make any part of it work, even if it's just to handle the truly small stuff that would take 5 minutes of work but is just tedious and small enough to be annoying to pick up. It's just not happening, even with extremely simple tasks (that IMO would be better off with a dedicated, small deterministic script) we still need human overview because it often shits the bed regardless, so the effort required to review things is equal or often greater than just doing the damn ticket yourself.

My personal favorite failure is when the transcript bots just... don't transcribe random chunks of the conversation, which can often lead to more confusion than if we just didn't have anything transcribed. We've turned off the transcript and summarization bots, because we've found that 9/10 times they're actively detrimental to our planning and lead us down bad paths.


I built a code reviewer based on the Claude Code SDK that integrates with GitLab; pretty straightforward. The hard work is in the integration, not the review itself. That is taken care of by the SDK.

Devs, even conservative ones, like it. I’ve built a lot of tooling in my life, but I never had the experience of devs reaching out to me that fast because it is ‘broken’. (An expired token, or a bug with huge MRs.)


It doesn't appear to have improved the quality of the software we have either.

We are. You can check App Store releases YoY; it's skyrocketing.

I have barely downloaded any apps in the last 5-10 years except some necessary ones like bank apps. Who even needs that garbage? Steam also has tons of games, but 80% make almost no money and no one cares. Just piles of garbage. We have limited hours per day, and those are not really increasing, so I wonder where the users are.

Here’s a talk about leaning into the garbage flow. And that was a decade ago.

https://youtu.be/E8Lhqri8tZk

I can’t imagine the number being economically meaningful now.


"The future is already here, it's just not evenly distributed"

> But there is an order of magnitude difference between coordinating AI agents and humans

And yet, from https://news.ycombinator.com/item?id=47048599

> One of the tips, especially when using Claude Code, is explicitly ask to create a "tasks", and also use subagents. For example I want to validate and re-structure all my documentation - I would ask it to create a task to research state of my docs, then after create a task per specific detail, then create a task to re-validate quality after it has finished task.

Which sounds pretty much the same as how work is broken down and handed out to humans.


Yes, but you can do this at the top level, and then have AI agents do this themselves for all the low level tasks, which is then orders of magnitude faster than with human coordination.
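The coordination pattern being described is an ordinary fan-out: a top-level coordinator decomposes the work and dispatches subtasks concurrently. As a hedged sketch in plain Python, where `run_agent` is a hypothetical stand-in for however you invoke an agent (a Claude Code subagent, an API call, etc.), not a real API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Hypothetical placeholder: in practice this would call out to an
    # agent/LLM. Here it just echoes the task so the sketch is runnable.
    return f"result of {task!r}"

def fan_out(tasks: list[str]) -> list[str]:
    # Top-level coordinator: dispatch subtasks concurrently and collect
    # results in order. Each subtask could itself fan out further.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_agent, tasks))

results = fan_out(["research docs", "fix section A", "re-validate quality"])
print(results)
```

The point of the sketch is only the shape: decomposition happens once at the top, and the per-task coordination cost is a function call rather than a meeting.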

> But when everyone spends more, the effect is merely to raise the bar that defines special. The average American wedding now costs $30,000, roughly twice as much as in 1990. No one believes that couples who marry today are happier because weddings cost so much more than they used to.

It seems odd to claim this increase is due to keeping up with others' weddings when inflation between 1990 and 2015 was roughly the difference here. The weddings were/are more expensive because everything was/is more expensive. $15,000 in 1990 had the same purchasing power as ~$27,000 in 2015. So this hardly seems related to bigger, more extravagant weddings. People have had to spend more to maintain the same quality of wedding as the previous generation.
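The adjustment above can be checked with the standard CPI ratio; the annual CPI-U averages used here (roughly 130.7 for 1990 and 237.0 for 2015) are approximate assumptions, not exact figures:

```python
# Rough inflation adjustment via CPI ratio.
# Assumed approximate BLS CPI-U annual averages: 1990 ~130.7, 2015 ~237.0.
CPI_1990 = 130.7
CPI_2015 = 237.0

def adjust_1990_to_2015(amount_1990: float) -> float:
    return amount_1990 * CPI_2015 / CPI_1990

print(round(adjust_1990_to_2015(15_000)))  # roughly $27,000 in 2015 dollars
```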


TBH, it seemed such an obvious point you're making that I assumed the author had to be comparing inflation-adjusted dollars, but from the (very little) digging that I did, it looks like that's not the case.

In fact, weddings decreased in inflation-adjusted cost between 1990 and 2023: https://ktvz.com/stacker-lifestyle/2024/03/01/how-us-wedding...

I would assume that downward trend has continued as inflation has spiked in the past few years and people had to spend more of their money in other areas.


> I assumed the author had to be comparing inflation-adjusted dollars

As did I.

90s weddings remind me of the Friends episode where Monica was scoping out her wedding. Chandler revealed how much money he had by writing it on a piece of paper (that is, the audience never saw the dollar value), and Monica said something like "oh, we can go with the best one, plan A", and Chandler said he didn't want to spend that much money "on one party". I've always wondered what amount of money that was.


Well, they used the money to buy a house in New York, so it wasn't $15,000, because that's not a down payment for a New York house/apt, even in the 90s.


Inflation has made prices higher, but people's purchasing power has been decreasing all this time. Salaries, benefits etc have all not been keeping up with inflation for decades. It is why young people are marrying later, not able to afford to buy property etc. All the gains the economy has made over the past handful of decades have been captured by a small percentage of the population.


> Salaries, benefits etc have all not been keeping up with inflation for decades

I don't believe that's consistent with the data

https://fred.stlouisfed.org/series/MEHOINUSA672N


Also when you factor in the age of the wedding participants it almost seems like a regression. A couple in their 30s should be able to afford more than a couple in their 20s a generation ago.


> One of the tips, especially when using Claude Code, is explicitly ask to create a "tasks", and also use subagents. For example I want to validate and re-structure all my documentation - I would ask it to create a task to research state of my docs, then after create a task per specific detail, then create a task to re-validate quality after it has finished task.

This is definitely a way to keep those who wear Program and Project manager hats busy.


So the solution is to allow the AI scraping and hide the content, with significantly reduced fidelity and accuracy and not in the original representation, in some language model?


Don't forget the onslaught of ads that will distort the actual publications even more going forward.


The people who maintain open source software are considered "the vendor" by these compliance types. When it comes to open source, the user is really the vendor, and the user has responsibility to themselves for compliance (this is pretty much spelled out in the license and WARRANTY file). The compliance industry doesn't acknowledge how open source works and has tried, since forever, to shoehorn it into a paid-vendor model. Open source maintainers creating destination/marketing websites espousing the advantages of their software as if it were a sellable/buyable product doesn't help and perpetuates that perception.


> "You’re better than this" "you made it about you." "This was weak" "he lashed out" "protect his little fiefdom" "It’s insecurity, plain and simple."

Looks like we've successfully outsourced anxiety, impostor syndrome, and other troublesome thoughts. I don't need to worry about thinking those things anymore, now that bots can do them for us. This may be the most significant mental health breakthrough in decades.


“The electric monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; electric monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.”

~ Douglas Adams, "Dirk Gently’s Holistic Detective Agency"


Unironically, this is great training data for humans.

No sane person would say this kind of stuff out loud; it usually happens behind closed doors, if at all (because people don't or can't express their whole train of thought), and especially not on the internet.

Having AI write like this is pretty illustrative of what a self-consistent, narcissistic narrative looks like. I feel like many pop examples are caricatures, and of course clinical guidelines can be interpreted in so many ways.


What if the space is URL-encoded as %20?


That wouldn't work, because Windows doesn't understand URL-encoded sequences.
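Windows filesystem APIs treat `%20` as the literal three characters, so a percent-encoded path has to be decoded before it's handed to the OS. A minimal sketch using Python's standard `urllib.parse.unquote` (the example path is made up):

```python
from urllib.parse import unquote

# The filesystem would look for a directory literally named "Program%20Files".
encoded = r"C:\Program%20Files\example.txt"

# Decode the percent-escapes first, then use the path.
decoded = unquote(encoded)
print(decoded)  # C:\Program Files\example.txt
```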

