kakapo5672's comments | Hacker News

What if the vibe-coded JS libraries turn out to be pretty much identical to the legacy human JS libraries, aside from being better-commented? That might lead to an existential crisis for some folks.


Exactly.

Clearly, some white-collar jobs will be replaced. Hard to argue against that, given it's already beginning to happen. So the question becomes what is the eventual rate of conversion and what is the subsequent economic impact over time? I don't think anyone has a credible handle on that, except to note that it won't be zero.


Columbia. Challenger.


Yep, and the same with the internet. During the 1990s and 2000s, people kept wondering why the internet wasn't showing up in productivity numbers. Many asked if the internet was therefore just a fad or bubble. Same as some now do with AI.

It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.


Sure, but you have to consider Carl Sagan's point: "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Some truly useful technologies start out slow and get dismissed as fads or bubbles even though they end up having huge impact. But plenty of things that at first appeared to be fads or bubbles truly were fads or bubbles.

Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.


> What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it.

I think there are two layers of uncertainty here. One is, as you say, if the value is worth the investment. The other and possibly bigger issue is who is going to capture the value and how.

Assuming AI turns out to be wildly valuable, I'm not at all convinced that at the end of this money spending race that the companies pouring many billions of dollars into commercial LLMs are going to end up notably ahead of open models that are running the race on the cheap by drafting behind the "frontier" models.

For now the frontier models can stay ahead by burning heaps of money, but if/when progress slows toward a limit, whatever lead they have is going to quickly evaporate.

At some point I suspect some ugly legal battles, as companies attempt to construct some sort of moat that doesn't automatically drain after a few months of slowed progress. Google's recent complaining about people distilling Gemini could be an early signal of this.

I have no idea how any of that would shake out legally, but I have a hard time sympathizing with commercial LLM providers (who slurped up most existing human knowledge without permission) if/when they start to get upset about people ripping them off.


All those racks of Nvidia machines might not pay off for the companies buying them, but I have a hard time believing that people are still questioning the utility of this stuff. In the last hour, Opus downloaded data for and implemented a couple of APIs that I would’ve otherwise paid hundreds a month for, end to end, from research all the way to testing its implementation. It’s so, incredibly, obviously useful.


That something is useful does not necessarily mean that companies will be able to capture enough of its value to make up for the billions in investments they have made/will make in the coming years.

Right now the frontier AI companies are explicitly running a kind of chicken race, increasing their burn rates so much that it gets harder and harder to keep up, in the hope that they (and not their competitors) will be the one left standing. Especially OpenAI and Anthropic, but non-AI companies like Oracle have also joined. If they keep it going, the likely outcome is that one of them folds and the other(s) reap the rewards.

Utility (per cost) will go up the tougher the competition gets; the money captured by any single entity will likely go down as competition increases.


It's only really useful if what you produce with those APIs is useful. It's easy to feel productive with AI, though, in a way that doesn't show up in economic statistics; hence the disconnect.


Well, it might actually decrease GDP in this case, because it's making it so I can just quickly make products that I would've otherwise purchased. But it's also made me more productive, and purchasing things isn't good for its own sake. So maybe measuring progress via GDP isn't ideal?

The thing I'm making with the APIs is very helpful to me, maybe it'll be helpful to others, who knows.


Even the fact that you mentioned NFTs as a comparison hurts my mind.


I mean, it's an apt comparison, given that the Venn diagram between the pro-NFT hucksters and the pro-AI crowd is a circle. When you listen to people who were so publicly and embarrassingly wrong about the future try to sell you on their next hustle, skepticism is the correct posture.


We can't agree on anything if you think AI is a hustle. You are in for a world of surprise.


Columbus was not a genius. He was an idiot who believed the earth was smaller than the scientists of his day calculated, and the scientists were right. Columbus became successful through pure luck, genocide, and cruelty.

Most idiots like Columbus died in obscurity.


Yeah the inclusion of Columbus is admittedly not great, but it's part of the original quote and the overall point is still a good one.


Columbus: the man who didn't know where he was going, and who, when he came back, couldn't tell where he had been.


Also, there's no particular reason to group it in with those two. There are plenty of things that never showed up at all. It's just not a signal. It's kind of like "My kid is failing math, but he's just bored. Einstein failed a lot too, you know." Regardless of whether Einstein actually failed anything, there are a lot more non-Einsteins who have failed.


It didn't take mobile apps 20 years after the launch of the iPhone to add to the economy though, did it?


The iPhone was not the first mobile device or even the first smartphone. Not to mention it did not support mobile applications as we know them today.


That seems a tad reductionist. Why not just say the iPhone was completely inconsequential because, after all, it's simply another "computer"? Why not go back even further and start the timer at the first physical implementation of a Turing machine?

The iPhone's killer UX + App Store can be directly tied to the growth in tech in the years following its release.


I think it would have happened regardless - late Symbian from Nokia was pretty close and Maemo was already a thing with N900 not that far off in the future, not to mention Android.

We might actually have been better off, with Apple's walled-garden abominations and user device lockdowns not being dragged into the mainstream.


As someone who worked for Nokia around the iPhone launch (on map search, not phones directly) - I also wanted to believe this at the time. But in retrospect, it feels like what actually mattered was that capacitive multi-touch screens were the only non-garbage interface, and only Apple bought FingerWorks...

Not clear that this is a helpful interpretation, other than "we're in the primordial ooze stage and the thing that matters will be something none of the current players have", but that's hard to take to the bank :-)


This is actually an old syndrome with technology: it takes a long time for the effect to be reliably measured. Famously, it took many years for the internet itself to show up in significant productivity gains ("if the internet is actually useful, why don't the numbers show it?" was a common question in the 1990s and 2000s). So it seems to me we're just seeing the usual dynamic here. Productivity in trillion-dollar economies does not turn on a dime.


>Famously, it took many years for the internet itself to show up in significant productivity gains

Yeah, but the actual productivity gains that the internet and software tools introduced have had diminishing returns after a while.

Like, are people more productive today when they use Outlook and Slack than they were 20 years ago using IBM Lotus Notes and IBM Sametime? I'm not. Are people more productive with the Excel of today than with Excel 2003/2007? I'm not. Are Windows 11 and macOS Tahoe making people more productive than Windows 7 and Snow Leopard? Not me. Are the IDEs of today offering that much more of a productivity boost than Visual Studio, CodeWarrior, and Borland Delphi did back in the day? I don't think so.

To me it seems that, at least on the productivity side, we've mostly been reinventing the wheel "but in Rust/Electron" for the last 15 or so years. The biggest productivity gains came IMHO from increased compute power due to semiconductor advancement, so the same tasks finish faster today than they did 20 years ago; it's not that the software or the internet got that much more capable since then.


I think the biggest productivity improvements in software development over the last ~20 years came from open source (NPM install X / pip install Y save so much time constantly reinventing wheels) and automated tests.
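To make that concrete, here's a minimal sketch (the endpoint and package names are just illustrative; `requests` and `pytest` stand in for any mature open-source libraries): one pip install replaces a pile of hand-rolled HTTP plumbing, and a second one gives you an automated test runner for free.

    # One pip-installed library instead of hand-rolled HTTP plumbing:
    import requests  # pip install requests

    def fetch_todo(todo_id: int) -> dict:
        resp = requests.get(
            f"https://jsonplaceholder.typicode.com/todos/{todo_id}",
            timeout=10,
        )
        resp.raise_for_status()  # fail loudly on 4xx/5xx
        return resp.json()

    # An automated test; save as test_todos.py and run with `pytest`
    # (pip install pytest):
    def test_fetch_todo():
        assert fetch_todo(1)["id"] == 1

The point isn't this particular endpoint; it's that neither the HTTP stack nor the test runner had to be written from scratch.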


True, FOSS changed the game a lot, at least in web development.


I've got people in my social network who firmly believe that every car is, in fact, "driven by Indonesians". Apparently a widespread belief.

I've pointed out that these vehicles are quickly becoming more prevalent, here and (especially) in China. To which the counter is that there are plenty of Indonesians to go around.


After the stunt Amazon pulled off with its shop, being skeptical is warranted.

I know Google and Amazon aren't the same company, but their incentives are.


My goodness. Please introduce me to this "plenty of people". I'm in the field, and none of them work with me.

But I can tell you that statistics and parametrized functions have absolutely nothing to do with it. You're way out of your depth, my friend.


Yes, yes, no one understands how anything works. Calculus is magic, derivatives are pixie dust, gradient descent is some kind of alien technology. It's amazing hairless apes have managed to get this far w/ automated boolean algebra handed to us from our long forgotten godly ancestors, so on & so forth.


Same demographic, same experience. AI has been incredibly liberating for me. I get all sorts of things done now that were previously impossible for all practical purposes. Among other things, it cuts through the noise of all the layers of detail and allows me to focus on ideas, design, and just getting stuff built asap.

I also don't get all the hand-wringing. AI is an amazing tool. Use it and be happy.

Even less do I get all the cope about it not being effective, or even useless at some level. When I read posts such as that, it feels like a different planet. Just not my experience at all.


Adding to their sins, many of those airports are in "historically black neighborhoods", you know!


Whenever someone tells me that AI is worthless, does nothing, is scam/slop, etc., I ask them about their own AI usage and their general knowledge of what's going on.

Invariably they've never used AI, or at most very rarely. (If they had used AI beyond that, it would be an admission that it was useful at some level.)

Therefore it's reasonable to assume that you are in that boat. Now that might not be true in your case, who knows, but it's definitely true on average.


It's not worthless; it's just not world-changing as-is, even in the fields where it's most useful, like programming. If the trajectory changes and we reach AGI then this changes too, but right now it's just a way to:

- fart out demos that you don't plan on maintaining, or want to use as a starting place

- generate first-draft unit tests/documentation

- generate boilerplate without too much functionality

- refactor in a very well covered codebase

It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, it makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet; telling it not to do something does not guarantee that it won't do it again), and as the developer submitting LLM-generated code for review you need to review it closely before even putting it up (unless you feel like offloading this to your team), to the point that it's not much faster than having written it yourself.

