"Cowboy thing" is putting it mildly. It invites/incentivises terrible behavioral patterns. The next guy looking has no idea what happened to that running system. (That next guy may well be you yourself a week or month later.)
> [...] who thought it was okay to yell at people about [...]
That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody does this to anyone, the only appropriate response is to walk and never come back. Nobody would accept this kind of crap from family and friends, so why is it okay in a professional setting? Because of the money/power dynamics at play? We need a consensus in society to walk; that would end it in no time.
To me that's the opposite. Whatever an LLM gives me, I view with skepticism. If I google something, I quickly get a sense of how much I can trust it and what the BS factor is. I can refine my view in either case, but my a priori trust in an LLM is much lower.
Maybe we just need to work on training the general population to have a similar bias. (It will be harder than it sounds. Unbelievable amounts of capital are being bet on this not happening.)
In a discussion with my father-in-law about whether ChatGPT was trained on copyrighted materials, he literally asked ChatGPT and treated its response that it wasn't as useful evidence. He went to MIT and is arguably more educated than most people will ever be, so it's hard for me to be optimistic that just explaining this to people better will move the needle significantly.
Just because you are impressed by the capabilities of some tech (and rightfully so), doesn't mean it's intelligent.
The first time I realized what recursion can do (like solving the Towers of Hanoi in a few lines of code), I thought it was magic. But that doesn't make it "emergence of a new kind of intelligence".
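For anyone who hasn't seen it, here's the whole "magic" in a few lines of Python (a minimal sketch; the peg names are arbitrary):

```python
def hanoi(n, src="A", dst="C", aux="B"):
    """Print the moves that transfer n disks from src to dst."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)              # park n-1 disks on the spare peg
    print(f"move disk {n}: {src} -> {dst}")  # move the largest disk
    hanoi(n - 1, aux, dst, src)              # stack the n-1 disks back on top

hanoi(3)  # prints the optimal 2**3 - 1 = 7 moves
```

Three lines of real logic producing exponentially many moves: impressive, but nobody calls it intelligent.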
A recent one is the RCA of a hang during PostgreSQL installation caused by an unimplemented syscall (I work at a lab that deals with secure OSes and sandboxes). If the search for the root cause had been left to me, I would have spent 2-3 weeks sifting through the shared-memory implementation within PostgreSQL, but it only took me a night with the help of Opus 4.5.
To me, that's intelligence and a measurable direct benefit of the tool.
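For flavor, here's roughly what that failure mode looks like (an illustrative sketch assuming Linux with glibc, not the actual case from our lab): a normal kernel answers an unimplemented syscall with ENOSYS, so the caller can fail fast or fall back; a sandbox that blocks the call without answering is what turns it into a hang.

```python
import ctypes
import errno

# Invoke a syscall number that doesn't exist (9999 is unused on Linux).
libc = ctypes.CDLL(None, use_errno=True)
ret = libc.syscall(9999)

if ret == -1 and ctypes.get_errno() == errno.ENOSYS:
    print("kernel said ENOSYS -- the program can detect this and fall back")
# A sandbox that neither implements the call nor returns ENOSYS
# (e.g. it just blocks the calling thread) produces a silent hang instead.
```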
By that example, PostgreSQL itself is a form of intelligence relative to a physical filing system. It doesn't seem like your working definition of intelligence has a large overlap with a layman's conception of the word.
Plus by that example, computers have always been intelligent considering that they were created to, well, compute things several orders of magnitude faster than even the smartest human can do by hand.
The argument I and others here are making is that what you call "intelligent" is a property that other tools also exhibit, tools which are rarely called "intelligent". You can certainly call it that, but that does not prove us wrong (and it also doesn't fit what most people would consider "intelligence", as fuzzy as that concept might be).
I use a compiler daily. It consumes C++ source files and emits machine code within seconds. Doing that myself would take months.
I just did my taxes using a sophisticated spreadsheet. Once the input is filled in, it takes the blink of an eye to produce all the values that I need to submit to the tax office, which would take me weeks if I had to do it by hand.
Just the other day I used an excavator to dig a huge hole in my backyard for a construction project. Took 3 hours. Doing it by hand would have taken weeks.
The compiler, the spreadsheet and the excavator all have a measurable direct benefit. I wouldn't call any of them "intelligent".
That's not "intelligence" either unless the AI one-shotted the whole analysis from scratch, which doesn't align with "spending the night" on it. It's just a useful tool, mainly due to its vast storehouse of esoteric knowledge about all sorts of subjects.
Likewise - I think sometimes we ascribe a mythical aura to the concept of “intelligence” because we don’t fully understand it. We should limit that aura to the concept of sentience, because if you can’t call something that can solve complex mathematical and programming problems (amongst many other things) intelligent, the word feels a bit useless.
It's even conceivable that 2 gets worse with AI: the AI does the proof for them, very convolutedly so, and as long as the proof checker eats it, it goes through. Then comes the day when the complexity goes beyond what the AI assistant can handle and it gives up. At that point, the proof code's complexity will long since have passed the threshold of being comprehensible to any human, and no progress is possible. Hard stop.
Using a proof language with an SMT solver is basically that: an inexplicable tick saying it's fine, until a small change is needed, the tick is gone, and nothing can tell you why.
That's basically what sledgehammer (mentioned in the article) boils down to. The Lean folks use some safeguards to avoid issues with that, such as only using their "grind" at the end of a proof, where all the "building blocks" have been added to context.
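Roughly, the pattern looks like this (a minimal sketch, assuming a recent Lean 4 toolchain that ships `grind`; the example lemma is mine, purely illustrative):

```lean
example (a b : Nat) (h : a ≤ b) : a ≤ b + 1 := by
  -- add the building blocks to the context explicitly first...
  have hb : b ≤ b + 1 := Nat.le_succ b
  -- ...and only ask the automation to close the final gap
  grind
```

The point is that when the automation breaks, the structured steps above it still tell a human what the proof was doing.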
Then why replace one imprecise term with another? Fiber is a carbohydrate. Humans extract close to none of its energy. (Though it plays another important role in the digestive system.)
Try eating 100g of grass per hour during a marathon and you will see. That's the metabolic edge horses have over humans.
Horses don't eat during races (and aren't evolutionarily disposed to marathons, anyway). No edge there; it takes quite a while for their symbiotic gut flora to downconvert fodder to glucose.
In hindsight, it's easy to be smart. You picked two examples where somebody said "never gonna happen" and then it happened. How about the countless examples where somebody said the same and then the thing actually didn't happen?
Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say "see, told you so". That doesn't mean my prediction was a bad bet. It means you have a reporting bias.
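Back-of-the-envelope, with assumed numbers (10 million players, 1-in-10-million odds per ticket; nothing here is from real lottery data):

```python
import random

PLAYERS = 10_000_000
P_WIN = 1 / 10_000_000

# Simulate the draw: I predict "you won't win" for every single player.
wins = sum(random.random() < P_WIN for _ in range(PLAYERS))
correct = PLAYERS - wins  # every non-winner is a correct prediction

print(f"'you won't win' was right for {correct:,} of {PLAYERS:,} players")
print(f"accuracy: {correct / PLAYERS:.7%}")  # typically ~99.99999%
```

One winner gloating doesn't change the accuracy of the strategy.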
GP also probably had a sampling bias: the ones who were actually concerned about the impending Russian invasion presumably fled the country (or at least moved away from the major cities to rural areas likely to see less fighting).
I was in a neighboring country in Europe at the time, not Ukraine, but we didn't see any Ukrainians move into our area until a few weeks after the war started.
That's not to say the country wasn't prepared, though. If the GP did talk to people on the ground days before it started, being told it wouldn't happen would match the public propaganda coming out of the Ukrainian government and its allies at the time. They knew it was coming and seemed to decide it was better to feign that they weren't ready and avoid public panic before it started.
It "exhaustive brute forcing" approach does not need an LLM in the loop. Just brute force the possible outputs instead. They will contain all the most beautiful novels you can imagine!