> “I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer but nothing else (keep in mind we are assuming that it's a good answer). You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.
All you're saying is that you can't imagine working on a task that is longer than 1 Google Search.
If "I'm feeling lucky" works by magic, that doesn't mean your life is free of all searching, it just means you get the answer to each Google Search in fewer steps, which means the overall complexity of tasks that you can handle goes up. That's good!
It doesn't mean you miss out on the journey of learning and being confused, it just means you're learning and being confused about more complicated things.
There is definitely bloat. A few months ago I was messing about with making a QWERTY piano in a web page, and it was utterly unplayable due to the bloat-induced latency between the fingers and the ears.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
I don't think the comparison (that's often made) between AI and compilers is valid though.
A compiler is deterministic. It's a function; it transforms input into output and validates it in the process. If the input is incorrect it simply throws an error.
AI doesn't validate anything, and transforms a vague input into a vague output, in a non-deterministic way.
A compiler can be declared bug-free, at least in theory.
But it doesn't mean anything to say that the chain 'prompt-LLM-code' is or isn't "correct". It's undecidable.
Actually, it isn't that different. Compilers are trash. They produce hilariously bloated and stupid code, even the C++ compilers, to say nothing of your average JIT compiler.
However, in practice we don't care because it's good enough for 99% of the code. Sure, it could be like 5x better at least but who cares, our computers are fast enough™.
AI is the same. Is it as good as the best human output? Definitely not. Does it do the job most of the time? Yes, and that's what people care about.
(But yes, for high-impact work - there are many people who know how to read x64 asm or PTX/SASS and they do insane stuff.)
Not usually, they aren't. They can be made to be, but it requires extra effort and tradeoffs. Hence why there is a lot of work put into reproducible builds — something you would get for free if compilers were actually always deterministic.
Unless you are taking a wider view and recognizing that, fundamentally, nothing running on a computer can be nondeterministic, which is definitely true.
> People always learn the things they need to learn.
No, they don't. Which is why a huge percentage of people are functionally illiterate at the moment, know nothing about finance and statistics, and are making horrendous decisions for their future and their bottom line, and so on.
There is also such a thing as technical knowledge loss between generations.
My impression of mistakes was that they were an indicator of someone who was doing a lot of work. They're not necessarily making mistakes at a higher rate per unit of work, they just do more of both per unit of time.
From that perspective, it makes sense that the people who made the most mistakes in the past will also make the most mistakes in the future, but it's only because the people who did the most work in the past will do the most work in the future.
If you fire everyone who makes mistakes you'll be left only with the people who never make anything at all.
In this case it was trivial to normalize for work done.
It’s very human to want to be forgiving of mistakes; after all, who hasn’t made any? But there are different classes of mistakes, made by different types of people. Making one mistake doesn’t change what type of person you are, but if you sample from the pool of people who have already made a given kind of mistake, you are biasing your sample toward those prone to making it. In my experience, any learning effect is much smaller than this initial bias.
> If anything their fast reduction in value makes them less attractive.
Right. And if you buy a secondhand one you are increasing their value on the secondhand market. Reducing the depreciation increases the value of the brand new phone.
No it wasn't. That's the exact point I'm refuting.
If you don't think voting with your wallet works, then that is a position you can take. But you can't think it works when buying from the OEM but doesn't work when buying on the secondary market.
Sure you can, because you're talking about different inputs in your supply and demand scenario. You're also talking about different opportunity costs for the OEM, different incentives, and different outcomes. You're also assuming the person selling their Pixel is buying another Pixel, and not switching to a device made by a different OEM.
And ultimately, if buying on the secondary market happens in such small numbers that it doesn't move the market, then it adequately addresses the concern.
Edit: I'm not saying there's zero effect of it, but it's likely statistically insignificant.
Create a vibe-coded demo -> showcase it with a faked/overblown video and an "It's not X, it's Y. Read thread!!11!" type engagement bait -> get LLM bots all riled up and excited in the comments to fake hype -> sell a course/LLM wrapper.
We do sort of have that with the capabilities stuff (although I admit hardly anyone knows how to use it).
But the tricky part is that "reading files" is done all the time in ways you might not think of as "reading files". For example loading dynamic libraries involves reading files. Making network connections involves reading files (resolv.conf, hosts). Formatting text for a specific locale involves reading files. Working out the timezone involves reading files.
Even just echoing "hello" to the terminal involves reading files:
Capabilities are craaaazy coarse on Linux. Really only a small piece of the sandboxing puzzle. Flatpak, Bubblewrap, and Firejail each give a much fuller picture of what sandboxing can be.
He's counting out like 6 at a time. He needs a fast way to pick small quantities precisely, not a fast way to check large quantities. Once they're picked they're easily verified by eye.