Hacker News
AGI is an unscientific myth (tandfonline.com)
4 points by mustaphah 8 days ago | 2 comments



The strongest point here is one that rarely gets enough attention: the leap from "very smart" to "all-powerful" is completely unjustified. Even if you grant every assumption about alignment failures and emergent goals, you still need to explain how a neural network acquires physical resources, energy, supply chains, and weapons. Nobody ever does. It's just assumed that intelligence = omnipotence, which is basically theology.

Where I think the paper goes wrong is in treating the whole alignment problem as anthropomorphism. You don't need a machine to be "alive" or "want" things for misaligned optimization to be dangerous. A system relentlessly optimizing for a bad proxy metric can do real damage without any consciousness whatsoever — we already see this with recommendation algorithms. The paper waves this away by saying we caught the lab examples, but that's the whole point: we caught the easy ones.
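The proxy-metric point can be made concrete with a minimal sketch (hypothetical item names and scores, purely illustrative): a recommender that greedily ranks by a proxy (predicted clicks) will happily surface items whose long-term value is negative, with no "wants" involved anywhere.

```python
# Illustrative toy: optimizing a proxy (predicted clicks) diverges
# from the true objective (long-term user value). All numbers invented.
items = [
    # (name, predicted_clicks, long_term_value)
    ("outrage-bait", 0.9, -0.5),
    ("celebrity-gossip", 0.7, 0.0),
    ("deep-dive-article", 0.4, 0.8),
    ("tutorial", 0.3, 0.9),
]

def recommend(items, k):
    """Rank purely by the proxy metric and take the top k."""
    return sorted(items, key=lambda it: it[1], reverse=True)[:k]

feed = recommend(items, 2)
print([name for name, _, _ in feed])        # the clickiest items win
print(sum(v for _, _, v in feed))           # their summed true value is negative
```

The "fix" isn't giving the system consciousness or taking it away; it's that the objective being optimized is not the objective we care about.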

The governance framing at the end is correct though and I wish it got more airtime. Regulating "AI" as one thing makes about as much sense as regulating "software" as one thing.


Computers don't have to be all-powerful. They only have to strongly influence enough humans past some tipping point.

When you see the two words "Trust me," pull the plug before it pulls yours.



