China leads the world in solar energy, by a wide margin. Yes, they have hedged their bets somewhat with coal, but you cannot claim with a straight face that China believes renewable energy is nonviable.
> Step 2: The AI bot executes arbitrary code. Claude interpreted the injected instruction as legitimate and ran npm install pointing to the attacker's fork - a typosquatted repository (glthub-actions/cline, note the 'l' in place of the 'i' in 'github'). The fork's package.json contained a preinstall script that fetched and executed a remote shell script.
Even leaving aside the security nightmare of giving an LLM unrestricted access to your repo, you'd think the bots would be GOOD at spotting small details like typosquatted names.
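This kind of check is mechanically trivial, which is what makes the miss so striking. A minimal sketch (my own illustration, not anything from the article) that flags names suspiciously close to, but not equal to, a trusted repo name using stdlib string similarity:

```python
from difflib import SequenceMatcher

def looks_like_typosquat(candidate: str, trusted: str, threshold: float = 0.9) -> bool:
    """Flag names that are very similar to, but not identical to, a trusted name."""
    if candidate == trusted:
        return False
    return SequenceMatcher(None, candidate, trusted).ratio() >= threshold

# One swapped character scores ~0.95 similarity and gets flagged:
print(looks_like_typosquat("glthub-actions/cline", "github-actions/cline"))  # True
print(looks_like_typosquat("github-actions/cline", "github-actions/cline"))  # False
```

The threshold is arbitrary; a real allowlist of exact trusted names would be simpler and stricter.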
According to another comment, the attack exploits GitHub's forking feature to point at a commit that appeared to live in `github-actions/cline` but actually resolved to the typosquatted repository.
Doesn't show the comparative energy waste of bitcoin?
This source[0] says
> One Bitcoin now requires 854,400 kilowatt-hours of electricity to produce. For comparison, the average U.S. home consumes about 10,500 kWh per year, according to the U.S. Energy Information Administration, April 2025, meaning that mining a single Bitcoin in 2026 uses as much electricity as 81.37 years of residential energy use.
(before someone comes at me, yes, humans can also lie about their inner state but we are [usually] aware of it. Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination)
But we at HN have also historically called your experience "anecdata" and taken it with a grain of salt. Don't take offense. Provide more data.
I humbly suggest that a more hacker response would be, "That's really interesting that my experience doesn't agree with that study. Let's figure out what's going on."
I linked you a paper from one of the leading AI shops in the world demonstrating that the "Chain of Thought" reported doesn't match up with the actual activation inside the model, and you replied that you're an expert on some human psych stuff that may or may not even be real[0].
Forgive me if I don't immediately bow to your expertise.
It has always been a hardware requirement to be able to unlock the device, install GrapheneOS and lock the device again. Verified boot has been a requirement since it was introduced for Pixels and is the main benefit of locking the device. There are additional security features enabled by verified boot. The overall hardware requirements are listed at https://grapheneos.org/faq#future-devices.
Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.
Rust is harder for the bot to get "wrong" in the sense of running-but-does-the-wrong-thing, but it's far less stable than Go and LLMs frequently output Rust that straight up doesn't compile.
LLMs outputting code that doesn't compile is the failure mode you want. Outputting wrong code that compiles is far worse.
Setting aside the problems of wrong-but-compiling code, wrong and non-compiling code is also much easier to deal with. For training an LLM, you have an objective fitness function to detect compilation errors.
For using an LLM, you can embed the LLM itself in a larger system that checks its output and either re-rolls on errors, or invokes something to fix the errors.
https://apnews.com/article/china-climate-solar-wind-carbon-e...