The author’s central point is that an LLM answer “is optimized for arrival, not for becoming” (to paraphrase from the Google “Lucky” part).
So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test.
That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.