
A challenge: can you write down a definition of thinking that supports this claim? And then, how is that definition different from what someone who wasn't explicitly trying to exclude LLM-based AI might give?


It’s a philosophical question, and I personally have very little interest in philosophizing. LLMs are technically limited to what is in their training dataset.



