He means that if the problem becomes known, the AI companies will hack in a workaround rather than solve it by making the model more intelligent. Given that they have been caught cheating that way in the past, I can't blame the GP for not sharing his tests.