
9% false positives? That’s a troubling level of falsies.

The implications of using this tool are fun to think about though.

If it had a very low rate of false positives, even if it weren't very good at identifying AI text, it would still be very useful.

But false positive rates above very, very low levels will undermine any tool in this category.
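The base-rate arithmetic behind this can be sketched in a few lines. Only the 9% false positive rate comes from the thread; the 26% detection rate and the 10% base rate below are assumptions for illustration:

```python
def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of texts flagged 'AI-written' that actually are AI-written."""
    true_pos = tpr * base_rate            # AI texts correctly flagged
    false_pos = fpr * (1 - base_rate)     # human texts wrongly flagged
    return true_pos / (true_pos + false_pos)

# Assumed: the detector catches 26% of AI text, and 10% of submissions
# are AI-generated. With a 9% false positive rate:
p = precision(tpr=0.26, fpr=0.09, base_rate=0.10)
print(round(p, 2))  # ≈ 0.24: roughly three of four flagged texts are human
```

Under these assumptions, most accusations the tool produces would hit innocent writers, which is why even a modest false positive rate is fatal when the tool is used to sanction people.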



Yeah, it's useless currently and will quickly become more useless, because people will scramble AI-generated text, mix in human edits, and heavy users of AI generators will start to mimic their writing style. In short, the SNR will be abysmal outside of controlled environments.

I'm pretty sure the smart people at OpenAI know this. I think this is a PR move signaling that they are "doing something", looking concerned, yet insisting that everything is under control. In reality, nobody can predict the societal rift this will cause, so this corporate-playbook messaging is dishonest in spirit and muddies the waters. That's bad, both long term for OpenAI's trust and because muddy waters make it harder to have fruitful discussions about safeguards in commercial deployments of this tech.

That said, they're incorrectly getting blamed for controlling the use of this tech; they're no more than a prolific and representative champion of it. The cat is out of the bag, they absolutely cannot stop this train, and so they shouldn't be blamed for not trying.



