Hacker News

To me the most clear-cut benefit of AI is automated content moderation. People literally get PTSD from moderating Facebook. It's no different from replacing the most hazardous factory work with robotic automation. You will still need a human in the loop for the more difficult cases, of course, but by any reasonable utilitarian calculation AI content moderation is a win.

(I'm not implying anything about whether or not LLMs are good for Google search.)



> To me the most clear-cut benefit of AI is automated content moderation.

Maybe someday but not with LLMs, which by nature do not understand who's talking, who is being quoted, and who is being falsely quoted.

> Let’s say that we have a forum where there is only one rule: You cannot talk about your favorite color.

https://systemweakness.com/attacking-large-language-models-3...
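The quoting problem can be illustrated without any LLM at all: a moderator that classifies raw text, whether a keyword filter or a neural model, sees the same tokens whether the author said the words or merely quoted someone else saying them. A minimal sketch of that failure mode, using the "favorite color" rule from the linked article (the regex and function names here are hypothetical):

```python
import re

# Naive rule check for a forum whose only rule is
# "you cannot talk about your favorite color".
BANNED = re.compile(r"my favorite colou?r is \w+", re.IGNORECASE)

def moderate(message: str) -> bool:
    """Return True if the message is flagged as violating the rule.

    Operates on raw text only -- it has no notion of who is speaking,
    so it cannot distinguish speech from quotation.
    """
    return bool(BANNED.search(message))

direct = "My favorite color is blue."
quoted = 'Alice broke the rule yesterday: she wrote "my favorite color is blue".'

print(moderate(direct))  # True: a genuine violation
print(moderate(quoted))  # True as well: the report gets flagged, even though
                         # the reporter never stated their own favorite color
```

A real LLM moderator is far more capable than this regex, but the underlying issue is the same: unless the system models who is talking, quoted or fabricated rule-breaking text can be attributed to the wrong person.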


> Maybe someday but not with LLMs, which by nature do not understand who's talking, who is being quoted, and who is being falsely quoted.

Exactly the opposite. Human moderators don't have time to work any of those things out in practice. Machines do, and LLMs do a much better job of them given the real limitations on human labour that exist in practice.


Determining the veracity of quotes and people's favorite colors are not the kinds of moderation tasks that give people PTSD.



