lyavin's comments

See also: Paul Christiano of OpenAI wrote on the same topic: https://www.lesserwrong.com/posts/AfGmsjGPXN97kNp57/argument...

(cf. https://intelligence.org/files/IEM.pdf for some of the arguments he is responding to)


Interestingly, he once called MIRI[1] (formerly SIAI) “the best hope that we have”[2].

[1] https://intelligence.org/

[2] http://www.vetta.org/2009/08/funding-safe-agi/


Shane Legg isn't worried about "evil" AI, but rather about an AGI/SI that simultaneously (a) has the ability to achieve complex goals in complex environments and (b) is not perfectly aligned with its operators' goals [1].

We can also make a strong guess about some aspects of how sufficiently advanced cognition looks from the outside, regardless of its internal implementation, because of coherence theorems [2][3] (a toy illustration follows the references below).

[1] http://econlog.econlib.org/archives/2016/03/so_far_unfriend....

[2] https://arbital.com/p/expected_utility_formalism/?l=7hh

[3] https://arbital.com/p/instrumental_convergence/
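
To make the coherence-theorem point in [2] a bit more concrete, here is a minimal money-pump sketch in Python: an agent with cyclic preferences (A > B > C > A) accepts every trade to an item it strictly prefers, paying a small fee each time, and after one full cycle it holds its original item, strictly poorer. The preference cycle, fee, and function name are illustrative assumptions of mine, not anything taken from the linked pages.

    # Money-pump sketch: a classic coherence-theorem intuition.
    # An agent with cyclic preferences (A > B > C > A) will trade
    # around the cycle forever, paying a fee for each "upgrade".

    FEE = 1.0  # hypothetical cost the agent pays per trade

    # Given the item held, the item the agent strictly prefers.
    prefers = {"B": "A", "C": "B", "A": "C"}  # A > B, B > C, C > A

    def run_money_pump(item, money, trades):
        """Repeatedly offer the agent the item it prefers, for a fee."""
        for _ in range(trades):
            item = prefers[item]  # agent accepts: strict preference
            money -= FEE          # ...and pays for the privilege
        return item, money

    item, money = run_money_pump("A", money=10.0, trades=3)
    print(item, money)  # -> A 7.0: same item held, three fees poorer

An agent whose preferences admit no such cycle behaves, from the outside, as if it were maximizing some utility function, which is the sense in which results like [2] let us predict aspects of advanced cognition without knowing its internals.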

