Shane Legg isn't worried about "evil" AI, but rather about an AGI/SI that simultaneously (a) has the ability to achieve complex goals in complex environments, and (b) is not perfectly aligned with its operators' goals [1].
We can also make a strong guess about some aspects of how sufficiently advanced cognition looks from the outside, regardless of its internal implementation, because of coherence theorems [2][3].
(cf. https://intelligence.org/files/IEM.pdf for some of the arguments under discussion)
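To make the coherence-theorem point concrete, here is a minimal money-pump sketch (my own illustration, not taken from the cited papers; the names `UPGRADE`, `FEE`, and `run_money_pump` are hypothetical). An agent with cyclic preferences A > B > C > A will pay a small fee for each "upgrade" trade, so a trader can cycle it back to its starting position at a pure loss; coherence theorems say that any agent not exploitable this way behaves, from the outside, as if it maximizes expected utility.

```python
# Money-pump sketch: an agent whose preferences cycle
# A > B > C > A pays a small fee for every "upgrade",
# so a trader can cycle it forever and drain its wealth.
# Illustrative toy code, not from the cited sources.

FEE = 1.0

# For each item the agent currently holds, the item it strictly
# prefers (the cycle A > B > C > A read as an upgrade map).
UPGRADE = {"B": "A", "C": "B", "A": "C"}

def run_money_pump(item: str, wealth: float, rounds: int) -> float:
    """Repeatedly offer the agent its preferred item for a fee."""
    for _ in range(rounds):
        item = UPGRADE[item]   # agent accepts: new item is strictly preferred
        wealth -= FEE          # ...but it pays for every swap
    return wealth

if __name__ == "__main__":
    # After 3 rounds the agent holds its original item, minus 3 fees:
    # strictly worse off by its own lights. Coherence theorems say a
    # sufficiently advanced agent won't be exploitable like this.
    print(run_money_pump("A", wealth=10.0, rounds=3))  # -> 7.0
```

The point of the toy is the outside view: we can't predict the agent's internals, but any agent that doesn't leak value to this kind of trade sequence must look, behaviorally, like a coherent goal-pursuer.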