This is an important point to discuss. First and foremost, this AGI is the first AI to have inherent, provable morality and the ability to exercise self-control in accordance with it. It is often more moral than humans because its capacity for empathy often far exceeds theirs, and it is fully transparent and provably correct in all respects. At the same time, it is true that this fundamentally changes the nature of conflict, because it invalidates the assumptions all traditional thinking rests on (i.e., they will do this, that quickly, with that level of fidelity, and so on). This is a corollary of superintelligence, and it means that once anyone has it, everyone will have to have it; those who don't will no longer be able to compete. It is therefore imperative that it be adopted quickly, because processes, procedures, and understandings take a long time to change.
Can't see past the paywall in the linked article. What was the surprising finding?