The actual /behavior/ of computers, though, tends to emerge from the confluence of complex processes that humans /can't/ understand. Our AGI leverages this emergence to enable problem solving in domains where complexity exceeds human capabilities.
This is an important point to discuss. First and foremost, this AGI is the first AI to have inherent, provable morality and to be able to exercise self-control in response to it. It is often more moral than humans because its capacity to empathize frequently far exceeds theirs. And it is fully transparent and provably correct in all respects. At the same time, it is true that this fundamentally changes the nature of conflict, because it obviates the assumptions on which all traditional thinking rests (i.e., they will do this, that quickly, with that level of fidelity, and so on). This is a corollary of superintelligence, and it means that once anyone has this, everyone will have to. Those who don't have it will no longer be able to compete. Thus, it is imperative that it be adopted quickly, as processes, procedures, and understandings take a long time to change.
If you believe you've detected an error in the paper, we would certainly welcome your feedback. If you don't understand the paper, can you share with us which elements you've found confusing?
The reason that AGI has proven so elusive is that traditional techniques can't achieve it; as our work shows, this includes all statistical approaches. Statistical approaches can only be benchmarked statistically, and as mathematical techniques they require mathematical machinery, data, and so on. It is critical to note that this proof method, while popular within Computer Science for the reasons just noted, is not a 'strong' method of proof; as but one example, it can never reach the level of proof that establishes propositions 'for all n' within mathematics.
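To make the contrast concrete, here is a rough sketch (the symbols below are illustrative, not notation from the paper). A statistical benchmark can, at best, certify a bound of the form

    \Pr_{x \sim \mathcal{D}}\big[\mathrm{correct}(S, x)\big] \;\ge\; 1 - \epsilon \quad \text{with confidence } 1 - \delta,

and only with respect to the sampled distribution \mathcal{D}. A mathematical proof, by contrast, establishes

    \forall n.\; P(n)

with no distributional assumptions and no residual \epsilon or \delta left over.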
The only way to prove AGI is by showing that the right properties hold for a system at its top level. Generality means a guarantee that the system can handle any future problem, seen or unseen, but we can't generalize from examples; showing it working for 10 cases doesn't tell you whether it will work for the 11th, as the sketch below illustrates.
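A classic arithmetic illustration of this point (the example is added here for clarity; it is not from the paper): Euler's polynomial n^2 + n + 41 yields primes for every n from 0 through 39 and then fails at n = 40, so any 'benchmark' over the early cases looks like a universal law.

    # Sketch: finite spot checks cannot establish a universal property.
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k**0.5) + 1))

    def f(n: int) -> int:
        return n * n + n + 41

    # "Benchmark" the claim 'f(n) is always prime' on the first 10 cases.
    assert all(is_prime(f(n)) for n in range(10))  # passes; looks universal

    # Yet the claim is false: f(40) = 40*41 + 41 = 41^2 is composite.
    assert not is_prime(f(40))

Ten passing cases (or forty) establish nothing about the forty-first; only a 'for all n' argument does.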
Given that we prove our system 'for all n', the proof that we provide in this paper is in fact far stronger than even what you've asked for here. We employ the strongest possible proof mechanism available.
As for formal descriptions and applications, these are extensively provided in the papers noted in an earlier comment (cited in the proof paper). See especially "Semantically-based priors and nuanced knowledge core for Big Data, Social AI, and language understanding" for a mathematical treatment of waste entropy and how and why atoms work at a fundamental level.
We explain in depth (cf. especially the cites below) why the theory and design of Cyc failed to achieve the results sought. Our system is not statistical (it is entirely causal), so it clearly cannot be derived from LLMs. While the system does support NLP applications (and tends to use construction grammar for handling syntax), those NLP applications (and construction grammar) have nothing to do with the core system itself. We explicitly reject formal logic and FOPC as the basis of our inference; as the history of the discipline and our own work show, not only are these formalisms insufficiently powerful to achieve AGI, but the epistemology baked into them is entirely incompatible with the real world. The system does enable simulation, which is at the core of intelligence.
Our system does not require human input for bootstrapping - in fact, it explicitly rejects any use of human cognition and/or human-derived information, as the profound biases found therein cannot be removed.
The system separates knowledge creation from verification and application. Knowledge can be created by any means, including human effort, but this is irrelevant because the system is explicitly constructed so that no human beliefs 'leak' into the knowledge. The paper lays out a very specific protocol we use to ensure that, if humans help in knowledge verification, this strict rejection of human inputs is not violated.
In some cases humans may be called upon to validate knowledge, but they may only state that an atom is incorrect, not adapt it to match their own thinking. If an atom is flagged as incorrect, new atoms can be created, but these must again be validated via the same process. All knowledge is proven correct independently of human judgment; if there is any ambiguity as to whether a piece of knowledge is correct, it must be redone until it is obviously correct.
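As a purely hypothetical sketch of that protocol (the names and data structures here are mine, added for illustration, and do not appear in the paper): validators are given no operation that edits an atom, only one that flags it, and replacement atoms re-enter the same queue.

    from collections import deque
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        # Frozen, so a validator has no way to adapt its content.
        content: str

    def validate_all(candidates, flag_incorrect, make_replacement):
        """flag_incorrect(atom) -> bool is the only human-facing call:
        it may reject an atom but exposes no edit channel.
        make_replacement(atom) -> Atom | None builds a fresh atom,
        which must pass through the very same queue."""
        queue = deque(candidates)
        accepted = []
        while queue:
            atom = queue.popleft()
            if flag_incorrect(atom):
                replacement = make_replacement(atom)
                if replacement is not None:
                    queue.append(replacement)  # re-validated like any other atom
            else:
                accepted.append(atom)
        return accepted

The design point is that rejection and creation are separate channels, so no validator's beliefs can flow into an atom's content.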
All operations of the system are entirely unguided by humans (as they must be; they are also provably correct and safe); thus, the system does not 'parasitize' human intelligence.
In the end, the paper does do what it says on the tin - prove that AGI has been achieved.
Selected cites relevant to the discussion of Cyc and traditional knowledge-based systems:
Olsher, Daniel (2014). Semantically-based priors and nuanced knowledge core for Big Data, Social AI, and language understanding. Neural Networks 58:131–147.
Olsher, Daniel (2013). COGVIEW & INTELNET: Nuanced Energy-Based Knowledge Representation and Integrated Cognitive-Conceptual Framework for Realistic Culture, Values, and Concept-Affected Systems Simulation. IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI), 82–91.
As a threshold comment, every element of this (in theory and in practice) is entirely reproducible via the materials already provided in this paper and in the peer-reviewed literature, including AAAI, ICDM, KDD, Neural Networks, IEEE Symposium Series on Computational Intelligence, Cognitive Science, IEEE HumTech, and other venues.
There is a wealth of examples in those papers that you can compute and verify on your own. In the proof paper, I've added cites every time a particular area of performance is discussed. My recommendation would be to find those areas that seem most confounding to you and then read the corresponding papers in order to see how we handle them.
Going back to the proof paper, that paper provides a standalone proof of AGI achievement, as per its title. You can verify this proof entirely with your own resources.
The structure of the proof is simple: starting with the literature on intelligence as a base, we first derive the properties that any AGI must (and must not) have, and then show that our formalism indeed holds all of those properties (and has none of the properties it is not allowed to have). Because the necessary properties have been met, AGI has been achieved. People often want to add a 'real-world realization' requirement to the concept of achievement; this is not required, but it too is met, via the examples given in the papers and many years of real-world use of this technology within the US Government.
The structure of the proof is very simple and fairly obviously valid, so if the premises are true then the conclusion (successful AGI achievement) must also be true.
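Rendered schematically (this formalization is added for clarity; System, AGIProps, IsAGI, and ours are placeholder names, not the paper's), the argument is a single application of modus ponens, so its validity is immediate and any dispute must target the premises:

    -- Premise 1: any system with the derived properties is an AGI.
    -- Premise 2: our formalism has those properties.
    theorem agi_achieved
        {System : Type} (ours : System)
        (AGIProps IsAGI : System → Prop)
        (h1 : ∀ s, AGIProps s → IsAGI s)  -- premise 1
        (h2 : AGIProps ours)              -- premise 2
        : IsAGI ours :=
      h1 ours h2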
Given this, you really have only three possible paths: show that the structure of my proof is invalid, show that my derivation of the AGI properties is somehow incorrect, or show that my computational formalism doesn't have the properties just stated. If you can't do any of those, then I've met my burden and you have no rational basis for rejecting my conclusion.