We have fully reusable rockets that let us put people in orbit at less than $200k per person (SpaceX reusable Falcon 9 / Dragon, likely to come to fruition within the next 5 to 15 years or so).
We have handheld computers and smartphones with the CPU power of an entire server rack today, and with petabytes of non-volatile storage that's nearly as fast as RAM (memristors, possibly RSFQ logic, possibly on a circa 20-year timeline).
We have fully automated factories (the grandchildren of today's CNC machines and 3D printers): you upload some data, press a few buttons, and not long after a fully assembled complex product (a tablet computer, an automobile, an excavator, a spaceship) comes out the other end, as many as you want. Then you start using such factories to replicate themselves. That's probably going to happen within the next 50 years, if not sooner.
And this is hardly everything. The future is a crazy place.
> The singularity specifically requires AIs that can build smarter AIs.
Not really. The singularity requires an intelligence explosion. A perfectly acceptable alternative route is human-intelligence augmentation, via some or all of biological hacking (genetic or otherwise), chemical hacking, and tool use. The first two are plausible but not yet off the ground; as for tool-based augmentation, I think we're seeing meaningful, if preliminary, progress. (Then again, I think literacy counts in this bin.)
I think the key question is whether you think the self-amplifying returns of technology that we're already seeing will extend sustainably to amplifying intelligence (either human or artificial).
"specifically requires AIs that can build smarter AIs"
The trendy "rapture of the nerds" version of the singularity does.
The original definition is technology advancing faster than humans are capable of integrating and managing it. The concept emerged from attempts to identify existential threats to humanity.
It's fascinating watching a warning about future dangers get repackaged by AI-focused futurists into a utopian neo-religion.
Well, specifically it refers to a period of technological advancement so rapid that we entirely lose the ability to make predictions about the other side. (This is by analogy with the mathematical definition of a singularity, a point at which your equations break down, by way of the physics application of the concept to black holes, out of which information cannot flow.)
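To make the analogy concrete, here's a toy model (my own illustration, not anything from Vinge or Ulam): suppose capability x grows at a rate proportional to its own square,

    dx/dt = k * x^2,   which has the solution   x(t) = x_0 / (1 - k * x_0 * t)

The solution diverges at the finite time t* = 1 / (k * x_0), and the equation describes nothing past that point. That's the sense in which predictions "break down" at a singularity.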
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
- I. J. Good, 1965
"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
- Stanisław Ulam, 1958 (referring to a conversation with John von Neumann)
Good's "intelligence explosion" concept is the basis of the modern singularity religion but not the origin of the term 'Singularity' used to mean a technological singularity. Good never uses the term.
So your quote makes my point, doesn't it? That a utopian version of the intelligence explosion has replaced the original meaning of the technological singularity. One that favours the views of the AI researchers who went into futurism when the research money dried up.
Edit: I didn't know this, but when looking up the date of that Ulam quote I learned that Vernor Vinge seems to be the first person to use the term singularity in the context of Good's intelligence explosion. In 1983. In the pages of that wonderful bastion of great sci-fi and pseudoscience that I loved as a kid: Omni Magazine.
However, I am not convinced the three schools are entirely contradictory, even with their strong claims.
For example,
- Accelerating Change school: this is mainly a (historical) trajectory claim based on a positive feedback loop of technology. It does not seem to make any strong (maximum) claims even about trajectories beyond the point of strong AI.
- Event Horizon school: this is a forecastability claim that the repercussions of AI or intelligence enhancement significantly beyond current human intelligence are unknowable until you get there.
- Intelligence Explosion school: this is a positive feedback claim that the most you can tell when human intelligence is surpassed is that further intelligence gains, be they by enhancement or AI, will feed back on themselves to create further and faster change.
These claims seem to agree with each other on fundamental aspects of technology and intelligence.
In addition, in my opinion, it would not take too much imagination to combine the separate claims into a unified Singularity model.
That said, the school delineation is itself a useful tool.
Agreed. However, I don't think we really want AI. We want IA (Intelligence Augmentation). AI leads to slavery or at best pet-ness. IA leads to godhood (maybe?)
The slavery-inducing AI is more like Pandora's box. Inevitably someone will try to create and harness such tech for a competitive advantage, but they will have created something they cannot control...
That's a Foom, or hard takeoff, singularity. A soft takeoff singularity can be achieved via emulated minds, superhuman but non-recursively-self-improving de novo AI, etc.
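To make the hard/soft distinction concrete, here's a toy simulation (a cartoon of my own devising with made-up constants, not anyone's actual model): in the hard case the system's own capability sets its improvement rate, so gains compound toward a finite-time blowup; in the soft case improvement arrives at a fixed proportional rate from outside, so growth stays merely exponential.

    # Cartoon takeoff dynamics; units and constants are arbitrary.
    def soft_step(c, k=0.05):
        # externally driven improvement: rate independent of the system's smarts
        return c * (1 + k)            # ordinary exponential growth

    def hard_step(c, k=0.05):
        # recursive self-improvement: the system sets its own rate
        return c + k * c * c          # discrete dx/dt ~ x^2: finite-time blowup

    soft = hard = 1.0
    for year in range(1, 26):
        soft, hard = soft_step(soft), hard_step(hard)
        if year % 5 == 0:
            print(f"year {year:2d}: soft {soft:10.2f}   hard {hard:10.2f}")

The point isn't the numbers, just the shapes: the soft curve is fast but forecastable, while the hard curve runs away once the feedback term dominates.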