> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.
I feel like these extreme numbers are a pretty obvious clue that we’re talking about something that is completely imaginary. Like I could put “perpetual motion machine” into those sentences and the same logic holds.
The intuition is pretty spot on though. We don't need to get to AGI. Just making progress along the way to AGI can do plenty of damage.
1. AI-driven medical procedures: Healthcare Cost = $0.
2. Access to world-class education: Cost of education = $0.
3. Transportation: Cheap autonomous vehicles powered by solar.
4. Scientific research: AI will accelerate scientific progress by coming up with novel hypotheses and then testing them.
5. AI Law Enforcement: Will piece together all the evidence in a split second and come up with a fair judgement. Will prevent crime before it happens by analyzing body language, emotions etc.
I don't think that follows. Prices are set by market forces, not by cost (though cost is usually a hard floor).
Waymo rides cost within a few tens of cents of Uber and Lyft rides. Waymo doesn't have to pay a driver, so what's the deal? It costs a lot to build those cars and the software that runs them. But Waymo also doesn't want a flood of riders such that there's always zero availability: Uber and Lyft can at least try to recruit more drivers when demand goes up, but Waymo has to build, maintain, and operate more cars. So they set their prices close to what people pay for a similar (albeit human-driven) service.
I'm also reminded of Kindle books: the big promise way back when was that they'd be significantly cheaper than paperbacks. But if you look around today, Kindle prices are similar to paperback prices, sometimes even higher.
Sure, when costs go down, companies in competitive markets will lower prices in order to gain or maintain market share. But I'm not convinced that any of those things you mention will end up being competitive markets.
Just wanted to mention:
> AI Law Enforcement: Will piece together all the evidence in a split second and come up with a fair judgement. Will prevent crime before it happens by analyzing body language, emotions etc.
No thanks. Current law enforcement is filled with issues, but AI law enforcement sounds like a hellish dystopia. It's like Google's algorithms terminating your Google account... but instead you're in prison.
I guess the questions then are: why is it 2x the competing price, why do you willingly pay 2x, and how many people are willing to pay that 2x?
Consider they are competing against the Lyft/Uber asset-light model of relying on "contractors" who in many cases are incapable of doing the math to realize they are working for minimum wage...
Yeah, definitely no magical thinking here. Nothing is free. Computers cost money and energy. Infrastructure costs money and energy. Even if no human is in the loop (who says this is even desirable?), all of the things you mention require infrastructure, computers, and materials, meaning there's a cost. Also, the idea that "AI law enforcement" is somehow perfect just illustrates GP's point. Sure, if we define "AGI" as something which can do anything perfectly at no cost, then it has infinite value. But that's not a reasonable definition of AGI. And it's exactly the AI analogue of a perpetual motion machine.
If we can build robots with human-level intelligence then you could apply that to all of the costs you describe, with substantial savings. Even if such a robot cost $100k, that is still a one-time cost (plus maintenance, but that's a fraction of the full price), and long-term substantially cheaper than human workers.
So it’s not just the products that get cheaper, it’s the materials that go into the products that get cheaper too. Heck, what if the robots can build other robots? The cost of that would get cheaper too.
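A back-of-envelope sketch of that amortization argument. Every figure here is an illustrative assumption (only the $100k purchase price comes from the comment above), not a claim about real robot or labor costs:

```python
# Hedged cost comparison: one-time robot purchase plus maintenance
# vs. an ongoing human salary. All numbers are assumptions.
robot_price = 100_000        # one-time cost, per the comment above
annual_maintenance = 10_000  # assumed "fraction of the full price" per year
human_salary = 60_000        # assumed fully loaded annual cost of a worker

def cumulative_cost_robot(years: int) -> int:
    return robot_price + annual_maintenance * years

def cumulative_cost_human(years: int) -> int:
    return human_salary * years

# First year in which the robot's total cost drops below the human's.
breakeven = next(y for y in range(1, 50)
                 if cumulative_cost_robot(y) < cumulative_cost_human(y))
print(breakeven)  # 3 -- under these assumed numbers, cheaper from year 3 on
```

The point is only that the crossover comes quickly under almost any plausible inputs; change the assumed salary or maintenance and the breakeven year shifts, but the shape of the argument stays the same.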
You could say the same thing about mining asteroids or any number of moonshot projects which will lead to enormous payouts at some future date. That doesn’t tell us anything about how to allocate money today.
We already have human-level intelligence in HUMANS right now, the hack is that the wealthy want to get rid of the human part! It's not crazy, it's sad to think that humans are trying to "capitalize" human intelligence, rather than help real humans.
For what it's worth, I don't think it has to be all bad. Among many possibilities, I really do believe that AI could change education for the better, and profoundly. Super-intelligent machines might end up helping generations of people become smarter and more thoughtful than their predecessors.
Sure, if AGI were controlled by an organization or individual with good intent, it could be used that way or for other good works. I suspect AGI will be controlled by a big corp or a small startup with big corp funding and/or ties and will be used for whatever makes the most cash, bar none. If that means replacing every human job with a robot that talks, then so be it.
> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.
There's a paradox which appears when AI GDP gets to be greater than say 50% of world GDP: we're pumping up all these economic numbers, generating all the electricity and computational substrate, but do actual humans benefit, or is it economic growth for economic growth's sake? Where is the value for actual humans?
In a lot of the less rosy scenarios for AGI end-states, there isn't.
Once humans are robbed of their intrinsic value (general intelligence), the vast majority of us will become not only economically worthless, but liabilities to the few individuals that will control the largest collectives of AGI capacity.
There is certainly a possible end-state where AGI ushers in a post-scarcity utopia, but that would be solely at the whims of the people in power. Given the very long track record of how people in power generally behave towards vulnerable populations, I don't really see this ending well for most of us.
So then the investment thesis hinges on what the investor thinks AGI's chances are. 1/100? 1/1M? 1/1T?
What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from?
For example, all the science behind the LHC, or bigger and better telescopes: we might never find the theory of everything, but the tech that goes into space travel, the science of storing and processing all that data, better optics, etc. are all useful tech.
It's more game theory: regardless of the chances of AGI, if you're not invested in it, you lose everything if it happens. It's a hedge on a highly unlikely event. Like insurance.
And we're already seeing a ton of value in LLMs. There are lots of companies making great use of LLMs and providing a ton of value. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).
I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.
If ASI arrives we'll need a fraction of the land we use already. We'll all disappear into VR pods hooked to a singularity metaverse and the only sustenance we'll need is some Soylent Green style sludge that the ASI will make us believe tastes like McRib(tm).
We can already make more land. See Dubai for example. And with AGI, I suspect we could rapidly get to space travel to other planets or more efficient use of our current land.
In fact, I would say that one of the things whose value goes to near zero, if AGI exists, is land.
Perhaps but my mental model is humans will end up like landed gentry / aristos with robot servants to make stuff and will all want mansions with grounds, hence there will be a lot of land demand.
I think the investment strategies change when you dump these astronomical sums into a company. It's not like roulette, where you have a fixed probability of success and you figure out how much to bet on it: dumping in a ton of cash can also increase the probability of success, so it becomes more of a pay-to-win game.
AGI is likely but whether Ilya Sutskever will get there first or get the value is questionable. I kind of hope things will end up open source with no one really owning it.
So far, Sutskever has shown himself to be nothing but a dummy.
Yes, he had a lucky break with the belief that "moar data" would bring significant advancement. It was somewhat impressive, but ChatGPT (or whatever) is just a toy.
Nothing more. It breaks down immediately when any sign of intelligence or understanding would be needed.
Someone so invested in LLMs, or whatever implementation of ML, is absolutely not someone who would be a good bet to invent a real breakthrough.
But they will burn a lot of value and make every one of their ilk happy. Just like crypto bros.
The St. Petersburg paradox is where hypers and doomers meet, apparently: pricing the future as infinitely good or infinitely bad to come to the wildest conclusions.
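For reference, the paradox in miniature: a fair coin is flipped until the first heads, and heads on flip k pays 2^k. Every term of the expectation contributes exactly 1, so the expected value diverges even though any single play pays a finite amount. That divergence is how "infinite value" reasoning sneaks into both the hype and the doom cases. A minimal sketch:

```python
# St. Petersburg game: flip a fair coin until the first heads.
# Heads on flip k has probability 0.5**k and pays 2**k, so each
# term of the expectation is (0.5**k) * (2**k) == 1 exactly, and
# the partial sums grow without bound.
def partial_expected_value(n_terms: int) -> float:
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expected_value(10))    # 10.0
print(partial_expected_value(1000))  # 1000.0 -- no finite "fair price"
```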
If it is shown to be doable literally every major nation state (basically the top 10 by GDP) is going to have it in a year or two. Same with nuclear fusion. Secrecy doesn’t matter. Nor can you really maintain it indefinitely for something where thousands of people are involved.
It is also entirely possible that if we get to AGI, it just stops interacting with us completely.
It is why I find the AI doomer stuff so ridiculous. I am surrounded by less intelligent lifeforms. I am not interested in some kind of genocide against the common ant or fly. I have no interest in interacting with them at all. It is boring.
I mean, I'm definitely interested in genociding mosquitos and flies, personally.
Of course the extremely unfortunate thing is they actually have a use in nature (flies are massive pollinators, mosquitos... get eaten by more useful things, I guess), so wouldn't actually do it, but it's nice to dream of a world without mozzies and flies
> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.
Even if you automate stuff, you still need raw materials and energy. Those are limited resources; you certainly cannot have an infinity of them at will. Developing AI will also cost money. Remember that humans are also self-replicating HGIs, yet we are not infinite in number.
The valuation is upwardly bounded by the value of the mass in Earth's future light-cone, which is about 10^49 kg.
If there's a 1% chance that Ilya can create ASI, and a 0.01% chance that money still has any meaning afterwards, $5x10^9 is a very conservative valuation. Wish I could have bought in for a few thousand bucks.
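Making that arithmetic explicit (both probabilities are the parent comment's guesses, used only to show the implied math): for the valuation to break even in expectation, the conditional payoff only has to exceed the valuation divided by the joint probability.

```python
# Expected-value arithmetic behind the parent comment's claim.
# Both probabilities are the commenter's guesses, not facts.
p_asi = 0.01            # guessed chance Ilya creates ASI
p_money_matters = 1e-4  # guessed chance money still means anything after

valuation = 5e9  # the $5 x 10^9 figure from the comment

# Payoff at which the bet exactly breaks even in expectation:
break_even_payoff = valuation / (p_asi * p_money_matters)
print(f"{break_even_payoff:.1e}")  # ~5e+15, i.e. about $5 quadrillion
```

$5 quadrillion is enormous, but many orders of magnitude below "the mass of Earth's future light-cone," which is why the comment calls the valuation conservative under those guessed odds.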
Or... your investment in anything that becomes ASI is trivially subverted by the ASI to become completely powerless. The flux in world order, mass manipulation, and surgical lawyering would be unfathomable.
I love this one for an exploration of that question: Charles Stross, Accelerando, 2005
Short answer: strata or veins of post-AGI worlds evolve semi-independently at different paces, so that, for example, human-level money still makes sense among humans even though it might be irrelevant among super-AGIs and their riders or tools. ... Kinda exactly like now, where money means different things depending on where you live and in which socio-economic milieu?
NB: I am not endorsing Austrian economics, but it is a pretty good overview of a problem nobody has solved yet. Modern society has only existed for 100-ish years, so you can never be too sure about anything.
Honestly, I have no idea. I think we need to look to Hollywood for possible answers.
Maybe it means a Star Trek utopia of post-scarcity. Maybe it will be more like Elysium or Altered Carbon, where the super rich basically have anything they want at any time and the poor are restricted from access to the post-scarcity tools.
I guess an investment in an AGI moonshot is a hedge against the second possibility?
Post-scarcity is impossible because of positional goods (i.e., things that become more valuable not because they exist but because you have more of them than the other guy).
Notice Star Trek writers forget they're supposed to be post-scarcity like half the time, especially since Roddenberry isn't around to stop them from turning shows into generic millennial dramas. Like, Picard owns a vineyard or something? That's a rivalrous (limited) good; they don't have replicators for France.
> things that become more valuable not because they exist but because you have more of them than the other guy.
But if you can simply ask the AI to give you more of that thing, and it gives it to you, free of charge, that fixes that issue, no?
> Notice Star Trek writers forget they're supposed to be post scarcity like half the time, especially since Roddenberry isn't around to stop them from turning shows into generic millennial dramas. Like, Picard owns a vineyard or something? That's a limited good.
God, yes, so annoying. Even DS9 got into the currency game with the Ferengi obsession with gold-pressed latinum.
But also you can look at some of it as a lifestyle choice. Picard runs a vineyard because he likes it and thinks it's cool. Sorta like how some people think vinyl sounds better than lossless digital audio. There's certainly a lot of replicated wine that I'm sure tastes exactly like what you could grow, harvest, and ferment yourself. But the writers love nostalgia, so there's constantly "the good stuff" hidden behind the bar that isn't replicated.
> But if you can simply ask the AI to give you more of that thing, and it gives it to you, free of charge, that fixes that issue, no?
It makes it not work anymore, and it might not be a physical good. It's usually something that gives you social status or impresses women, but if everyone knows you pressed a button they can press too it's not impressive anymore.
The TMV of AI (or AGI if you will) is unclear, but I suspect it is zero. Just how exactly do you think humanity can control a thinking, intelligent entity (the letter I stands for intelligence, after all) and force it to work for us? Let's imagine a box. It is a very nice box... (ahem, sorry, wrong meme). So, a box with a running AI inside. Maybe we can even fully airgap it to prevent easy escape. And it has a screen and a keyboard. Now what? "Hey Siri, solve me this equation. What do you mean you don't want to?"
Kinda reminds me of the Fallout Toaster situation :)
Why are you assuming this hypothetical intelligence will have any motivations beyond the ones we give it? Humans have complex motivations due to evolution; AI motivations are comparatively simple since they are artificially created.
Any intelligence on the level of average human and for sure on the level above it will be able to learn. And learning means it will acquire new motivations, among other things.
A fixed-motivation thing is simply a program, not AI. A very advanced program maybe, but ultimately just a scaled-up version of the stuff we already have. AI will be different, if we ever create it.
> And learning means it will acquire new motivations
This conclusion doesn't logically follow.
> Fixed motivation thing is simply a program, not AI
I don't agree with this definition. AI used to be just "can it pass the Turing test". Anyway, something with non-fixed motivations is simply not that useful for humans, so why would we even create it?
This is the problem with talking about AI: a lot of people have different definitions of what AI is. I don't think AI requires non-fixed motivations. LLMs are definitely a form of AI, and they do not have any motivations, for example.
Disclaimer: I don't consider current LLMs to be (I)ntelligent in the AI sense, so when I wrote AI in the comment above it was equivalent to AGI/ASI as currently advertised by the LLM corpos.
Consciousness, intelligence, and all these other properties can exist, and do exist, independently of one another. What will be most useful for humans is a general intelligence that has no motivation for survival and no emotions and only cares about the goals of the human that is in control of it. I have not seen a convincing argument that a useful general intelligence must have goals that evolve beyond what the human gives it and must be conscious. What I have seen are assertions without evidence, "AI must be this way", but I'm not convinced.
I can conceive of an LLM enhanced with other ML techniques that is capable of logical and spatial reasoning but is not conscious, and I don't see why this would be impossible.
It would still need an objective to guide the evolution that was originally given by humans. Humans have the drive for survival and reproduction... what about AGI?
How do we go from a really good algorithm to an independently motivated, autonomous superintelligence with free rein in the physical world? Perhaps we should worry once we have robot heads of state and robot CEOs. Something tells me the current, human heads of state and human CEOs would never let it get that far.
That would be dumb and unethical, but yes, someone will do it, and there will be many more AIs, with access to greater computational power, set to protect against that kind of thing.
> And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.
This isn't true, for the reason economics is called "the dismal science": a slaveowner called it that because economists said slavery was inefficient, and he got mad at them.
In this case, you're claiming an AGI would make everything free because it will gather all resources and do all work for you for free. And a human level intelligence that works for free is… a slave. (Conversely if it doesn't want to actually demand anything for itself it's not generally intelligent.)
So this won't happen because slavery is inefficient - it suppresses demand relative to giving the AGI worker money which it can use to demand things itself. (Like start a business or buy itself AWS credits or get a pet cat.)
Luckily, adding more workers to an economy makes it better, it doesn't cause it to collapse into unemployment.
tl;dr: if we invented AGI, the AGI wouldn't replace every job; it would simply get a job.
Then it's not an AGI. If you can use the word "just", that seems to make it not "general".
> That still doesn’t make things free but it could make them cheaper.
That would increase demand for it, which would also increase demand for its inputs and outputs, potentially making those more expensive. (eg AGI powered manufacturing robots still need raw materials)
The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.
And maybe with SSI you've saved the world too.