>We still have the power – we don't have to turn it over to the robots now or ever.
In theory, but can we control ourselves? There are a lot of financial incentives in developing better machines and algorithms. We couldn't stop nuclear weapons or global warming, and AI is a lot more attractive and powerful than either of those to business and governments. Not to mention that nuclear weapons and global warming are harmful in a very easily understood way, whereas AI might be harmful in very strange ways. It's like a pack of wolves left alone with a poisonous steak. It's a delicious steak and the poison is somewhat beyond their understanding; some wolves even think there's no poison at all. Isolating the poison from the steak is as difficult for the wolves as making safe AI is for us.
You can imagine how incredibly valuable an AI capable of doing a programmer's work would be. It's a technology far off, but not implausible.
But as soon as we reach that point, it seems unlikely that things won't spiral out of control. Imagine a thousand highly intelligent programmers who are capable of research, think fluently in statistics, and cooperate perfectly. Additionally, these programmers can examine and modify how their own brains work, boosting their performance and removing their errors. With the press of a button, they can also create more copies of themselves.
Everything might spiral out of control.
How do you make sure things don't spiral out of control?
International laws and regulations, as well as the doomsday-scenario planning that is surely already underway in the US government.
I would argue that we actually have stopped nuclear weapons thus far (not the proliferation, but the usage), as well as mass global terrorism, mass extinction from disease, the list goes on.
I agree that AI is a threat – I'm legitimately worried that an AI capable of doing a programmer's work isn't all that far off. I see things like Mozilla's Webmaker popping up, and it's obvious that the process will become more streamlined and automated as time goes on. I also don't think that all sense and reason goes out the window – we will come up with a way to solve it like we solve everything else: boring laws.
We should be more worried about the human beings who will no doubt use the new wave of machines for their own ill ends. I'm sure what the NSA has right now will look laughable compared to what it will have in 10 years.
> Consider a world with perfect income equality. In such a world, there is no economic incentive to work at all.
Nobody needs an economic incentive to work (please prove me otherwise). In fact, an economic incentive is the only reason we screw other people over and produce shitty work.
> Nobody needs an economic incentive to work (please prove me otherwise).
"Nobody does X" is too strong a statement; when you say something like that, the burden is on you to prove it (by asking everyone in the world, for example). I personally know quite a few people who wouldn't work if they didn't need to.
It's true that some people don't need an economic incentive to work, but others do.
To back up my claim: the birds with the fewest natural enemies and the richest environments spend the most time mating (which is our built-in incentive to do many great things).
> If I was a guy, they'd be happy to talk business and learn about my startup.
That's unlikely. If you were a guy, they wouldn't be talking to you at all, and it's even less likely they'd do so with genuine interest. That is the experience most males have at such conferences: few men get any kind of attention from anyone without putting in a lot of effort.
Having men lose interest in you after your fiancé is mentioned doesn't tell you anything about the likelihood of their seeing you only as a sexual object. You could just as well be a human object they want to love.
While pursuing his entrepreneurial dream, he walks around asking women for their used sanitary pads (while most men struggle to ask for a date), almost gets tied to a tree when a witch doctor incites the local villagers against him, has his wife leave him (only to come back after his success), and even has his mother abandon him; still he goes on.
And then, even after having practically everyone around him ostracize and abandon him, he doesn't go for the money, but remains humble and does the best he can to make the world a better place.
She did not return because of his success; she returned because he was no longer bringing shame upon his family. As the article mentions, there are many taboos in their society: there is the caste system that creates separation (even skin tint can), and of course the separation of the sexes. So his single-minded pursuit made life untenable for her where they lived; it even drove off his own mother. Note that she merely went back to her mother – she didn't run off with another man. Essentially, she was waiting until he either gave up the pursuit or succeeded; I am quite sure they would be together regardless.
Hell, there is odd separation in the States among some of the workers from that country. You can see it in the groupings: who has lunch together, who walks apart or turns down a hall when meeting.
No. I am saying there are many levels of separation of society in India, very similar to how separation exists in other cultures. However many Western nations prefer to highlight those faults in other countries and disclaim any such in their own.
That's how I understood it – possibly like how some 'Chicanos' in the US tend to have darker skin than 'white folks', though not all actually do; is that wrong?