Hacker News

It's not clickbait. You're being pedantic. It's clear the author meant true / general / strong AI and was merely addressing the hype of some non-AI researchers.

It's a response to people who think because of AlphaGo that we are on the cusp of achieving true / strong / general AI.

> An AI that solves a lot of problems better than humans, and transfers that learning across many problems with relative ease, is getting close to strong AI.

We're not close. See Yann LeCun's statement on AI after AlphaGo [1] and the HN response [2].

[1] https://www.facebook.com/yann.lecun/posts/10153426023477143

[2] https://news.ycombinator.com/item?id=11280744



I have always found the idea that there's a hard line between strong and not-strong AI extremely naive. The whole thing is a long gradient of progress. Strong AI is going to creep up on most programmers like the proverbial boiling frog, and internet forums will be full of people denying its existence throughout that process.


> I have always found the idea that there's a hard line between strong and not-strong AI extremely naive

You probably haven't studied much or any AI. In my opinion, those who haven't researched a subject don't understand the details and are likely to make more inaccurate predictions than experts.


What a randomly self-serving assumption to make.


I can't agree. If we can blur the difference between weak and strong AI, that 'transitional form' (missing link?) will be enormously important to the future of AI (and mankind).

In the past 50 years, AI has seen hundreds of small successes in narrow tasks that used to require humans. But none so far has shown the potential to scale up, generalize, and serve tasks other than the narrow one for which it was designed. Like IBM Watson, AlphaGo too is likely to be consigned to the AI scrapheap in the sky.

BUT... the deep net technique used by AlphaGo shows more promise to solve the remaining unsolved AI tasks than any AI method before it. Yes, we still don't know DL's limits, like whether it can integrate one-shot learning, or build and reuse a diverse knowledge base, or transfer specific methods to solve new, more general problems. But as of right now, it's shown greater promise to solve novel weak AI tasks than any past technique I've seen. The author overlooks that potential deliberately and provocatively, and IMO, pointlessly.

Can DL scale up into strong AI too? I think the important thing here isn't that the answer isn't obviously yes (as the author posits), but that the answer isn't obviously no. And in the 50+ year quest for strong AI, that's a first, at least for me.


It's not self-serving to say PhDs are better than me at predicting the future of their field.

It would be self-serving if I lied in an interview and said I was qualified for a job making robots when I'd never had any experience doing so.


It is easy to see whether a title is clickbait. Just replace it with a stupidly long title, such as "AlphaGo does not solve the problem of meaning, so it is not AGI", and ask yourself whether that title would have generated a similar number of clicks. The title is as clickbait as it gets.


No, clickbait is "10 ways to find your spouse".

This is just a short title, and ITT people didn't read the first paragraph.


If you read the first paragraph, it first mentions AI, then AGI, and does not deny that the latter is a subset of the former. In the second paragraph it even mentions that today's AI is not AGI.

So it uses a strong claim — i.e. AlphaGo is not AI — then subsequently changes it to a smaller claim — AlphaGo is not AGI.

Reading the entire article does not change the fact that this can be seen as a technique to attract readers: make a strong claim, later narrow it to a claim about a subset, then insist the narrower claim was the point all along. Moving the goalposts.


Poppycock. Anyone who understands software knows AlphaGo used AI tech and that the title is referring to general AI. Anyone who doesn't will have their understanding clarified in the first paragraph.


There's a clear and meaningful difference between "not AI" and "not strong AI".

With respect, you are being pedantic.


Look, if I walk up to somebody and say AlphaGo is not AI, they'll understand what I'm saying, because the layperson doesn't distinguish between AI and strong AI. Only techies do.

I think that's what the author was thinking when he wrote the title, and I'm giving him the benefit of the doubt.

I'm being flexible, not picky. Disagree with me? Fine, I really don't care.


Then why not title the article "AlphaGo is not strong AI"?


Read the article. The title isn't a substitute for reading. It's short because your attention span is short.


You didn't answer my question. There's an advantage to using the long title "AlphaGo is not strong AI": fewer people will be angry at a perceived clickbait title. What's the advantage of using the short title "AlphaGo is not AI"?


You're asking me why the author chose this title? I am not the author.

I could guess that he didn't use a qualifier because there are many different ones and none is universally accepted. Or, maybe he wanted to reach a non-tech audience who wouldn't know the difference between strong AI and AI.

There are plenty of good reasons, and reading the article clarifies the author's meaning.


If that were true, the article would be useless, because nobody ever said that AlphaGo was 'strong' AI.


> It's clear the author meant true / general / strong AI

Perhaps we need a new term to refer to "true" AI, preferably one that matches people's general notions of what an AI should be. "Synthetic Thought" maybe?


People just need to read the article. The author describes what he meant in the first paragraph.



