Hacker News

Speaking as an AI researcher, almost nothing is "AI" research. In practice I feel most current AI research falls into two categories:

* Fuzzy problems -- image, sound, and free-text recognition, where there is no real "true answer".

* Problems too hard to solve in a reasonable time without heuristics -- SAT, scheduling, etc. In practice NP-hard problems and further up the complexity hierarchy -- AlphaGo goes here.
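To make the second category concrete: a toy sketch of the heuristic approach, using greedy nearest-neighbour for the (NP-hard) travelling salesman problem. The instance and function name are invented for illustration.

```python
import math

def nearest_neighbour_tour(points):
    """Greedy heuristic for the NP-hard travelling salesman problem:
    always visit the closest unvisited city next. Runs in polynomial
    time but offers no optimality guarantee -- which is exactly what
    makes it a heuristic rather than an exact algorithm."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at the first city
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 5), (1, 0), (6, 5)]))  # -> [0, 2, 1, 3]
```

The tour it returns can be arbitrarily worse than optimal on adversarial instances, but in practice such heuristics make otherwise intractable problems usable.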

Once we know how to do something reliably, it stops being AI and just becomes "an algorithm" :)



Agree. Feng-hsiung Hsu, the chief designer of Deep Blue, the chess machine that beat Kasparov, once gave a talk at my university. I didn't go to the talk, but in the abstract he stated clearly that he didn't think Deep Blue was AI -- it was just a search algorithm.

I don't understand why so many commenters here are so furious about the author's claim that AlphaGo is not AI. What qualifies as AI is really a matter of definition, and there doesn't seem to be any widely accepted one among AI researchers. The author of the IEEE article doesn't think AlphaGo matches his definition of AI. Other AI researchers may think otherwise. But that doesn't mean the author is trying to dismiss the achievement of the AlphaGo team.


>I don't understand why so many commenters here are so furious about the author's claim that AlphaGo is not AI.

I don't think they really are. I think they are annoyed because nobody (who knows the difference) ever claimed that AlphaGo was strong/true/general AI -- and the article feels like it's swatting at straw men.


> Once we know how to do something reliably, it stops being AI and just becomes "an algorithm" :)

This.

AI is a moving target and always seems to be defined as "what we can't do right now."


Is there any research into making a "generic" AI that can solve any problem without the researcher first having to know what that problem is? I.e., human-style learning.


Certainly some very clever people are trying, and have been since at least the 60s, but as far as I am aware the progress is limited (though it is not my area of expertise).

I think the main problem is that you end up needing a language to describe the problem, which either limits the problems that can be solved, or forces you to explain the problem so carefully that it feels like cheating.


As a researcher in multiagent systems, I know this problem very well. Which is, I believe, exactly the point the article makes. But it goes on to say that this might be possible to overcome through embodiment and developmental psychology. Not new arguments, for sure, but valid ones, raised at the right time (a year ago), amid a new wave of AI hype.


Machine learning algorithms can solve many problems without knowing what they really are, given only examples (sometimes even without labels, when unsupervised learning applies).

We are still quite far from anything we could call "human-style learning", but we are definitely getting there (just look at all the recent publications on reinforcement learning and elaborate ways to use memory in neural nets).


I think the parent is referring less to a system that can construct a single model without an explicit schema, and more to the thing that'd be a few steps after that—the ability to:

1. dynamically notice world-features ("instrumental goal features") that seem to correlate with terminal reward signals;

2. build+train entirely new contextual sub-models in response, that "notice" features relevant to activating the instrumental-goal feature;

3. shape goal-planning in terms of exploiting sub-model features to activate instrumental goals, rather than attempting to achieve terminal preferences directly. (And maybe also in terms of discovering sense-data that is "surprising" to the N most-useful sub-models.)

In other words, the AI should be able to interact with reward-stimuli at least as well as Pavlov's dog.

Right now, ML research does include the concept of "general game-playing" agents—but AFAIK, these agents are only expected to ever play one game per instance of the agent, with the generality being in how the same algorithm can become good at different games when "born into" different environments.

Humans (most animals, really) can become good at far more than a single game, because biological minds seem to build contextual models that communicate with—but don't interfere with—the functioning of the terminal-preference-trained model.

So: is anyone trying to build an AI that can 1. learn that treats are tasty, and then 2. learn to play an unlimited number of games for treats, at least as well as a not-especially-smart dog?


Not entirely sure that is a fair description of human-style learning. Our overall problem to solve is "survive and reproduce". Anything else can be seen as just a sub-problem of that. Humans are taught by other humans how to solve problems from the day they are born. Our DNA passes on millions of generations' worth of learning from our ancestors about how to solve problems.


It is a fair description -- that is, being able to enumerate a large number of arbitrary goals and define a large number of basic pattern classifiers/feature extractors.

When people give credit to the human designers for AlphaGo's wins, saying that it is really a win for humanity, I disagree. The wins are AlphaGo's, even if the design is a product of human ingenuity.

When you say that the outputs of human ingenuity should be credited to evolution, I similarly disagree. You might as well credit evolution for AlphaGo's win. While it is true that evolution invented the first AGI (and is, in some though not all ways, a superior intelligence to it), it still makes sense to separate the products of human learning from whatever structural priors DNA passed along. I'll also point out that, compared to most animals, humans actually have weaker priors and spend a lot of their early days learning to learn.


Tangential quibble: for anything that could be called "human-style" learning, which presumably requires abstract intelligence and cultural transmission, we're probably looking at something that has existed only during the period of human behavioral modernity -- commonly taken to date from roughly 40,000-50,000 years ago. Assuming a generation is ~20 years (and there are some arguments that the average generation might have been closer to 25 years), that's only 2,500 generations.

I think it's fascinating that we have developed from un-self-reflective animals to abstract thinkers on the verge of creating wholly new abstract-thinking entities from scratch, in only two and a half thousand steps. Especially given that the majority of the necessary technical knowledge was developed only in the last 500 years, or 25 generations.


Humans need just a bit more supervision than that.



