
Ya know what, AI might be the most hand-wavy term ever adopted in the tech world (and that's saying something next to big data, IoT, blockchain, etc.)

No one can really define precisely what AI is. In autonomous driving alone, there are six different "levels" of automation. If I write a decision tree algorithm, is that AI?

My point is, you can basically use the term AI to justify virtually anything when it comes to the value of software, which just makes articles like this not valuable at all. Why not just call it "this is what software is capable of doing for a business"?

One thing is for sure about "AI", or really just "advancements in software": repeatable human tasks are being, and will continue to be, replaced by automated systems. The only thing left for non-software business interaction will be simply dealing with other human beings.



I would say the current wave of AI is just training and inference.

Training is getting computers to recognize a pattern by giving them positive and negative examples. It uses huge amounts of compute, and you get a model out of it.

Inference is using that model to recognize that pattern in new data. It uses fewer resources, so it can process lots of data quickly or run on a small system.

So when I hear hand-wavy stuff, I just sort of mentally think - what pattern are they looking for, what is their source data and what data will they use it on.
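Concretely, here's a toy sketch of that split (all names made up, no real framework assumed): a nearest-centroid classifier, where training crunches the labeled examples into a tiny model, and inference only needs that model.

```python
# Minimal sketch of the training/inference split: a nearest-centroid
# classifier. Training digests labeled examples into a small "model"
# (one centroid per class); inference only needs the model, not the data.

def train(positives, negatives):
    """Training: compute one centroid per class from labeled examples."""
    def centroid(points):
        n = len(points)
        return [sum(xs) / n for xs in zip(*points)]
    return {"pos": centroid(positives), "neg": centroid(negatives)}

def infer(model, point):
    """Inference: label a new point by whichever centroid is nearer."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    near_pos = dist2(point, model["pos"]) < dist2(point, model["neg"])
    return "pos" if near_pos else "neg"

model = train(positives=[[9, 9], [8, 10]], negatives=[[1, 0], [0, 2]])
print(infer(model, [7, 8]))   # near the positive cluster -> "pos"
print(infer(model, [1, 1]))   # near the negative cluster -> "neg"
```

Real training is vastly more compute-hungry, but the shape is the same: the expensive pass produces a small artifact, and the cheap pass applies it to new data.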


"Training is getting computers to recognise a pattern by giving positive and negative examples"

I think you are only talking about binary classification. Even multi-class classification is routine at the state of the art now. But yes, ultimately this is a pattern-recognition process.


Hence the AI moniker is just the wrong word to use, I think. I like ML a lot more; even better would be Machine Cognition Learning.


My go-to definition is "machine enhanced statistics"


If you take a look at the papers in this field [1] you will see they almost never mention the words 'artificial intelligence'. It's 99% the press, peddled by journalists trying to exploit every angle of sensationalism and fear.

[1] http://www.arxiv-sanity.com/


It's because "artificial intelligence" is a very broad expression that doesn't say much when you're working in the domain. You won't find the expression "computer science" much either; that doesn't mean "computer science" is a sensationalist expression only used by journalists.


The names of the conferences and journals where they are published often have the term in them (see AAAI, IJCAI, JAIR, ECAI, etc.) so it's not like the term is not recognized in the scientific community.


Press is only half of the picture. The other half is investors being attracted to AI like moths to a flame.

And then there's the feedback loop. The media keep saying that "AI" is going to change the future (for better or worse), which makes it seem important. This creates fertile ground for investors to make money (whether because they believe a product will be revolutionary, or because they can sell it to a greater fool before it blows up). The resulting influx of money incentivizes everyone to attach "AI" to anything they can, no matter how hare-brained, in order to capture some of that money. Then the media notice all these new companies and report on how AI will change everything, for better or worse. Lather, rinse, repeat.


> My point is, you can basically use the term AI to justify virtually anything when it comes to the value of software, which just makes articles like this not valuable at all.

The article has SOME value! E.g., a nice list of what can go wrong.

I won't defend the article or AI, but the article did at least hint at what it meant by "AI": training from data or some such. Or, one might say, large-scale empirical curve fitting justified by its performance on the test data (there are some questions here, too, i.e., maybe some missing assumptions).

So, it appears that the article was relatively focused on what it regarded as "AI" and did not fall for the hype practice of calling nearly any software (from statistics, applied probability, optimization, simulation, etc.) AI. Uh, is the prediction of the motions of the planets and moons in our solar system from differential equations and Newton's laws "AI"? How about other applications of calculus, e.g., a little viral growth model

y'(t) = k y(t) (b - y(t))

Is that "AI"? I'd say no, but then what other work called AI should not be?
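For the record, that little model has a closed-form solution, so no "training" is needed at all. A quick sketch (parameter values made up for illustration) comparing naive numerical integration against the exact answer:

```python
import math

# The logistic ("viral growth") model y'(t) = k*y*(b - y) has the
# closed-form solution y(t) = b / (1 + ((b - y0)/y0) * exp(-k*b*t)).
# Below: forward-Euler integration vs. the closed form.

k, b, y0 = 0.5, 10.0, 0.1  # arbitrary illustrative constants

def exact(t):
    return b / (1 + ((b - y0) / y0) * math.exp(-k * b * t))

def euler(t, steps=100000):
    y, dt = y0, t / steps
    for _ in range(steps):
        y += dt * k * y * (b - y)  # step along y' = k*y*(b - y)
    return y

t = 2.0
print(exact(t), euler(t))  # both approach the carrying capacity b
```

Function approximation, curve fitting, even prediction of future behavior, yet nobody would call this AI, which is rather the point.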


To be honest, it's the same in research. I'm an AI researcher, and was before the current boom.

AI in research, as best as I can judge, means "Solving a problem where there isn't a simple polynomial-time algorithm which produces a predictable answer". I once heard someone say "Problems are AI until we know how to do them".


> "Problems are AI until we know how to do them".

Philosophy sits in a similar niche. Physics, chemistry, etc. were all philosophical problems until we learnt better, more specific ways to study them.


I really like this analogy. I'm reminded that Roger Needham used to say "If we knew what we were doing, it wouldn't be research." (citation: overheard from co-workers of his, also attributed to Einstein)


Those were metaphysical problems. There are other branches of philosophy that don't necessarily get handed off to science, like ethics and aesthetics, or existential questions and what makes life worth living.


You can stretch the term AI to refer to a lot of things, but not as far as I think you're implying. Even today, the vast majority of software written is unambiguously not AI, and the substantial majority of startups couldn't be called AI companies even as a dumb marketing tactic.


Also, some of the competition for most overstretched buzzword is pretty stiff. For example, all of those "blockchain" applications that are equivalent both philosophically and practically to one trusted party with an SQL database.

I don't think IoT is stretched at all: the name implies "refrigerators, but with GSM" and "cars, but they report your every move to the manufacturer", and that's what we have. They're "things" (objects) on the "internet" (a data-collection system for marketers).


> practically to one trusted party with an SQL database.

Nicely put. For some months I posted essentially this description to Fred Wilson's AVC.COM. Good to see that I'm not the Lone Ranger in this view.

IIRC I also mentioned that the SQL database could be both fault-tolerant, including redundant, and distributed. Also, last time I read up on SQL (e.g., Microsoft's SQL Server), it seemed that the options for security, authentication, capabilities, access control lists, and encryption were good.


The majority of software written today is code bureaucracy, linking together people and decision-makers.

"AI" can be stretched very far. If I apply linear regression to user-entered numbers to suggest something to them, I can call it "AI" and be less dishonest about it than a lot of startups on the market.

(LR at least isn't a glorified RNG, which a DNN can become if one's not careful, or if one applies it to a domain that doesn't give obvious, visual feedback.)
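To make the point concrete, the entire "AI feature" could be something like this (a toy sketch; the function name is made up):

```python
# Toy "AI feature": fit a line to the numbers the user entered and
# suggest the next one. Plain least squares, no ML framework at all.

def suggest_next(values):
    n = len(values)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(values) / n
    # Ordinary least-squares slope and intercept for y = slope*x + intercept.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # extrapolate one step ahead

print(suggest_next([2, 4, 6, 8]))  # -> 10.0
```

Fifteen lines of arithmetic, but "our AI predicts your next value" fits right into a pitch deck.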


No dataset, no training, no layers = not AI


Classically speaking, any time you gave a computer a goal to optimize for, it was called "AI"; a large amount of AI research in the 80's and 90's was about how to optimize search. Thus SQLite's website [1], for instance, (correctly) says: "The query planner is an AI that tries to pick the fastest and most efficient algorithm for each SQL statement." Optimizing database queries was a common topic for artificial intelligence research in the 80's and 90's.

It's just that now we're so used to the idea of a computer searching for the best flight, the best map, the best way to optimize your code, the best way to execute your SQL query, that it doesn't seem like "magic" any more.

When you say "no dataset, no training" you're talking about machine learning. And when you say "layers", you're talking about deep learning.

[1] https://sqlite.org/queryplanner.html


Genetic algorithms might have fallen out of favour recently but I don't think anyone would exclude them from AI as a category. The field in my experience has always contained a lot of planning and inference which you seem to be excluding.


LR absolutely has datasets and training. You train a regression model in much the same way as you train any other ML model, DNNs included - you use some data to derive its parameters, use the rest to validate.

LR typically has number of layers = 1, though I bet someone imaginative enough could create "deep linear regression", if that was useful for anything.
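The workflow has the same shape for LR as for any other model. A sketch (synthetic data, made-up constants), fitting on one slice and validating on a held-out slice:

```python
import random

# Same train/validate shape as any ML model: derive the parameters from
# one slice of the data, then measure error on a held-out slice.
random.seed(0)
data = [(x, 3.0 * x + 1.0 + random.uniform(-0.1, 0.1)) for x in range(20)]
train, held_out = data[:15], data[15:]

# "Training": least-squares fit of y = a*x + b on the training slice.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
a = sum((x - mx) * (y - my) for x, y in train) \
    / sum((x - mx) ** 2 for x, _ in train)
b = my - a * mx

# "Validation": mean absolute error on the held-out slice.
mae = sum(abs((a * x + b) - y) for x, y in held_out) / len(held_out)
print(a, b, mae)  # a ~ 3.0, b ~ 1.0, mae small
```

Swap the two least-squares lines for gradient descent on a DNN and the surrounding scaffolding is unchanged, which is exactly the parent's point.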


The guys who coined the term "AI" strongly disagree.


> If I write a decision tree algorithm, is that AI?

It depends on how that decision tree is produced. If you wrote it all by yourself, it's not. I mean, you just wrote a plain algorithm by yourself. A decision tree is a program, and vice versa.

If, OTOH, the tree is inferred from examples (good old CART for instance) or generated on the fly (good old alpha-beta for instance), it definitely is AI.

And actually, I'd say it's a good definition for AI. AI starts at the moment you don't write the decision trees / algorithms yourself but, instead, write a program that will produce the actual decision tree / algorithm.

Writing a program? Not AI. Writing a program that writes a program? Definitely AI.
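In that spirit, here's the "program that produces the decision tree" in miniature: a one-split decision stump learner, a toy cousin of CART (sketch only, not production code):

```python
# A program that writes the "program": learn a one-split decision
# stump (a toy cousin of CART) from labeled examples.

def learn_stump(points, labels):
    """Pick the (feature, threshold, side) split with the fewest errors."""
    best = None
    for f in range(len(points[0])):
        for t in sorted({p[f] for p in points}):
            for left in (0, 1):
                errs = sum(
                    (left if p[f] <= t else 1 - left) != y
                    for p, y in zip(points, labels)
                )
                if best is None or errs < best[0]:
                    best = (errs, f, t, left)
    _, f, t, left = best
    # The returned lambda IS the inferred decision tree.
    return lambda p: left if p[f] <= t else 1 - left

# Here the label depends only on the second feature.
tree = learn_stump(
    [[1, 2], [3, 9], [4, 1], [2, 8], [9, 3]],
    [0, 1, 0, 1, 0],
)
print(tree([0, 7]), tree([9, 3]))  # -> 1 0
```

Nobody hand-wrote the split; the program found it from examples, which is the line being drawn here.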


I would disagree. What you're describing above is the difference between machine learning and AI. AI is a broader term that encompasses machine learning.


Nope. Alpha-beta (which I'm talking about in my post) is definitely not machine learning, for instance, and yet it is definitely an AI technique.


I would agree that it is. I would also say that handwritten decision trees are AI. The method of creating them wasn't AI but just Intelligence. The resulting program is a form of artificial intelligence. What do you think :)


>Writing a program that writes a program? Definitely AI.

So do you think the following is AI? print("print('Hello World!')")

It is a program which writes a program, so by what you've just said, it's AI...

Aside: For people interested in programs that write programs (that write programs, that write programs, that write programs ...), the following repo might be interesting: https://github.com/semitrivial/ions


You're taking my sentence a bit too literally there. Take it as a heuristic, not as a formal definition.

Anyway nobody's able to give a compact definition of AI that cannot be destroyed by a ton of counter-examples.


Compilers are AI?


"Plain" compilers are just translating from one notation to another, so they aren't.

But optimizing compilers that analyze your code and find that portions of it can be rewritten to make the produced executable more efficient? I'd say that sounds a lot like AI, yeah. I wouldn't be shocked if someone told me "gcc's -O3 option uses AI techniques".


If you have a compiler that can write a correct program from examples of the desired output, definitely yes.


That's usually called program synthesis, and a lot of it uses formal methods. It isn't usually considered AI (but recently the field started incorporating a lot of ML, like pretty much anything in CS these days.)


Correct me if I'm wrong, but I think program synthesis works more from writing very general formal specifications, rather than learning from a sample of desired outputs.

The latter is called Programming By Example, and the pattern-finding process needed to infer generalities from specific samples does benefit from machine learning techniques.


"programming by example" is a form of program synthesis, where the spec is provided by the examples. It is a formal specification, albeit incomplete. Program synthesis, as a concept, does not require the specification to be complete.
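A degenerate but concrete version of programming by example (hypothetical sketch, candidate list made up): brute-force search over a tiny expression space for a program consistent with the given I/O pairs.

```python
# Programming by example, in miniature: search a tiny space of
# candidate programs for one consistent with the I/O examples.
# (The examples serve as the specification, which is incomplete.)

CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
    ("x - 1", lambda x: x - 1),
]

def synthesize(examples):
    """Return the first candidate program matching every (input, output) pair."""
    for name, fn in CANDIDATES:
        if all(fn(i) == o for i, o in examples):
            return name
    return None

print(synthesize([(2, 4), (3, 9)]))  # -> "x * x"
print(synthesize([(2, 4), (3, 6)]))  # -> "x * 2"
```

Real synthesizers search vastly richer spaces and prune with formal reasoning or learned guidance, but the spec-as-examples idea is the same.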


I have seen massive failures where management got sucked into the AI hype and failed to realize the most basic of advantages. They hired for fancy titles like Data Scientist: people who were great at building simple prototypes but failed every time they met real-world use cases.


It is said that the true definition of AI is: "Awaiting Implementation."


This field, call it what you want, is exploding with papers, implementations, datasets, frameworks and courses. By any name you want, it is quite useful and has many applications. But if you prefer, you can just get hung up on the term AI.


Or augmented intelligence, because that's what we're really doing with computers. Augmenting human intelligence, not replacing it.


Anything that is function approximation is AI. This includes decision trees.


Not necessarily. Numerical integration of differential equations is function approximation. Yes the algorithm you would use may be an “artificial” implementation of “intelligence” but it is not AI in the sense people usually think of it today.

The bigger picture is that function approximation is ubiquitous in applied mathematics, whether it be numerical calculus or statistics. Machine learning encompasses the state-of-the-art, big-compute techniques that have made statistical function approximation so much more useful.


I'd say code generation is AI. That would make Lisp macros a degenerate case (though a Lisp macro can implement a decision tree or a DNN). It would also make DNNs and classical ML both degenerate cases across a different dimension - you have a fixed box called "bunch of linear algebra" with lots of input parameters, and code generation amounts to determining the best sets of those parameters (remember, code = data). Genetic programming (not genetic algorithms) would be in the middle somewhere, because it involves generating Turing-complete code and not just numbers.


Right, it's a buzzword.

Buzzwords are very similar to phatic expressions (aka small talk). They're words that don't convey much information, because they lack stringent definitions. As such, they do little other than feed confirmation biases. Which is why you often find these kinds of terms used effectively in marketing... and amongst those in management...



AI is just "if" statements with good marketing.


AI - amplified "if"s



