Early Artificial Intelligence Projects: A Student Perspective (2006) (csail.mit.edu)
60 points by onemind on May 8, 2023 | hide | past | favorite | 7 comments


When I worked in the AI Lab as an undergrad, it was clear even to the advocates of GOFAI that the systems were brittle and only worked in special cases. Nevertheless the work went on, in part because we always learned something about what made these problems (e.g., image classification) hard for rule-based systems even though humans seem to solve them effortlessly. (I suppose cool tech sometimes emerged from these efforts that had nothing directly to do with the task at hand, and that's another partial justification.)

Now we have systems that are starting to catch up to and in some cases surpass human performance, and yet we (or at least I) have a hard time articulating what it is we learn about these types of problems from current statistical AI. I sometimes feel there is some kind of uncertainty principle at work here.


I studied AI with Prolog and Lisp in the '90s but dropped it because it felt laughable. I switched to neural computing with hardware neural networks, but dropped that too, because it also felt laughable. How times have changed.


A scant 11 years ago, I was about to obtain an MSCS degree and was chatting with my fellow students about what they wanted to do. There was talk about designing compilers or webapps or games or databases. I mentioned I thought AI was pretty cool, and the universal answer was that I shouldn't waste time on that.


Wow, at that point we were at most a year from the first signs of the AI revolution that came with deep learning, and three years from DL exceeding human performance at multiple specialised tasks. You were not far off!


Just out of curiosity, what are you studying now that you feel is laughable?


> intelligence is the computational part of the ability to achieve goals in the world

I am a big fan of the idea of multiple intelligences, so I like this definition, because the ability to achieve a goal on a chessboard doesn't necessarily translate to solving word problems or navigating social situations. However, I am very aware that there is no single agreed-upon definition, and the Wikipedia page on the topic even mentions that "intelligence" might just amount to pointing at ourselves.


One of my favorite AI generated essays argues that intelligence is “doing whatever humans do.”

https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/

