
Also, AIs are not constrained to the senses humans share. Our information intake is mediated by only a few inputs: a stream of light sampled as a matrix within a very narrow slice of the possible spectrum; a stream of air-pressure variations decoded into frequencies; touch, and so on. AIs could have many more modes of input and output.

Also… it seems weird to count "human" inputs as embodied but direct access to information stored elsewhere as not; surely disk platters, SSDs, and other storage devices exist in the real world too, no?



Absolutely correct: more senses are definitely available to an AGI. But in the context of building an AGI judged against "like a human" criteria, whether implied or explicit, and the thesis that embodied senses are a prerequisite to meeting those criteria, we don't know whether having more senses than a human might thwart the development of the first AGI. Current thinking holds that atypical sensory processing tends to accompany neuro-atypical development such as autism, so there is some evidence for caution against tossing in every input we can imagine (I wonder whether that thinking is based on empirical fMRI scans). What we do have, however, is the corpus of data built up by humans, which says: no known case in medical history of sapience failing to develop with this minimal set of senses.

My cursory search says hearing and sight are not necessary, leaving us with smell, taste, and touch. Smell and taste are closely interlinked, so there is a possibility that simulated smell-taste and touch are all that is required to "bootstrap" the first AGI, so to speak.

Your point recalls the scene in Battlestar Galactica (the reboot) where Cavil fulminates, "I don't want to be human." [1] I suspect that once we bootstrap the first AGI you will be proven absolutely correct: we will rapidly add more senses to its repertoire. But we have only one data point to use as a template for building an AGI, ourselves, so I also suspect the first AGI will somewhat resemble us. If so, and if the embodiment principle is one of the right routes to take, then it makes sense to me to simulate what we empirically determine to be the minimum sense set, and then expand from there. Walk before we run, so to speak.

It wouldn't hurt for someone investigating the embodiment principle to build in as many senses as they can envision, though. Many eyes, and all that, for attacking the problem space. I'm just expressing an uninformed gut feeling here, as I am not in the AI research space.

[1] https://www.youtube.com/watch?v=pM3CptVZDYU



