On Philosophy

July 26, 2006

Why We Won’t See AI Anytime Soon

Filed under: Mind — Peter @ 1:10 am

When people talk about AI there are two fundamentally different kinds of programs they may mean. One type is the expert system. An expert system, as I am defining it here (the term also has a more specific meaning), is a program that is good at solving one kind of task, and that probably also has abilities we normally associate with humans rather than computers, such as learning from its mistakes. The other kind is the artificial consciousness, and it is artificial consciousness that I don’t think we will see soon. An artificial consciousness is aware in basically the same way we are, and thus we would expect it to be able to perform any kind of task (assuming it didn’t get bored with it).

Currently AI researchers are focused on creating expert systems. It is easier to make noticeable progress on expert systems, and they can build on a long history of previous AI research. Even the AI projects that attempt to solve fairly general problems are designed as expert systems. Expert systems, however, are not by their nature conscious. By this I mean that consciousness is not designed as part of the system; the system is composed only of algorithms for solving the task at hand. This of course makes sense if you are trying to solve a specific problem; I am not trying to slight the efforts of the researchers who are working on these systems. For example, consider attempting to create a program to pass the Turing test. I would consider such a program an expert system in conversation. Although such a system hasn’t been successfully created yet, we can already see what kind of program it will be: it will analyze sentences by breaking them up into parts, compare them with past sentences, and use a database and various conversation-simulation algorithms to create a human-like response.
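To make the picture concrete, here is a minimal sketch of the kind of pipeline I have in mind. It is purely my own illustration, not any real chatbot or Turing-test entrant; the tiny RESPONSE_DB and the similarity threshold are assumptions made up for the example.

```python
# A toy "expert system in conversation": break the input into parts, compare it
# with previously seen sentences, and look up a canned reply in a small database.
import re
from difflib import SequenceMatcher

# Hypothetical database of known sentences and stock replies (illustrative only).
RESPONSE_DB = {
    "how are you": "I'm doing well, thanks for asking.",
    "what is your name": "I go by many names.",
    "do you like philosophy": "I find questions about the mind especially interesting.",
}

def tokenize(sentence):
    """Break the sentence up into lower-case word parts."""
    return re.findall(r"[a-z']+", sentence.lower())

def respond(sentence, history):
    """Pick the stored sentence most similar to the input and return its reply."""
    words = " ".join(tokenize(sentence))
    best_key, best_score = None, 0.0
    for key in RESPONSE_DB:
        score = SequenceMatcher(None, words, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    history.append(sentence)  # the "past sentences" the system can compare against
    if best_score > 0.5:
        return RESPONSE_DB[best_key]
    return "Could you say more about that?"

history = []
print(respond("How are you today?", history))  # -> a canned, human-like reply
```

However elaborate the real versions become, they remain variations on this lookup-and-respond structure.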

The problem with expert systems, though, is that they aren’t conscious, and never will be, as I mentioned in passing above. Even putting a set of expert systems together, for example one to move, one to talk, etc., won’t create a conscious being. Consciousness does not solve any problem, and thus any system that is designed to solve problems is highly unlikely to be conscious as a side effect (although I admit it is not impossible, just extremely unlikely). Consider how such systems work: they examine one set of inputs and then produce a response, through a combination of algorithms that judge how likely a given response is to result in success. Consciousness, however, is not structured in this way. Although we do receive inputs into our consciousness (through perception), our consciousness is not a response to these inputs; consciousness is constant (well, during a normal period of being awake) and these inputs just happen to be thrown into it. Additionally, consciousness is not goal oriented. Although we may have goals, they are not necessary for us to pick a course of action. I didn’t choose to write this post because I have a specific goal that it will accomplish, nor does it give me more pleasure than writing any other possible post.
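The input-to-response pattern I am describing can be written down schematically. The sketch below is my own illustration under assumed names (expert_step, toy_score); the point is only that the system runs when handed an input, scores candidate responses against a goal, and otherwise does nothing, with no persistent, ongoing activity between calls.

```python
# A schematic, stateless decision step: examine one set of inputs, judge how
# likely each candidate response is to result in success, return the best one.
from typing import Callable, Iterable

def expert_step(observation: str,
                candidates: Iterable[str],
                score: Callable[[str, str], float]) -> str:
    """One decision: pick the response judged most likely to succeed."""
    return max(candidates, key=lambda response: score(observation, response))

# Hypothetical scoring function standing in for the "combination of algorithms".
def toy_score(observation: str, response: str) -> float:
    return float(len(set(observation.split()) & set(response.split())))

best = expert_step("the block is red",
                   ["pick up the red block", "pick up the blue block"],
                   toy_score)
print(best)  # the system runs only for this call; nothing persists between calls
```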

Of course not every AI project is doing the exact same thing. One of the most promising areas of research (for creating artificial consciousness) is neural nets. Neural nets are programs that simulate the behavior of a group of connected neurons, and they can accomplish some quite impressive tasks (such as pattern recognition). Properly structured, they can run constantly, just like consciousness, and when they make a choice it is not by comparing a number of possible choices against a goal result (at least not within a specific trial). Neural nets have their own problems, however. One is that simulating a human brain with them is impossible, simply because there are too many neurons in the human brain (billions of them) for any modern computer to simulate at the speed that would be required. Another problem is that we aren’t sure which structures in the brain are responsible for consciousness, and more importantly why they are responsible for it. Without that information even a giant neural net is unlikely to be conscious.
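For contrast with the stateless step above, here is a minimal sketch of a small, continuously running network of simulated neurons. It is an assumed toy architecture, not a model of any real brain region: the random weight matrix, the tanh update rule, and the occasional stimulus are all choices made for illustration.

```python
# A tiny recurrent network that runs continuously, updating its state at every
# tick whether or not new input arrives, rather than waiting for a query.
import numpy as np

rng = np.random.default_rng(0)
N = 16                                   # toy size; a brain has billions of neurons
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # random connection weights
state = np.zeros(N)

def tick(external_input=None):
    """One update step: each neuron sums its inputs and applies a nonlinearity."""
    global state
    drive = W @ state
    if external_input is not None:       # perception "thrown into" the running net
        drive += external_input
    state = np.tanh(drive)
    return state

for t in range(100):                     # the loop never stops to wait for input
    stimulus = rng.normal(size=N) if t % 25 == 0 else None
    tick(stimulus)
print(state[:4])
```

Scaling something like this up to billions of units, at real-time speeds, is exactly the practical obstacle mentioned above.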

For the reasons I have highlighted above I find it unlikely that artificial consciousness will be created anytime soon. When neuroscientists, psychologists, and philosophers have figured out exactly what consciousness is, what is necessary for it, and why the human mind gives rise to it, then we might begin expecting the first rudimentary conscious programs. Of course consciousness is no guarantee of intelligence, but that is a different problem.
