On Philosophy

July 26, 2006

Why We Won’t See AI Anytime Soon

Filed under: Mind — Peter @ 1:10 am

When people talk about AI there are two fundamentally different kinds of programs they may mean. One type is the expert system. An expert system, as I am defining it here (the term also has a more specific meaning), is a program that is good at solving one kind of task, probably including the ability to learn from its mistakes and other abilities we normally associate with humans rather than computers. The other kind is the artificial consciousness, and it is artificial consciousness that I don’t think we will see soon. An artificial consciousness is aware in basically the same way we are, and thus we would expect it to be able to perform any kind of task (assuming it didn’t get bored with it).

Currently AI researchers are focused on creating expert systems. Expert systems are easier to make noticeable progress on, and they can build on a long history of previous AI research. Even the AI projects that attempt to solve fairly general problems are designed as expert systems. Expert systems, however, are not by their nature conscious. By this I mean that consciousness is not designed as part of the system; the system is composed only of algorithms for solving the task at hand. This of course makes sense if you are trying to solve a specific problem; I am not trying to slight the efforts of the researchers who are working on these systems. For example, consider attempting to create a program to pass the Turing test. I would consider such a program an expert system in conversation. Although such a system hasn’t been successfully created yet, we can already see what kind of program it will be: it will analyze sentences by breaking them up into parts, compare them with past sentences, and use a database and various conversation simulation algorithms to create a human-like response.
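
To make that concrete, here is a minimal sketch in Python of the sort of pipeline I mean. Every name, the stored exchanges, and the matching rule are invented for illustration; this is the shape of such a program, not anyone’s actual design.

    import re

    # Hypothetical sketch: the stored exchanges, tokenizer, and overlap
    # score are all invented for illustration, not a real chatbot design.
    PAST_EXCHANGES = {
        "how are you": "I'm fine, thanks. How are you?",
        "what is your name": "I'm a conversation program.",
    }

    def tokenize(sentence):
        """Break a sentence up into lowercase word parts."""
        return re.findall(r"[a-z']+", sentence.lower())

    def overlap(a, b):
        """Crude similarity: shared words over total words."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / max(len(sa | sb), 1)

    def respond(sentence):
        """Compare the input against past sentences and pick a response."""
        tokens = tokenize(sentence)
        best = max(PAST_EXCHANGES, key=lambda p: overlap(tokens, tokenize(p)))
        if overlap(tokens, tokenize(best)) > 0.3:
            return PAST_EXCHANGES[best]
        return "Tell me more about that."  # canned fallback move

    print(respond("So, what is your name?"))

However cleverly such a program matches and responds, everything it does is in service of the one task of conversing.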

The problem with expert systems, though, is that they aren’t conscious, and never will be, as I mentioned in passing above. Even putting a set of expert systems together, for example one to move, one to talk, etc., won’t create a conscious being. Consciousness does not solve any problem, and thus any system that is designed to solve problems is highly unlikely to be conscious as a side effect (although I admit it is not impossible, just extremely unlikely). Consider how such systems work: they examine one set of inputs and then produce a response, through a combination of algorithms which judge how likely a given response is to result in success. Consciousness, however, is not structured in this way. Although we do receive inputs into our consciousness (through perception), our consciousness is not a response to these inputs; consciousness is constant (well, during a normal period of being awake) and these inputs just happen to be thrown into it. Additionally, consciousness is not goal-oriented. Although we may have goals, they are not necessary for us to pick a course of action. I didn’t choose to write this post because I have a specific goal that it will accomplish, nor does it give me more pleasure than writing any other possible post.
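
As a toy illustration of that input-response structure (the states, actions, and success estimates below are all made up), the whole system reduces to scoring candidate responses against an estimate of success and emitting the winner:

    # Hypothetical sketch: states, actions, and success estimates are
    # invented. The point is the shape: inputs in, scored response out.
    def estimated_success(state, action):
        """Stand-in for the algorithms judging how likely an action is
        to result in success; here just a hand-made lookup table."""
        table = {
            ("obstacle ahead", "turn left"): 0.7,
            ("obstacle ahead", "turn right"): 0.6,
            ("obstacle ahead", "go forward"): 0.1,
        }
        return table.get((state, action), 0.0)

    def choose_response(state, actions):
        """Examine the inputs, score each candidate, emit the best one.
        Nothing runs between calls, and nothing happens without a goal."""
        return max(actions, key=lambda a: estimated_success(state, a))

    print(choose_response("obstacle ahead",
                          ["turn left", "turn right", "go forward"]))

Between one call and the next the system simply isn’t doing anything, which is exactly the disanalogy with consciousness I am pointing at.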

Of course not every AI project is doing the exact same thing. One of the most promising areas of research (for creating artificial consciousness) is neural nets. Neural nets are programs that simulate the behavior of a group of connected neurons, and they can accomplish some quite impressive tasks (such as pattern recognition). Properly structured they can run constantly, just like consciousness, and when they make a choice it is not by comparing a number of options to a goal result (at least not in a specific trial). Neural nets have their own problems, however. One is that simulating a human brain with them is impossible, simply because there are too many neurons in the human brain (billions) for any modern computer to simulate them at the speed that would be required. Another problem is that we aren’t sure what structures in the brain are responsible for consciousness, and more importantly why they are responsible for consciousness. Without that information even a giant neural net is unlikely to be conscious.
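
For a sense of the contrast with the previous sketches, here is a toy recurrent net in Python. The sizes, random weights, and squashing function are arbitrary choices for illustration, nothing like a model of a real brain circuit; what matters is that its activity persists from step to step, with inputs mixed into that ongoing activity rather than triggering a one-off response.

    import numpy as np

    # Toy recurrent net: sizes, weights, and tanh are arbitrary choices
    # for illustration, not a model of any real brain structure.
    rng = np.random.default_rng(0)
    n = 8                                   # eight simulated neurons
    W = rng.normal(scale=0.5, size=(n, n))  # recurrent connection weights
    state = np.zeros(n)                     # activity persists across steps

    def step(state, external_input):
        """One tick: each neuron sums its weighted inputs plus whatever
        perception contributes, then squashes the result."""
        return np.tanh(W @ state + external_input)

    # The loop never waits for a query; an input arriving mid-stream is
    # simply thrown into activity that was already there.
    for t in range(5):
        stimulus = np.zeros(n)
        if t == 2:
            stimulus[0] = 1.0               # a perceptual input at step 2
        state = step(state, stimulus)
        print(t, np.round(state, 2))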

For the reasons I have highlighted above I find it unlikely that artificial consciousness will be created anytime soon. Once neuroscientists, psychologists, and philosophers have figured out exactly what consciousness is, what is necessary for it, and why the human mind gives rise to it, then we might begin to expect the first rudimentary conscious programs. Of course consciousness is no guarantee of intelligence, but that is a different problem.


10 Comments

  1. Good article, but there’s one thing I can’t agree with: if scientists still don’t know what consciousness is, it’s not possible to say that something “is not conscious.” It is kind of a white-box/black-box problem. The human mind is a black box, and we don’t know what’s going on in there. We have models, theories, working systems, but we don’t know the how or why. With machines, since we built them up ourselves, we know how they work. The term “consciousness” itself is still a more metaphysical one, and so it’s mostly applied where we don’t have the impression of knowing how things work.

    But looking at it from a broader perspective, I have no kind of evidence that the standard-model humanoid has any more consciousness than a system that’s capable of reacting in the same way. Well, besides that they solve no problems (just kidding ;).

    p

    Comment by phaylon — July 26, 2006 @ 3:12 am

  2. The point I am making is that we know in general what consciousness is, but we don’t know enough about how it works. I have evidence that people are conscious because I am conscious and other people are constructed in a similar manner, making it highly likely they are conscious.

    Comment by Peter — July 26, 2006 @ 8:48 am

  3. Peter, by your definition, it sounds like consciousness means little more than a ‘quality of being like me’, which, in my view, is roughly what most people mean when they talk about consciousness. But pity the poor computer scientist or engineer who is tasked with building a machine possessing consciousness! What requirements can you give them? If we can construct a machine that communicates like humans and solves problems like humans, there will still be those who say, “It’s not conscious, it’s still just a bucket of bolts. A bunch of mindless algorithms that lead to the appearance of consciousness.” There are bots that can probably fool most people into thinking that they are human, thereby passing the Turing test. Does that make them conscious? Mmmm, no, but it’s hard to explain why the answer is “no” beyond, “It doesn’t feel right.” It’s this quality of ‘not feeling right’ that makes this an impossible task for the AI researcher: it doesn’t matter how powerful the AI is, the goalposts can always be moved further downfield.

    Comment by Shad — July 26, 2006 @ 3:26 pm

  4. I disagree. I think it is probably not that hard to create a program that is conscious; it is just that researchers aren’t really trying to. They have given up on it as too hard, or are simply waiting for it to happen as a consequence of some other development.

    Comment by Peter — July 26, 2006 @ 3:40 pm

  5. I’m sorry, but “other people are constructed in a similar manner” doesn’t seem like good evidence to me. Sure, it raises the probability, if you grant that the “me” and the “you” are similar. But that’s no evidence of consciousness. My point is, if they wrote a program so complex that they couldn’t understand how it works, and it seemed conscious just like a human being, how do you tell if it *is* conscious or only happens to look like it?

    For me, things like consciousness are just more of an external, viewpoint-dependent definition.

    Comment by phaylon — July 27, 2006 @ 2:47 am

  6. If I am conscious and I believe that consciousness is dependent on features of the physical world, then I can conclude that similar systems are likely to be conscious as well.

    Obviously if something is too complex to understand then you can’t pass judgement on it. However, every program written by people is understandable by people, so that situation doesn’t seem likely.

    Comment by Peter — July 27, 2006 @ 2:57 am

  7. It was a philosophical question, but never mind.

    Look at your formulation: if you believe, then you can conclude. That’s not what makes evidence. *Unless* you define it purely based on the recognition of information from a black-boxed thing, be it human, tree, computer program, or marketing employee. That would also be the only way to tell whether a complex enough machine is conscious or not: by altering the definition to “it behaves ‘conscious’, so it must be.”

    Comment by phaylon — July 27, 2006 @ 3:54 am

  8. “However every program written by people is understandable by people, so it doesn’t seem likely.”

    I don’t think this is true. To start with, there are the various obfuscation contests. In addition, no one really understands every part of how a computer works. We all just understand bits and pieces, and a general theory of how it should be working together but not the specifics of each bit. Point being, it’s not hard to imagine a program that’s too complex for people to unravel. In fact, such programs are crufted up all the time.

    Comment by Carl — July 27, 2006 @ 6:47 am

  9. I have gone over this before: “believe” means “I have reason to think” in my writing, unless I am specifically talking about beliefs as beliefs. As for “it behaves ‘conscious’, so it must be”: no, see my first reply here. I have argued before that studying the operation of a system is what is really required to know if it is conscious; I am just saying that behavior does provide some, although not conclusive, evidence. It provides less evidence in the case of a machine because we don’t have as great a reason to think that it is functioning in basically the same way as us in order to produce this behavior.

    Comment by Peter — July 27, 2006 @ 1:11 pm

  10. Yes Carl, every program can be understood; they just aren’t all equally easy. Plus, people use tools to break complicated ones up into easy-to-grasp pieces, allowing them to understand how the entire system works by learning about it in small pieces.

    Comment by Peter — July 27, 2006 @ 1:12 pm

