Because of our experience we tend to associate consciousness with intelligence, and sometimes confuse the two. However, the concepts are not necessarily one and the same. Consider, for example, that intelligence seems to come in a spectrum, with humans ranking as the most intelligent (so far) and the other animals falling somewhere lower on the scale. Consciousness, however, seems to be very much a “you have it or you don’t” kind of thing. Unlike intelligence, it doesn’t make sense to talk about someone being “less conscious”. Perhaps you might argue that you are “less conscious” when you have just woken up, but this is to confuse awareness, and clear-headedness, with consciousness. (Although “consciousness” sometimes does mean awareness in common usage, that is generally not how philosophers use the term, nor is it how it is being used here.)
It’s easy to see how you might encounter consciousness without human intelligence. Consider Nagel’s criterion: an entity is conscious when there is something “it is like to be” that entity. It seems reasonable that there is something it is like to be a monkey or a cat or a dog. However, few of us would think that these animals are as intelligent as people. If you are still unconvinced that intelligence and consciousness can be separated, let me provide a few examples without drawing animals (whose consciousness some may deny) into the picture. Without much debate I think we can agree that memory/learning and reasoning are key to intelligence. Consider, then, patients who have severe memory deficits. A few of these patients have not only lost much of their past memories but are also unable to form new ones, and thus to learn. From their point of view it seems as though every moment is the first moment they have ever experienced, and they themselves may even claim that whatever they did in the past must not have been conscious (since they don’t remember it). However, when interacting with such people it seems obvious that they must be conscious. They talk and react to the world in much the same way that everyone else does, so questioning their consciousness would imply that we could never be sure whether anyone was conscious. Likewise, patients who suffer damage to their reasoning capabilities, or who were born with severe cognitive impairments, demonstrate that the other aspect of intelligence, reasoning, is not necessary for consciousness either.
Conversely, we might encounter intelligent behavior without the presence of consciousness. For example, the ability of a large computer database to retrieve data and make associations between its records might be considered intelligent behavior, but as far as we can tell there is nothing “it is like to be” any current model of computer. You might argue that computers aren’t “intelligent” in the same way that people are, but it is hard to make this distinction without invoking consciousness itself as the criterion, and that would be circular reasoning. Any task that people can do which requires “human intelligence”, such as composing music, proving theorems, playing chess, recognizing objects, etc., can already be done by computers, at least in a limited fashion, and there is no reason that computers won’t become even “smarter” in the future.
This observation, that intelligence can be separate from consciousness, might lead one to question the validity of AI research, since I have presented here the possibility that one could engineer a system with intelligent behavior but no consciousness. Of course there are other benefits to intelligent systems besides the attempt to create conscious minds, but many AI researchers are motivated by the desire to create real consciousness in order to better understand our own minds.
However, we needn’t give up hope on AI research just yet. What we need to do is characterize, in objective terms, when there is “something it is like to be” a system, so that researchers could examine their creations against this criterion to determine whether they were conscious. Of course some may claim that such an objective criterion is impossible, stating that it is part of the nature of consciousness that it must be described subjectively. Even if such a claim is true, however, that is not what the criterion needs to do. Our criterion needs to say only when a system is conscious, not how it is conscious or what the contents of its consciousness are. Although no one has developed such a criterion yet (though it is being worked on), we would expect it to have to do with the way in which information is processed by the system and whether that processing could support a “conscious viewpoint”.
Once we figure out how to determine whether a system is conscious, I suspect that creating consciousness in a laboratory setting may be fairly easy. After all, we think that even simple animals, such as mice, may possess some form of consciousness, so it seems reasonable that researchers could develop equally primitive conscious systems without too much effort. Intelligence, however, is a completely different can of worms (because intelligence includes memory, learning, reasoning, etc.), and thus it is probably for the best that AI researchers are concentrating on this harder problem at the moment.