On Philosophy

March 16, 2007

Solving The Harder Problem

Filed under: General Philosophy — Peter @ 12:00 am

In mathematics it isn’t unusual to have to prove something “harder” in order to prove something simpler. By this I mean that the only obvious proof, or the only short proof, is of something relatively general, and from this general result the specific result that was your original goal can be derived. I think human consciousness is like such a “harder” problem, from which the specific tasks of intelligence, such as language, tool making, and science, are derived.

Evidence to support this hypothesis is easily forthcoming. Just observe that humans (and to a lesser extent chimpanzees) are the only species that demonstrate flexible language, tool use, and so on, where “flexible” means that the capability can be adapted to solve new problems. (Certain animals, like bees, seem to have a hard-wired version of some of these capacities, but compared to humans they are very limited, and so inflexible that they can only solve a specific task.) Obviously the ability to have a flexible language, or the ability to create tools, would generally be a survival advantage, and thus even if humans are the only species with all of these capabilities, perhaps by a lucky accident of evolution, it would be natural to suppose that other species would have one or two of our mental faculties: say, predators that can coordinate complicated tasks linguistically, or termites that can build the best nest for their situation instead of following the same fixed plan.

But this is not what we observe, and it might seem baffling. After all, having the capacity for flexible language use is surely less complicated than general human-like intelligence. Thus we would expect more species to have evolutionarily stumbled upon this simpler capacity than to have arrived at completely general intelligence. (Evolution proceeds by basically random steps. Thus simpler adaptations are more likely to be arrived at, will be arrived at in more species, and will be arrived at over a shorter evolutionary time-scale, than complex ones.) There are three possible conclusions. One is that general intelligence of any kind, even limited to a single capacity, is extremely unlikely, and that humans are simply evolutionarily unexpected so early in the planet’s history. Evolution does proceed by random steps, so this is possible, but, as mentioned, it is also extremely unlikely. Another possibility is that completely general intelligence is in fact simpler, in terms of the mental machinery behind it, than flexible intelligence limited to a single capacity. And the third possibility is that there is only completely general intelligence, which expresses itself in many ways, and hence that flexible intelligence in a single capacity is general intelligence plus extra limitations. For my purposes here the second and third possibilities, the only likely candidates, are essentially the same, implying that general intelligence is easier to arrive at than flexible intelligence limited to a single capacity.

So, returning to my initial analogy, it would seem that in order to create a flexible intellect for a single capacity it is easier to create a completely general intellect. And this could explain why artificial intelligence researchers haven’t actually delivered artificial intelligence, even after fifty years of working at the problem. You see, artificial intelligence researchers try to create programs that display intelligence in some limited capacity, like using language or recognizing shapes. But if it is really the case that general intelligence is in some way simpler, then they are tackling a harder problem than they have to. Nature stumbled upon general intelligence before flexible intelligence designed to tackle a specific task, and hence it is natural to suppose that it would be easier for artificial intelligence researchers to proceed in the same way.

So why don’t they? Well, I think the problem is partly psychological. Creating a completely general intelligence simply seems harder, and artificial intelligence researchers need to get results in a timely manner in order to keep their funding. Thus they tackle what seem like more manageable problems in order to be able to demonstrate some results. But although they are able to create systems that are acceptable at their specific task, they never go about it in a way that seems intelligent or human-like. They don’t learn the way we do, nor do they generalize, or make new connections between seemingly unrelated facts (the basis of our ability to come up with new approaches to problems, and not just recombinations of old approaches). So although computers are able to outperform humans in some tasks that we use our intelligence to solve, like chess, they don’t do so by using a faculty much like our intelligence. And thus the successes of these machines don’t generalize well; we can’t put together these numerous limited systems and arrive at something generally intelligent.

The other part of the problem is that trying to solve the general intelligence problem means tackling sub-problems that receive little attention by themselves. The ability to learn, to abstract, etc.: these are all problems that don’t have well-known general solutions. Instead researchers tackling specific problems use the nature of their limited domain to take shortcuts around these problems, leaving them basically unsolved, and thus getting no closer to a completely general intelligence. (For example, a program that is supposed to interact with people through text has the ability to parse sentences properly hard-wired in, and although it can “learn” new words it doesn’t do so by connecting them to ideas; it does so by observing their syntax and the contexts in which they are appropriate, using hard-wired rules to “learn” these facts. You can’t teach such a program a language completely foreign to English, one that doesn’t share similar syntax, nor can you teach it to reason about the subjects of conversation.)
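
To make that parenthetical example concrete, here is a minimal, hypothetical sketch (in Python, with invented names; it does not describe any actual research system) of the kind of shortcut such a program takes: it “learns” a word only as a syntactic slot plus the context it appeared in, using a hard-wired English sentence pattern, and never connects the word to an idea it could reason about.

```python
# A toy illustration (hypothetical, not a real research system) of a chat
# program that "learns" words only as syntax-plus-context entries.
# It relies on a hard-wired English sentence pattern, so it cannot be
# taught a language with different syntax, nor reason about what it parses.

import re

# Hard-wired rule: only recognizes simple English "subject verb object." sentences.
SENTENCE_PATTERN = re.compile(r"^(\w+)\s+(\w+)\s+(\w+)\.?$")

lexicon = {}  # word -> set of (slot, neighboring word) contexts it was seen in


def learn_from(sentence: str) -> None:
    """'Learn' each word by recording its slot and a neighbor, nothing more."""
    match = SENTENCE_PATTERN.match(sentence.strip())
    if match is None:
        return  # anything outside the hard-wired pattern is simply ignored
    subject, verb, obj = match.groups()
    for word, slot, neighbor in [
        (subject, "subject", verb),
        (verb, "verb", obj),
        (obj, "object", verb),
    ]:
        lexicon.setdefault(word, set()).add((slot, neighbor))


def respond(word: str) -> str:
    """Reply using only the recorded syntactic facts; no underlying ideas."""
    contexts = lexicon.get(word)
    if not contexts:
        return f"I have never seen the word '{word}'."
    slot, neighbor = next(iter(contexts))
    return f"'{word}' appears as a {slot}, near '{neighbor}'."


learn_from("dogs chase cats.")
print(respond("dogs"))   # 'dogs' appears as a subject, near 'chase'.
print(respond("chase"))  # knows syntax, but nothing about what chasing is
```

Such a program can answer questions about where a word has appeared, but because its grammar is fixed and its “knowledge” is purely syntactic, the general problems noted above, learning by connecting words to ideas and abstracting beyond the fixed pattern, never even arise for it.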

Is AI doomed? No. Nor have the limited systems developed already been in vain; after all, they do have their uses. However, if a real artificial intelligence is going to be developed, given the way things are going I suspect it will be by some motivated amateur or a retired professor, someone who isn’t forced to demonstrate results, and who thus can tackle the truly general problems without fear of failure.

3 Comments

  1. This entry is very interesting and original. I thought you were going to go in another direction when I started reading it, but I like the conclusion you came to. I guess you could say that Deep Blue plays chess like an ant builds a nest: by excluding general intelligence, it’s possible to solve specific (complex) tasks, but you can’t do true language processing unless you have general intelligence.

    Comment by Carl — March 16, 2007 @ 4:16 pm

  2. There is one aspect that philosophers do not consider, and that is the Web. There are petabytes out there. Getting an answer to most questions is simply a question of tapping into those petabytes.

    This is why I consider language so important. Natural language is the way in which we pose questions. To index you need not only to match words but to match words in context.

    People have investigated a computer writing its own programs. Actually, if you look on the Web you will find that there is already a program to do more or less anything. The problem is to string programs together and ensure compatibility. Natural language comes in here, as interface descriptions are given in natural language.

    Comment by Ian Parker — March 17, 2007 @ 4:34 am

  3. A few very smart AI people are working on the more general problem. See, especially, Eliezer Yudkowsky, and his colleagues at SIAI.

    A response to your point about general intelligence being easier than narrower cognitive abilities:
    1. What’s easy for humans to design is different from what is easy for evolution to stumble upon. These are two very different design processes.
    2. Some cog sci people think that human general intelligence is a result of quite small changes to our primate ancestors’ brains. One general idea is that the human language ability allows humans to integrate their different cognitive abilities in ways that give them much greater cognitive flexibility than non-linguistic primates. Chimps, after all, do some very smart things (in social alliances, for example), but are limited by lack of communication abilities and by lack of flexibility. It could be that quite minor language improvements get you from chimps to humans, because this ability multiplies the power of various existing cognitive abilities. In this case, doing general intelligence by looking at the example of humans might involve trying to understand the diverse elements of human cognitive ability (memory, language, concept learning, etc.). OTOH, the best way to make a flying machine was not to just copy birds’ flying design (feathers, light body, etc.) but to understand flight in precise physical terms and then build a machine according to these principles. The same approach might work for AI: try to understand human general intelligence and build something that embodies it (this comes to the same thing as your proposal).

    Comment by Owain Evans — April 3, 2007 @ 10:35 pm

