On Philosophy

March 16, 2007

Solving The Harder Problem

Filed under: General Philosophy — Peter @ 12:00 am

In mathematics it isn’t unusual to have to prove something “harder” in order to prove something simpler. By this I mean that the only obvious proof, or the only short proof, is of something relatively general, and from this general result the specific result that was your original goal can be derived. I think human consciousness is like such a “harder” problem, from which the specific tasks of intelligence, like language, tool making, science, etc., are derived.

Evidence to support this hypothesis is easily forthcoming. Just observe that humans (and to a lesser extent chimpanzees) are the only species that demonstrate flexible language, tool use, and so on, where flexible means that the capability can be adapted to solve new problems. (Certain animals, like bees, seem to have a hard-wired version of some of these capacities, but compared to humans they are very limited, and so inflexible that they can only solve a specific task.) Obviously the ability to have a flexible language, or the ability to create tools, would generally be a survival advantage, and thus even if humans are the only species with all of these capabilities, perhaps by a lucky accident of evolution, it would be natural to suppose that other species would have one or two of our mental faculties: say, predators that can coordinate complicated tasks linguistically, or termites that can build the best nest for the situation instead of following the same fixed plan.

But this is not what we observe, and it might seem baffling. After all, having the capacity for flexible language use is surely less complicated than general human-like intelligence. Thus we would expect more species to have evolutionarily stumbled upon this simpler capacity than to have arrived at completely general intelligence. (Evolution proceeds by basically random steps. Thus simpler adaptations are more likely to be arrived at, will be arrived at in more species, and will be arrived at over a shorter evolutionary time-scale, than complex ones.) There are three possible conclusions. One is that general intelligence of any kind, even limited to a single capacity, is extremely unlikely, and that humans are simply evolutionarily unexpected so early in the planet’s history. Evolution does proceed by random steps, so this is possible, but, as mentioned, it is also extremely unlikely. Another possibility is that completely general intelligence is in fact simpler, in terms of the mental machinery behind it, than flexible intelligence limited to a single capacity. And the third possibility is that there is only completely general intelligence, which expresses itself in many ways, and hence that flexible intelligence in a single capacity is general intelligence plus extra limitations. For my purposes here the second and third possibilities, the only likely candidates, are essentially the same, implying that general intelligence is easier to arrive at than flexible intelligence limited to a single capacity.
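To make the parenthetical point about random steps concrete, here is a toy illustration in Python. It is not a model of real evolution: the “adaptations” are arbitrary bit strings and the search is pure random guessing, both assumptions of mine for the sake of the sketch. It simply shows how fast the expected search time grows with the complexity of the target, which is why simple adaptations should appear far more often, and far sooner, than complex ones.

    import random

    def steps_to_find(target, rng):
        """Count how many uniformly random bit strings it takes to hit the target."""
        steps = 0
        while True:
            steps += 1
            guess = [rng.randint(0, 1) for _ in range(len(target))]
            if guess == target:
                return steps

    rng = random.Random(0)
    for n in (4, 8, 12):  # "complexity" of the adaptation, in bits
        target = [rng.randint(0, 1) for _ in range(n)]
        trials = [steps_to_find(target, rng) for _ in range(20)]
        avg = sum(trials) / len(trials)
        print(f"{n}-bit adaptation found after ~{avg:.0f} random steps on average")

Each extra bit roughly doubles the expected search time, so a capacity that is even modestly more complex should take vastly longer to stumble upon, and should therefore be correspondingly rarer among species.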

So, returning to my initial analogy, it would seem that the easiest way to create a flexible intellect for a single capacity is to create a completely general intellect. And this could explain why artificial intelligence researchers haven’t actually delivered artificial intelligence, even after fifty years of working at the problem. You see, artificial intelligence researchers try to create programs that display intelligence in some limited capacity, like using language or recognizing shapes. But if it is really the case that general intelligence is in some way simpler, then they are tackling a harder problem than they have to. Nature stumbled upon general intelligence before flexible intelligence designed to tackle a specific task, and hence it is natural to suppose that it would be easier for artificial intelligence researchers to proceed in the same way.

So why don’t they? Well, I think the problem is partly psychological. Creating a completely general intelligence simply seems harder, and artificial intelligence researchers need to get results in a timely manner in order to keep their funding. Thus they tackle what seem like more manageable problems in order to be able to demonstrate some results. But although they are able to create systems that are acceptable at their specific task, they never go about it in a way that seems intelligent or human-like. They don’t learn the way we do, nor do they generalize, or make new connections between seemingly unrelated facts (the basis of our ability to come up with new approaches to problems, and not just recombinations of old approaches). So although computers are able to outperform humans in some tasks that we use our intelligence to solve, like chess, they don’t do so by using a faculty much like our intelligence. And thus the successes of these machines don’t generalize well; we can’t put together these numerous limited systems and arrive at something generally intelligent.

The other part of the problem is that trying to solve the general intelligence problem means tackling sub-problems that receive little attention by themselves. The ability to learn, to abstract, etc.: these are all problems that don’t have well-known general solutions. Instead researchers tackling specific problems use the nature of their limited domain to take shortcuts around these problems, leaving them basically unsolved, and thus getting no closer to a completely general intelligence. (For example, a program that is supposed to interact with people through text has the ability to parse sentences properly hard-wired in, and although it can “learn” new words it doesn’t do so by connecting them to ideas; it does so by observing their syntax and the contexts in which they are appropriate, using hard-wired rules to “learn” these facts. You can’t teach such a program a language completely foreign to English, one that doesn’t share similar syntax, nor can you teach it to reason about the subjects of conversation.)
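Here is a minimal sketch of the kind of hard-wired “learner” described in that parenthetical. The grammar, lexicon, and all names below are invented for illustration; this is not taken from any actual system.

    # A toy dialogue program: parsing is hard-wired, and "learning" a new
    # word just means filing it under whatever slot the fixed grammar says
    # it must occupy. No connection to ideas is ever made.
    lexicon = {"the": "DET", "dog": "NOUN", "runs": "VERB"}

    def parse(sentence):
        """Hard-wired rule: every sentence must be DET NOUN VERB, in that order."""
        words = sentence.lower().split()
        slots = ["DET", "NOUN", "VERB"]
        if len(words) != len(slots):
            return None  # the rigid grammar cannot adapt
        for word, slot in zip(words, slots):
            if word not in lexicon:
                lexicon[word] = slot  # "learning" by position alone
                print(f"learned that {word!r} is a {slot}")
            elif lexicon[word] != slot:
                return None  # word order outside the fixed rule
        return list(zip(words, slots))

    print(parse("the cat sleeps"))  # "learns" cat and sleeps from their positions
    print(parse("sleeps cat the"))  # a verb-first language: the program simply fails

Such a program can pick up English-like vocabulary indefinitely, but the moment the input stops matching its one built-in sentence pattern it breaks, which is exactly the brittleness described above.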

Is AI doomed? No. Nor have the limited systems developed so far been in vain; after all, they do have their uses. However, if a real artificial intelligence is going to be developed, then given the way things are going I suspect it will be by some motivated amateur or a retired professor, someone who isn’t forced to demonstrate results, and who thus can tackle the truly general problems without fear of failure.
