On Philosophy

December 8, 2007


Filed under: Epistemology — Peter @ 12:00 am

Obviously solving some problems takes longer than others; for example, you can’t put a list of 10 items in order as fast as you can put a list of 2 items in order, simply because you have to look at more items in the first case. But problems can take different lengths of time for reasons besides the fact that they are dealing with different sized inputs. Some problems might be able to be solved within a time strictly proportional to the size of the input, while others might take time proportional to the square or the cube of the size of the input. This kind of difference is called a difference in the computational complexity of the problem. Computational complexity covers a great deal of territory, but one famous unsolved problem within it is whether a certain class of hard to solve problems, designated NP, can be reduced to faster to solve problems, designated P. NP problems are characterized by the fact that candidate solutions can be checked quickly, while the fastest known ways to solve the hardest of them take time proportional to an exponential function, say 2^n. Often this comes about because there are an exponential number of possible solutions, and, while a solution can be checked for correctness relatively quickly, there is no easy way to narrow in on the correct solution. Thus even the best algorithms for solving the problem essentially reduce to trying one solution after another until a correct answer is found or all the possibilities are exhausted. If we could reduce NP problems to P problems it would entail that we have a way to solve these problems in polynomial time (time proportional to some polynomial function of the size of the input), meaning that we aren’t going through the possible solutions one after another in sequence.
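
To make the check-versus-solve asymmetry concrete, here is a small sketch of my own (not from the original discussion) using subset sum, a classic NP problem: verifying a proposed subset takes one pass over it, while the naive solver has to try all 2^n subsets in the worst case.

```python
from itertools import combinations

def verify(numbers, target, subset):
    """Checking a candidate solution is fast: one pass to sum it up."""
    return sum(subset) == target

def brute_force_solve(numbers, target):
    """Solving from scratch tries every subset: up to 2**n candidates."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(numbers, target, subset):
                return list(subset)
    return None  # exhausted all possibilities: no correct answer exists

numbers = [3, 34, 4, 12, 5, 2]
target = 9
solution = brute_force_solve(numbers, target)
print(solution, verify(numbers, target, solution))
```

Doubling the length of `numbers` roughly squares the number of subsets the solver may have to examine, while the cost of `verify` merely doubles; that gap is the whole problem.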

But why should we care whether P=NP? As I was working on this piece I realized that I had gotten caught up in the problem without really paying any attention to its significance. One reason we might care is that there are places where we rely on the fact that certain problems are hard to solve. For example, there are many NP problems where we can quickly generate a solution and then, on the basis of that solution, produce a hard problem to which it is the unique answer. We could then turn that problem into a lock and the answer into the key, and on that basis have confidence in the security of our lock, because we know that coming up with that answer from scratch is NP, and thus hard enough to make breaking in infeasible. I have also heard that designing a protein to perform a specific function is NP because of certain difficulties in determining how proteins fold, although I don’t know enough about that particular field to say for sure. And NP problems pop up every so often in games and puzzles, with the fact that they are NP forcing players to use strategy, and preventing them from simply brute-forcing their way to victory. But, while I can give examples of problems that are NP, it is harder to instill the conviction that proving P=NP, or the reverse, P≠NP, matters, except when it comes to encryption, which is the practical implementation of my key-lock example. While the problem seems interesting from an academic point of view there seem to be few immediate consequences. Sure, if P=NP then many modern encryption schemes would be breakable, but that doesn’t mean we couldn’t replace them with something better. We could, for example, use quantum entanglement to go one better than encryption by preventing eavesdropping on our communications completely (because, given that eavesdroppers would collapse the wave-function, we could in principle detect when someone else was listening in).
If that makes encryption impractical for the common uses we put it to, such as protecting our transactions on the internet, we might simply come up with other schemes to safeguard ourselves, such as personally paying on receipt of goods. On the other hand, if P≠NP then some encryption schemes may be safe, but little else of interest follows. Which means that the question is a purely academic one: while mathematicians may have great fun writing papers about it, it doesn’t matter in any practical sense. But that is true of many mathematical questions, so perhaps it shouldn’t surprise us. In any case my goal here isn’t really to say anything significant about whether P=NP; that was just a misleading title to grab readers.
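
The key-lock idea can be sketched as follows. This is a toy illustration of my own, not a real cryptosystem: generate a random secret (the key) first, derive a hard problem from it (the lock), and note that testing a key is cheap while recovering one by search is exponential.

```python
import random

def make_lock(n=16):
    """Generate a solution first, then publish the problem it answers."""
    numbers = [random.randrange(1, 10**6) for _ in range(n)]
    key = [random.random() < 0.5 for _ in range(n)]       # the secret subset
    target = sum(x for x, keep in zip(numbers, key) if keep)
    lock = (numbers, target)      # safe to publish: finding a key is hard
    return lock, key

def try_key(lock, key):
    """Testing a proposed key against the lock is fast (linear time)."""
    numbers, target = lock
    return sum(x for x, keep in zip(numbers, key) if keep) == target

lock, key = make_lock()
print(try_key(lock, key))   # the legitimate key always opens the lock
```

The asymmetry is the point: whoever made the lock already holds a key, but an attacker who only sees `lock` is stuck searching an exponential space of candidate subsets.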

Anyways, let us return for a moment to whether P=NP. If we can provide a solution to some generic NP problem that comes to a solution in polynomial time then I will take it that we will have shown P=NP. One flippant way to demonstrate that is simply to gesture at Moore’s law, which states that computing power doubles roughly every two years. If Moore’s law really were a law then obviously the speed at which we can compute would rise exponentially, which means that even an NP problem that we attempted to solve just by checking each answer would finish in polynomial time, simply because increasing computer speed would counteract our initial expectations about how long it would take. However, this isn’t really a reduction of NP to P, for two reasons. First, there are limits to how fast computers can get, and so Moore’s law can’t continue indefinitely. And, secondly, P algorithms would also experience an exponential speed-up, and so we might still wonder whether we could make NP problems as fast as the P ones. A more serious possibility regarding how NP problems might be reduced to P problems involves exploiting the quantum nature of the universe to check possible solutions simultaneously. In practical terms it might work as follows: the computer would start by dividing the possible solutions into two equal-sized groups (not by enumerating them, of course, but by making a decision about how to generate the possible solutions, such that we would only generate one half or the other). The computer would then consult a random quantum event that had an equal chance of resulting in either of two outcomes. Depending on the result the computer would ignore one half of the solutions, and would proceed to repeat that process until it was considering only a single possible solution, which it would then check to see whether it was valid.
This might seem like it would result in the computer checking only a single possible solution determined at random, but, according to quantum physics, when the computer consults the random quantum event to decide which half of the solutions to discard it actually enters a superposition of both possibilities, where in one part of the superposition it has discarded the first half and in the other it has discarded the second half. And as more and more quantum events are consulted the superposition becomes more and more complicated, until at the end the superposition is made up of the computer checking every possible solution. Because we reach this state by dividing the possible solutions over and over again this means that even if the number of solutions is exponential we will develop a complicated superposition which checks all of them in only polynomial time, thus reducing NP to P. But, while this sounds great, there is a small problem. While we can make the computer enter this complicated superposition it isn’t obvious how to get the answer out. In fact it is an open question whether getting the answer out is even possible. But, on the other hand, the answer is there in the superposition, whether we can see it or not. I’ll leave whether this counts as a reduction of NP to P up to the reader.
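
The halving procedure above can be simulated classically. In this sketch of my own, n coin flips pick out one of 2^n candidate solutions in just n steps, which shows both why the process is polynomial and why a single classical run checks only one candidate; the quantum version would, in superposition, follow every branch at once.

```python
import random

def halve_to_one_candidate(n_bits):
    """Repeatedly discard half the solution space based on a random event.

    The candidates are the integers in [0, 2**n_bits). After n_bits
    halvings exactly one candidate remains. A classical machine follows
    one random path; the superposition would cover all 2**n_bits paths."""
    low, high = 0, 2 ** n_bits
    for _ in range(n_bits):
        mid = (low + high) // 2
        if random.random() < 0.5:
            high = mid        # keep the first half of the candidates
        else:
            low = mid         # keep the second half
    return low                # the single surviving candidate

candidate = halve_to_one_candidate(20)
print(candidate)   # one candidate out of 2**20, selected in only 20 steps
```

Run classically this is just a slow way to pick one candidate at random; the whole appeal of the quantum version is that all 2^n branches are, in some sense, explored for the price of n.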

So, while the quantum solution was close to what we wanted, it wasn’t quite there, because we couldn’t get the answer out. So allow me to detail a second way to solve NP problems that does yield an answer in polynomial time, this time exploiting biological facts rather than quantum ones. This time our computer will be an artificially constructed bacterium with very complicated DNA. Every time the bacterium divides it also divides the problem space (we assume individual bacteria contain structures that record this information), just as our computer above did when consulting its quantum event. And, when there is only one solution left, if that solution doesn’t work the bacterium dies; otherwise it keeps dividing, but this time making perfect copies of itself. Leave a properly designed bacterium with enough food and space and it will solve an NP problem in polynomial time. This is because bacteria divide at a relatively constant rate, and thus, as with our quantum computer, manage to cover all of the possibilities: the number of bacteria increases exponentially, just as the complexity of the superposition increased exponentially, with each bacterium dealing with different possible solutions. And, unlike the quantum computer, we will be able to extract our solution, because the wrong answers will all die while the correct answers continue to multiply. And thus, after a sufficient length of time, if we reach in and extract a bacterium it will encode a correct answer to the problem. (And, obviously, if there are no bacteria left, that means that there were no correct answers.)
Obviously the space and energy we need increase exponentially with the size of the problem, although we are free to trade space for time (we can make the bacteria take longer to come to a solution in order to have them take up less space), and we might be able to reduce the space requirement significantly in other ways as well, such as with heuristics that allow us to kill off entire “families” of bacterial solutions early. But these things are beside the point when it comes to computational complexity; all that matters is how many steps it takes to get the answer from scratch, and it is obvious that our specially designed bacteria reach the solution in polynomial time. And thus, for bacteria computers, P=NP.
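
The bacterial computer can be simulated directly. This is a sketch of my own under the ideal assumptions of the thought experiment (no copying errors, unlimited food and space), applied to subset sum: each generation every bacterium divides, the two daughters recording opposite choices about the next number, and after n generations the survivors are exactly the encodings that pass the check.

```python
def bacterial_solve(numbers, target):
    """Each 'bacterium' is a list of include/exclude choices made so far.

    Every generation the whole colony divides, so the population doubles
    each step and covers all 2**n choice-sequences after n generations."""
    colony = [[]]                 # one founding bacterium, no choices yet
    for _ in numbers:
        colony = ([cell + [False] for cell in colony] +
                  [cell + [True] for cell in colony])   # every cell divides
    # bacteria whose encoded subset fails the check die off
    survivors = [cell for cell in colony
                 if sum(x for x, keep in zip(numbers, cell) if keep) == target]
    return survivors              # reach in and extract any survivor

survivors = bacterial_solve([3, 34, 4, 12, 5, 2], 9)
print(len(survivors))
```

The loop runs only n times, one generation per number, which is the sense in which the answer arrives in polynomial time; the exponential cost has simply been pushed into the size of `colony`, just as it is pushed into space and food in the thought experiment.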

Obviously these bacterial computers are extremely impractical, but, as with everything else in computational complexity, we consider only ideal situations, and in an ideal situation we can assume that the bacteria don’t make copying errors and have unlimited food and space. What the example of the bacteria computer and the quantum computer really show is not that P=NP, but that how complex solving a problem is depends on the primitive operations we have available to us. And when it comes to real-life computation the primitive operations are determined by the physical nature of the universe, because we can easily imagine universes in which we could extract the answer from our quantum computer, or where our bacteria computer is practical (for example, one in which matter is continuous and the bacteria also halve their size every generation). What this indicates is simply that it is somewhat foolish to try to axiomatically model computational complexity. Sure we can accurately model computational complexity for fixed kinds of machines using an axiom system, but we are simply incapable of determining a priori what the primitive computational operations are. Which just gives one more example of a fact that I established previously, that when it comes to questions regarding reality, such as “can we compute the answer to NP problems in polynomial time?”, answers to them ultimately rest on assumptions about the world which must be supported by observation and experimentation; pure mathematics can tell us nothing by itself.

