Why, really, is the Halting Problem so important?

Question Detail: 

I don't understand why the Halting Problem is so often used to dismiss the possibility of determining whether a program halts. The Wikipedia article correctly explains that a deterministic machine with finite memory will either halt or repeat a previous state. You can use the algorithm that detects whether a linked list loops to implement the Halting Function with O(1) space complexity.
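The cycle-detection idea can be sketched like this (a hedged sketch, not a real halting decider: `step`, `start`, and `is_final` are hypothetical names for a deterministic machine's transition function, initial state, and halting test):

```python
def halts(step, start, is_final):
    """Floyd's tortoise-and-hare cycle detection in O(1) extra space.

    `step` maps a machine state to its successor; `is_final` tests for
    a halting state.  On a *finite*-state deterministic machine the run
    either halts or revisits a state, so this always terminates.
    """
    slow = fast = start
    while True:
        if is_final(fast):
            return True
        fast = step(fast)
        if is_final(fast):
            return True
        fast = step(fast)
        slow = step(slow)
        if slow == fast:        # a state repeated: the machine never halts
            return False
```

For example, with `step = lambda s: (s + 1) % 4` the run cycles through four states, so `halts` returns `True` if any of them is final and `False` otherwise.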

It seems to me that the Halting Problem is nothing more than a so-called "paradox": a self-referencing (or at the very least cyclical) contradiction, in the same vein as the Liar's paradox. The only conclusion it supports is that the Halting Function is susceptible to such malformed questions.

So, excluding paradoxical programs, the Halting Problem is decidable. Why, then, do we hold it up as evidence to the contrary?

Asked By : Brent
Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/32845

Answered By : jmite

Because a lot of really practical problems are the halting problem in disguise. A solution to any of them would solve the halting problem.

You want a compiler that finds the fastest possible machine code for a given program? Actually the halting problem.

You have JavaScript, with some variables at a high security level and some at a low security level. You want to make sure that an attacker can't get at the high-security information. This is also just the halting problem.

You have a parser for your programming language. You change it, but you want to make sure it still parses all the programs it used to. Actually the halting problem.

You have an anti-virus program, and you want to see if it ever executes a malicious instruction. Actually just the halting problem.
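To see why these are the halting problem in disguise, here is a hedged sketch of one such reduction. Every name in it is hypothetical (`runs_malicious_instruction`, `run`, `X` are not real APIs): if a perfect detector for "does this program ever execute instruction X?" existed, we could wrap any program so that X is reached exactly when the program halts.

```python
def would_solve_halting(runs_malicious_instruction):
    """Sketch of a reduction: any detector that decides "does this
    program ever execute instruction X?" also decides halting.
    `runs_malicious_instruction`, `run`, and `X` are all hypothetical."""
    def halts(program, program_input):
        # Build a wrapper that runs `program` to completion and only
        # then executes the flagged instruction X.
        wrapper = f"run({program!r}, {program_input!r})\nX()"
        # The wrapper reaches X() if and only if `program` halts, so the
        # detector's verdict answers the halting question.
        return runs_malicious_instruction(wrapper)
    return halts

# Demo with a mock detector that just records what it was asked to scan.
scanned = []
def mock_detector(source):
    scanned.append(source)
    return True

answer = would_solve_halting(mock_detector)("some_program", "some_input")
```

The same wrapping trick works for the other examples: the "hard" behavior is placed after the arbitrary program, so deciding whether it occurs means deciding whether that program halts.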

As for the Wikipedia example: yes, you could model a modern computer as a finite-state machine. But there are two problems with this.

  1. Every computer would be a different automaton, depending on the exact number of bits of RAM. So this isn't useful for examining a particular piece of code, since the automaton is dependent on the machine on which it can run.

  2. You'd need $2^n$ states if you have $n$ bits of RAM. So for a modern computer with 8 GB of RAM, that's roughly $2^{64{,}000{,}000{,}000}$ states. This is a number so big that Wolfram Alpha doesn't even know how to interpret it. Even $2^{10^9}$ already has about $300{,}000{,}000$ decimal digits. This is clearly much too large to store in a normal computer.
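The digit counts above are easy to check without ever materializing the numbers (a small sketch; the 8 GB figure assumes decimal gigabytes and 8 bits per byte):

```python
import math

def decimal_digits_of_pow2(n_bits):
    """Number of decimal digits in 2**n_bits, computed as
    floor(n_bits * log10(2)) + 1, without building the number itself."""
    return math.floor(n_bits * math.log10(2)) + 1

print(decimal_digits_of_pow2(10**9))        # about 301 million digits
bits_in_8gb = 8 * 10**9 * 8                 # 8 GB of RAM, 8 bits per byte
print(decimal_digits_of_pow2(bits_in_8gb))  # about 19 billion digits
```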

The Halting Problem lets us reason about the relative difficulty of algorithms. It lets us know that some algorithms don't exist, and that sometimes all we can do is guess at a problem and never know whether we've solved it.

If we didn't have the halting problem, we would still be searching for Hilbert's magical algorithm that takes theorems as input and outputs whether they're true or not. Now we know we can stop looking, and we can put our efforts into finding heuristics and second-best methods for solving these problems.

UPDATE: Just to address a couple of issues raised in the comments.

@Tyler Fleming Cloutier: The "nonsensical" problem arises in the proof that the halting problem is undecidable, but what's at the core of undecidability is really having an infinite search space. You're searching for an object with a given property, and if one doesn't exist, there's no way to know when you're done.

The difficulty of a problem can be related to the number of quantifiers it has. To show that there exists ($\exists$) an object with a given property, you can search until you find one; if none exists, there's no way (in general) to know when to stop. To prove that all objects ($\forall$) have a property is hard, but to disprove it you only need to search for a single object without the property. The more alternations there are between $\forall$ and $\exists$, the harder a problem is.
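The single-$\exists$ case corresponds to a semi-decision procedure, sketched below: it halts exactly when a witness exists, and runs forever otherwise.

```python
from itertools import count

def find_witness(has_property):
    """Semi-decision for "there exists n with has_property(n)": search
    forever.  If a witness exists we halt and return it; if none does,
    the loop never ends, and there is no point at which we can say "no"."""
    for n in count():
        if has_property(n):
            return n

print(find_witness(lambda n: n * n > 50))   # → 8, the first n with n^2 > 50
```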

For more on this, see the Arithmetic Hierarchy. Anything above $\Sigma^0_0=\Pi^0_0$ is undecidable, though level 1 is semi-decidable.

It's also possible to show that there are undecidable problems without using a nonsensical paradox like the Halting Problem or the Liar's paradox. A Turing machine can be encoded as a string of bits, i.e. an integer. But a problem can be encoded as a language, i.e. a subset of the integers. It's known that there is no bijection between the set of integers and the set of all subsets of the integers. So there must be some problems (languages) which don't have an associated Turing machine (algorithm).
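The diagonal set behind that counting argument can be written down directly. Here is a finite illustration of Cantor's construction (the listing of subsets is made up for the demo; the real argument ranges over all integers):

```python
def diagonal_set(f, universe):
    """Cantor's diagonal: D = { n in universe : n not in f(n) }.
    For any f mapping indices to sets, D differs from f(n) on the
    membership of n itself, so D is not in the image of f."""
    return {n for n in universe if n not in f(n)}

# Finite illustration: an "enumeration" of a few subsets of {0, 1, 2, 3}.
listing = {0: {0, 1}, 1: set(), 2: {2}, 3: {1, 3}}
universe = range(4)
D = diagonal_set(lambda n: listing[n], universe)
print(D)   # disagrees with listing[n] about n, for every n
```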

@Brent: yes, this shows that halting is decidable for modern computers, but only for one specific machine. If you add a USB drive with more disk space, or the ability to store on a network, or anything else, then the machine has changed and the result no longer holds.

It also has to be said that there will be many cases where the algorithm says "this code will halt" only because the code will run out of memory and fail, and where adding a single extra bit of memory would let the code succeed and give a different result.

The thing is, Turing machines don't have an infinite amount of memory. There's never a point at which an infinite number of symbols has been written to the tape. Instead, a Turing machine has "unbounded" memory, meaning that you can keep getting more memory whenever you need it. Computers are like this too: you can add RAM, or USB sticks, or hard drives, or network storage. Yes, you run out of memory when you run out of atoms in the universe. But unlimited memory is a much more useful model.
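The "unbounded, not infinite" distinction is easy to model: store only the tape cells that have actually been written, so memory grows with use but is finite at every point in time (a sketch using a dictionary as a sparse tape):

```python
class Tape:
    """Sparse Turing-machine tape: only cells that have been written are
    stored, so memory grows on demand -- unbounded, but never infinite."""
    def __init__(self, blank="_"):
        self.cells = {}
        self.blank = blank

    def read(self, pos):
        # Unwritten cells read as the blank symbol.
        return self.cells.get(pos, self.blank)

    def write(self, pos, symbol):
        self.cells[pos] = symbol

tape = Tape()
tape.write(0, "1")
tape.write(1_000_000, "1")      # a far-away cell; still only 2 cells stored
print(tape.read(0), tape.read(5), len(tape.cells))
```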
