Exactly when and why can a Turing machine not solve the halting problem?












27














I perfectly understand and accept the proof that a Turing-machine cannot solve the halting problem.



Indeed, this is not one of those questions that challenges the proof or result.



However, I feel that there is still something left to be explained ... I am still left wondering exactly why the Halting problem is not solvable. Of course, in the sense that there is a proof, there is a why here ... and yet ... I feel that some important other part of the why is missing.



Let me explain:



First, let's assume we just try to solve the 'empty-tape halting problem', and let's assume that the machines we are interested in have only two symbols: 1 and 0. Now, given some machine, will it halt when started on the empty tape (meaning: all 0s), or not?



Now, we know that this problem is not Turing-solvable: if it were, we would get a logical contradiction. OK, I get it. I have no problem with that at all, and like I said, I can follow the proof and I completely agree with it. I perfectly accept that this halting problem is not solvable.



But suppose I would actually try and give it a go: suppose I would try and solve this halting problem. We know the set of all Turing machines is enumerable, so let's just go through them one by one. Now, presumably this enumeration is such that it starts with relatively 'simple' machines. Indeed, I could first list all the ones with 1 internal state, then all the ones with 2, etc., since for any $n$, and with only $2$ symbols, there are only finitely many possible machines.
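To make the finiteness concrete: under one common busy-beaver-style convention (each of the $2n$ table entries — one per state/read-symbol pair — chooses a symbol to write, a direction to move, and a next state among the $n$ states plus a halt state), there are exactly $(4(n+1))^{2n}$ machines with $n$ states and $2$ symbols. That convention is my own assumption here; other encodings give different, but still finite, counts. A few lines of Python tabulate it:

```python
def num_machines(n: int) -> int:
    """Count n-state, 2-symbol Turing machines under one common convention:
    each (state, read-symbol) entry picks a symbol to write (2 options),
    a move direction (2 options), and a next state (n states + halt)."""
    choices_per_entry = 2 * 2 * (n + 1)
    return choices_per_entry ** (2 * n)  # 2n table entries in total

for n in range(1, 5):
    print(n, num_machines(n))  # 1 -> 64, 2 -> 20736, ...
```

So the lists grow fast, but each list is finite, which is all the enumeration argument needs.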



Now, for all the machines with $1$ state, I can easily predict their behavior. Some halt. Some don't. OK, moving on to the machines with $2$ states. With some effort, I can predict the behavior for all of them as well. Cool. On to $3$ ... OK, now it becomes more difficult ... but even here I can do it. I know, because people working on the Busy Beaver problem have figured this out. And I believe they figured it out for $n=4$ as well ...



Interestingly, these researchers are using computers to help them figure out the halting or non-halting behavior of these relatively 'simple' machines. These computer programs are, in a way, trying to solve the halting problem, at least for very small values of $n$. Presumably, these programs 'analyze' and 'break down' the behavior of a machine with $4$ states into something that can be demonstrated to halt or not halt. But of course, we know they can't solve it for all $n$ ... they can't be perfect. And indeed, for $n=5$ the behavior of Turing machines gets so complicated that neither human nor machine has (yet) been able to figure out whether a given machine halts or not.
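The crudest building block of such computer-assisted checks is bounded simulation: run the machine for some step budget and see whether it halts. Note the asymmetry — this can only ever *prove* halting; exhausting the budget proves nothing, which is exactly why the hard cases need cleverer analysis. A minimal sketch (the encoding and names are my own, not taken from the busy beaver literature):

```python
def halts_within(delta, limit):
    """Run a 2-symbol machine from the all-0 tape for at most `limit` steps.

    `delta` maps (state, read_symbol) -> (write_symbol, move, next_state),
    with next_state 'H' meaning halt.  Returns True if the machine halts
    within the budget; False means only "didn't halt yet".
    """
    tape = {}                 # sparse tape; unwritten cells read as 0
    state, pos = 'A', 0
    for _ in range(limit):
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        if state == 'H':
            return True
    return False

# A 1-state machine that halts immediately, and one that runs right forever:
halter = {('A', 0): (1, 'R', 'H'), ('A', 1): (1, 'R', 'H')}
runner = {('A', 0): (1, 'R', 'A'), ('A', 1): (1, 'R', 'A')}
print(halts_within(halter, 100))   # True
print(halts_within(runner, 100))   # False -- but this is not a proof of non-halting
```

Real busy beaver work layers proofs of *non*-halting (loop detection, abstraction arguments) on top of simulation like this; the simulation alone is the part that can never close the gap.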



So ... here is my question: what is it that we're running into that prevents us from figuring out the halting behavior?



The proof of the Halting problem uses self-reference. That is, if a machine could solve the halting problem, then we can show that there must be a machine that halts on its own input (i.e. when given its own program, or its own number in some enumeration, or ...) if and only if it does not ... a contradiction.



OK, but this is when we have a machine with certain powers ... in a way, a machine that can solve the halting problem is a machine with 'too much' power, which leads to a contradiction.



But, the halting-detection machines used by the Busy Beaver researchers don't have too much power. They have too little power. Currently they can't solve $n=5$. OK, so we give them some more power. Maybe at some point they can solve $n=5$ ... but they still can't solve $n=6$. Maybe we can give them enough power to solve $n=6$, or $n=7$ or ....



... so my question is: is there some 'special' value of $n$, say $n=m$, where this has to stop? Where, somehow, the only way to solve $n=m$ is by a machine that has 'too much' power? But why would that be? Is it because of some kind of self-reference? Because the only way to solve $n=m$ is by a machine that, as it tries to analyze and predict the behavior of some machine with $m$ states, cannot break it down to anything 'smaller' than something that requires solving $n=m$ itself? Some kind of 'minimal' value, not unlike the set of minimum requirements that formal systems need to have in order to apply the Gödel construction to them?



One thought I have is that this cannot be: like I said, for any $n$, there are only finitely many machines to consider. As such, the problem is computable; there is some machine that correctly classifies all machines with $n$ states as empty-tape halters or non-halters: it takes a machine as input, goes through its finite list of pre-stored answers, and outputs the corresponding answer. There is a machine that does this for $n=5$, there is one for $n=6$, etc. And none of those machines has too much power: no contradictions here. They all have too little.
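That observation can be made concrete: for each fixed $n$ there exists a decider that is nothing but a finite lookup table, with no analysis inside it at all. The catch is that nothing gives us an effective way to fill in the table's contents. The entries below are hypothetical placeholders of my own, not real results:

```python
# Hypothetical table: maps an encoding of each n-state machine to its true
# empty-tape halting behavior.  Such a finite table exists for every n,
# but nothing tells us how to compute its entries.
ANSWERS_N5 = {
    "TM#0": True,    # placeholder value, not a real classification
    "TM#1": False,   # placeholder value, not a real classification
}

def decide_n5(encoding: str) -> bool:
    # A pure table lookup: a correct decider for a finite problem
    # need not do any analysis whatsoever.
    return ANSWERS_N5[encoding]
```

This is why computability theorists say each bounded problem is trivially decidable "non-uniformly": the decider exists, but we may have no way to know which of the finitely many candidate tables is the right one.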



Then again, these machines don't do any explicit analysis of the machines involved ... they just happen to give the right value. So, maybe there is still some value of $n$ where the approach of actually trying to analyze and predict the behavior of machines starts to break down for some fundamental, again possibly self-referential, reason?



Or: is it that the analytical approach merely gets harder and harder ... but that there is no 'special' point where it becomes, for some theoretical, fundamental reason, too hard? As such, the contradiction only comes from a machine that can do it for all infinitely many values of $n$? Indeed, maybe the problem is that in order to analyze the behavior of all machines with $n$ states, we need a machine that has to have more than $n$ states ... and so while for every $n$, there is a machine $M$ that can perform the analysis, the complexity of $M$ is greater than any of the machines with $n$ states, and hence you'd need another, even more complicated machine $M'$ in order to analyze machines with the kind of complexity that $M$ has ... thus setting up an infinite regress that you can never complete, i.e. there is no one machine that can 'do it all'?



Can someone help me figure out how to think about this?





























  • @MattSamuel The OP is aware of this, as they state in their question. – Noah Schweber, Dec 26 '18 at 22:02

  • @NoahSchweber Thanks Noah ... you are actually just the person I was hoping would see this post ... am I making sense? Do you see what I'm getting at? – Bram28, Dec 26 '18 at 22:02

  • The issue is that having more "power" will not solve the problem. No matter how powerful your machine is, it cannot solve a Busy Beaver instance even for low $n$. All the Turing machine can do is continue running and running, and even after 1000000000000 steps we cannot truly be sure it doesn't halt without a pattern, and many of these Busy Beaver machines don't exhibit any obvious pattern. Only an infinitely powerful Turing machine (an infinite-time Turing machine) can solve the problem. – Matthew Liu, Dec 26 '18 at 22:27

  • Defining when a program $P$ halts is easy: there exists a finite integer $n$ such that running $P$ for $n$ steps yields the $0$ state. Defining what properties of $P$ guarantee that such an $n$ doesn't exist is exactly the purpose of our logical/mathematical theories (PA, ZFC, ...). That there exist two different integers $n$ and $m$ such that $P$ is in the same state after $n$ and $m$ steps is one such property. But when looking at theories formalizing arithmetic we get much more, and we reach the incompleteness problem: we have no way to guarantee our theory is consistent. – reuns, Dec 26 '18 at 22:39

  • @Bram28 It still won't matter how much the machine is able to "reason". Fundamentally speaking, reasoning is no different from calculation speed. Our human brains are no different from machines, other than the fact that our brains are still much more powerful, but we aren't made out of metal, so we make more mistakes. – Matthew Liu, Dec 26 '18 at 23:14
















turing-machines






edited Dec 27 '18 at 2:11 by Namaste

asked Dec 26 '18 at 21:52 by Bram28 (64.5k)







$endgroup$
– Matthew Liu
Dec 26 '18 at 22:27






$begingroup$
The issue is that having more "power" will not solve the problem. No matter how powerful your machine is it cannot solve a busy beaver state even for low n. All the turing machine can do is continue running and running and even after 1000000000000 steps we cannot truely be sure it doesn't halt without a pattern, and many of these busy beaver states don't exhibit any obvious pattern. Only an infinitely powerful turing machine (infinite time turing machine) can solve the problem.
$endgroup$
– Matthew Liu
Dec 26 '18 at 22:27














$begingroup$
Defining how a program $P$ halts is easy : there exists a finite integer $n$ such that running $P$ for $n$ steps yields the $0$ state. Defining what properties of $P$ guaranty that $n$ doesn't exist is exactly the purpose of our logic/mathematical theories (PA, ZFC...). That there exists two different integers such that $P$ is in the same state after $n$ and $m$ steps is one of those properties. But when looking at theories formalizing the arithmetic, we get much more, and we reach the incompleteness problem : we have no way to guaranty our theory is consistent.
$endgroup$
– reuns
Dec 26 '18 at 22:39






$begingroup$
Defining how a program $P$ halts is easy : there exists a finite integer $n$ such that running $P$ for $n$ steps yields the $0$ state. Defining what properties of $P$ guaranty that $n$ doesn't exist is exactly the purpose of our logic/mathematical theories (PA, ZFC...). That there exists two different integers such that $P$ is in the same state after $n$ and $m$ steps is one of those properties. But when looking at theories formalizing the arithmetic, we get much more, and we reach the incompleteness problem : we have no way to guaranty our theory is consistent.
$endgroup$
– reuns
Dec 26 '18 at 22:39






2




2




$begingroup$
@Bram28 Still won't matter how much the machine is able to "reason". Fundamentally speaking reasoning is no different from calculation speed. Our human brains are no different from machines other than the fact that our brains are still much more powerful but we aren't made out of metal so we make more mistakes.
$endgroup$
– Matthew Liu
Dec 26 '18 at 23:14




$begingroup$
@Bram28 Still won't matter how much the machine is able to "reason". Fundamentally speaking reasoning is no different from calculation speed. Our human brains are no different from machines other than the fact that our brains are still much more powerful but we aren't made out of metal so we make more mistakes.
$endgroup$
– Matthew Liu
Dec 26 '18 at 23:14










5 Answers


















33












$begingroup$

I think the following portion from your question is most important:




But, the halting-detection machines used by the Busy Beaver researchers don't have too much power. They have too little power. Currently they can't solve $n=5$. OK, so we give them some more power. Maybe at some point they can solve $n=5$ ... but they still can't solve $n=6$. Maybe we can give them enough power to solve $n=6$, or $n=7$



or ....



... so my question is: is there some 'special' value of $n$, say $n=m$ where this has to stop. Where, somehow, the only way to solve $n=m$, is by a machine that has 'too much' power? But why would that be?




Solving $\Sigma(5)$ isn't simply a matter of giving Turing machines "more power." The reason we don't know $\Sigma(5)$ right now is that there are 5-state Turing machines which mathematicians believe will never halt, but have not been able to prove will never halt. The problem is not as simple as just enumerating all of the 5-state Turing machines, since once you've done that, you still need to figure out whether each Turing machine halts or not, which, as you know, is not a trivial problem. We've been able to do this for 4-state Turing machines, but we don't know yet if we can do it for 5-state Turing machines, because there may very well be 5-state Turing machines which we can never prove to be halting/non-halting within the standard system of mathematics (that is, ZFC).



Now, you've asked what the magic number is: What is the magic number $n=m$ such that we will never be able to solve for $\Sigma(n)$? As I've said above, that magic number could very well be $n=5$, but that hasn't been proven yet. However, mathematicians have proven that $\Sigma(1919)$ is indeed unknowable within ZFC. Before explaining this, I think it might be helpful to quote your question again:




Then again, these machine don't do any explicit analysis of the machines involved ... they just happen to give the right value. So, maybe there is still some value of $n$ where the approach of actually trying to analyze and predict the behavior of machine starts to break down for some fundamental, again possibly self-referential, reason?




My answer to this question is yes: there is a 1919-state Turing machine such that predicting whether the machine halts is fundamentally unsolvable within our system of mathematics. The way mathematicians proved $\Sigma(1919)$ is unsolvable is by constructing a 1919-state Turing machine $M$ which halts if there is a contradiction within ZFC and never halts if ZFC is consistent. However, there is no way to prove ZFC consistent using the axioms of ZFC, because of Gödel's Second Incompleteness Theorem. This means ZFC can never prove that machine $M$ does not halt, since such a proof would constitute a proof that ZFC is consistent. Thus, mathematicians working in ZFC cannot determine whether machine $M$ halts, which means that the busy-beaver problem for 1919-state Turing machines is unsolvable.
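Conceptually, the machine $M$ is just a proof-search loop. Here is a minimal Python sketch of the idea; the checker `is_proof_of_contradiction` is a hypothetical stand-in, since encoding a real ZFC proof checker is exactly the part that takes 1919 states:

```python
from itertools import count, product

def inconsistency_searcher(is_proof_of_contradiction, alphabet="01"):
    """Enumerate every finite string in length order and halt as soon as
    one passes the (hypothetical) checker.  Run against a genuine ZFC
    proof checker, this loop halts iff ZFC is inconsistent -- so proving
    it never halts would amount to proving Con(ZFC)."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            s = "".join(chars)
            if is_proof_of_contradiction(s):
                return s  # a proof of a contradiction was found: halt
```

With a dummy checker that accepts the string `"11"`, the searcher halts and returns it; with a checker that never accepts, the loop runs forever, and whether you can *prove* that is exactly the sticking point.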



I hope this gives you some more insight into why $\Sigma(n)$ is solvable for small values of $n$ but unsolvable for larger values of $n$. Anyway, I am certainly not an expert in the theory of computation, so if someone would like to add an alternate explanation or clarifying comments to my answer, feel free. Thanks!

















$endgroup$









  • 1




    $begingroup$
    Can someone find a theory $T$ that is "obviously consistent" and proves an explicit upper bound for $\Sigma(1919)$? If $T$ and ZFC are both consistent, then $T$ is stronger than ZFC, since it proves that ZFC is consistent.
    $endgroup$
    – reuns
    Dec 26 '18 at 23:03








  • 3




    $begingroup$
    @Bram28 Yes, if there is a contradiction in ZFC, this Turing machine will find it and halt. On the other hand, if the Turing machine halts, then it has found a contradiction in ZFC and ZFC is inconsistent. Therefore, this is a biconditional statement.
    $endgroup$
    – Noble Mushtak
    Dec 26 '18 at 23:06






  • 1




    $begingroup$
    @Bram28 And continuing Noble's comment, this equivalence - appropriately phrased in the language of arithmetic - is provable in PA, and indeed much less.
    $endgroup$
    – Noah Schweber
    Dec 26 '18 at 23:18






  • 4




    $begingroup$
    @NobleMushtak: NBG is known to be equiconsistent with ZFC, so looking for an inconsistency of either of them is basically the same task. (I don't know for sure, but I could imagine the optimizations necessary for cramming it into 1919 states involve much more drastic reformulations of the system anyway). NF versus ZFC would be a different matter ...
    $endgroup$
    – Henning Makholm
    Dec 27 '18 at 14:52








  • 2




    $begingroup$
    @Bram28 : Currently 1919 is special. But, in 2016, this special number was 7918. (I know the paper I cite had led to work condensing the machine to fewer states and that there was success. I do not know if the 1919 result follows from that chain of work or possibly a different chain of work.) So, tomorrow, someone could show that the TM can be condensed to a 500 state machine, or to a 5 state machine? This is currently unknown. It may be unknowable how small a machine does this computation -- what is the intrinsic entropy of a computation?
    $endgroup$
    – Eric Towers
    Dec 27 '18 at 21:25



















11












$begingroup$

Since, as you observe, any finite amount of the halting problem - that is, any set of the form $H\upharpoonright s:=\{x<s:\Phi_x(x)\downarrow\}$ - is computable, there isn't any particular sharp impossibility point. There are some interesting "phase transitions" which appear relevant - e.g. at a certain point we hit our first universal machine - but I don't know of any which have any claim to be the point where the halting problem becomes noncomputable.
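The point that each finite restriction is computable can be made concrete: once the finitely many answers exist, a lookup table decides them. A small Python sketch (the hard, non-uniform part is *obtaining* the table, not using it):

```python
def restriction_decider(halting_bits):
    """Given the finite tuple of answers (bit x = does machine x halt on
    input x, for x < s), return a total, always-terminating decider for
    the halting problem restricted below s.  Such a decider exists for
    every s; what fails is that no single algorithm hands us the right
    table for all s."""
    table = tuple(halting_bits)
    def decider(x):
        return table[x]
    return decider
```

Usage: `restriction_decider([True, False, True])` decides the first three instances instantly, provided someone already did the (possibly ZFC-transcending) work of computing those three bits.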



On the other hand, as you also observe, the way in which the $H\upharpoonright s$'s are computable is non-uniform (otherwise, the whole halting problem would be computable). So we can try to measure this "ongoing complexity." There are, to my mind, two natural approaches:




  • Given $n$, how far up the "hierarchy of theories" - from fragments of PA, to fragments of $Z_2$, to fragments of ZFC, to ZFC + large cardinals - do we have to go to get a theory which can decide whether each of the first $n$ Turing machines halts on input $0$?


  • Given $n$, how complicated is the finite string consisting of the first $n$ bits of the characteristic function of the halting problem (call this string "$\eta_n$")?



Of these two approaches, the first has some draw which the second lacks, but it's also far more vague and limited. The second winds up leading to a very rich theory, namely the theory of Kolmogorov complexity (and its attendant notions, like algorithmic randomness), and also partially subsumes the former question. So I think that's my answer to your question: ultimately you won't find a sharp transition point, but the study of the dynamic behavior of the complexity of the halting problem is quite rewarding.















$endgroup$













  • $begingroup$
    Thanks Noah! ... a little over my head .. though I like your "phase transitions" notion .... from one of the other answers I gather that indeed for some number of states we "transit" into the consistency of ZFC ... which is certainly 'special' real-life-practicing-mathematics in some sense ... but maybe not 'special' in the sense I had in mind?
    $endgroup$
    – Bram28
    Dec 26 '18 at 23:17










  • $begingroup$
    So thinking about this a little more .... "transitioning" into the consistency-of-ZFC space is pretty 'special' for sure ... and yet it doesn't feel 'special' enough to me: The proof of the unsolvability of the Halting problem seems so deeply logical that I was looking for a likewise more deeply logical barrier that we'd run into as we contemplate more and more complicated TMs ... the fact that ZFC plays such a prominent role in mathematics makes it 'special' ... but I feel not 'special' enough. So, maybe it's back to the thought that there is no deeply logical 'special' point.
    $endgroup$
    – Bram28
    Dec 27 '18 at 1:46












  • $begingroup$
    I think your first sentence is a bit misleading. Any set $H\restriction s$ is finite and hence computable, but that doesn't mean that we can actually compute it, because we don't know what program computes this particular finite set, because we don't know what set it is. There are indeed concrete finite values of $s$, like 1919, for which we definitely cannot prove that it is equal to anything in particular. This is an "impossibility point" by any reasonable measure. (If you are working in a different system than ZFC the point may be elsewhere, but it will be there regardless.)
    $endgroup$
    – Mario Carneiro
    Dec 28 '18 at 6:35










  • $begingroup$
    @MarioCarneiro I disagree - we can compute it, we just don't know what program will do the job. More substantively, each specific theory hits a wall somewhere, but I don't see any reason to argue that any one theory perfectly captures the "right" situation (re: "any reasonable measure"); so the right picture is still a "sliding scale."
    $endgroup$
    – Noah Schweber
    Dec 28 '18 at 17:59



















6












$begingroup$

For example, you can construct a Turing machine (I don't know how many states you need, but it is a finite number) that searches for a counterexample to Goldbach's conjecture, i.e. an even number $> 2$ that is not the sum of two primes, going through the even numbers one by one. For each even number $n > 2$ it checks each $k$ from $2$ to $n/2$; if $k$ is prime and $n-k$ is prime, it moves on to the next $n$, but if it gets through all the $k$ without finding such a pair, it halts. Thus this Turing machine halts if and only if Goldbach's conjecture is false. To decide whether it will halt, your analysis would need to decide Goldbach's conjecture.
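That search is easy to sketch in Python (with an artificial `limit` added so it can actually be run; the machine the answer describes is the unbounded version, which halts only on a counterexample):

```python
def is_prime(n: int) -> bool:
    """Trial division -- fine for a demonstration."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample(limit: int):
    """Scan the even numbers 4, 6, ..., limit; return the first one that
    is not a sum of two primes, or None if none is found.  Removing the
    limit gives the searcher described above: it halts iff Goldbach's
    conjecture is false."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(k) and is_prime(n - k) for k in range(2, n // 2 + 1)):
            return n  # counterexample found: halt
    return None
```

Running it with any limit tried so far returns `None`, which is evidence for the conjecture but, of course, no proof.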



And when you're done with that one, there are lots of other conjectures you could check with a Turing machine.















$endgroup$













  • $begingroup$
    Thanks! I am guessing we can program something like this with a Turing-machines with close to 100 states ... so that gives us some measure (or at least a kind of minimum) of 'how difficult' it is to figure out the halting behavior of machines with that number of states. One of the other answers does something similar with the consistency of ZFC, which has been shown to take 1919 states at the most. Interesting!
    $endgroup$
    – Bram28
    Dec 26 '18 at 23:13










  • $begingroup$
    Goldbach's conjecture is usually an example of what the halting problem is not. We don't know a proof of the conjecture, but one may eventually be found (or we might find a proof that it isn't true); it is not an intrinsically unsolvable problem (whereas we have a diagonalization proof that you can't build a machine that solves the halting problem).
    $endgroup$
    – pqnet
    Dec 29 '18 at 7:25



















4












$begingroup$

A potential Busy Beaver has three possibilities:




  1. It is easy to show it stops

  2. It is easy to show it never stops

  3. It is difficult to show either case


Number 1 either stops quickly, or it has a repeating pattern with an eventual flaw that causes it to stop.



Number 2 has a repeating pattern and never has a flaw, causing it to continue forever.



Number 3 is the interesting case. Its behaviour is chaotic. It might have short-term patterns, but it has no long-term patterns. Its future states can be predicted a short way ahead without actually running the machine, but there comes a point where its behaviour can no longer be predicted without running it. At that point you need a machine capable of emulating a Turing machine, but faster. However, you will hit the same problem with this hypothetical new machine too, as long as it has finite power, which all real-world machines have.



If you improve your analysis of Busy Beavers, you can decide whether certain candidates are actually case 1 or case 2. We can think of it as a spectrum of behaviour with stopping at 0, running forever at 2, and undecidability at 1. To start with we know that 0 to 0.5 will stop (case 1) and 1.5 to 2 will run forever (case 2), with 0.5 to 1.5 being undecidable without running it (case 3). We improve our understanding of Busy Beavers. Now we know that 0 to 0.95 will stop and 1.05 to 2 will run forever, with 0.95 to 1.05 being undecidable.



At some point, there's no way to predict without running the machine whether it will halt or not. The only way to determine its behaviour is to run it, and if it stops, it usually gives you no insight you can use for other potential Busy Beavers. If it doesn't stop after $10^6$ cycles, you can try $10^7$, then $10^8$, and so on. At some point you give up without knowing whether it will stop if given enough time.
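That "run it and give up" strategy can be sketched directly as a step-bounded simulator. Here is a minimal Python version, using the known 2-state, 2-symbol busy beaver champion as the example machine; note that a `None` result proves nothing, which is exactly the problem:

```python
from collections import defaultdict

def run_tm(delta, max_steps, start="A"):
    """Simulate a 2-symbol Turing machine on a blank tape for at most
    max_steps steps.  delta maps (state, symbol) to
    (symbol_to_write, head_move, next_state); next_state "H" halts.
    Returns the halting step, or None if the budget ran out."""
    tape = defaultdict(int)  # blank tape: all zeros
    pos, state = 0, start
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape[pos])]
        tape[pos] = write
        pos += move
        if state == "H":
            return step
    return None  # gave up -- says nothing about eventual halting

# The 2-state busy beaver champion: halts after 6 steps on a blank tape.
BB2 = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
       ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}
```

`run_tm(BB2, 100)` returns 6, while `run_tm(BB2, 3)` returns `None`: with too small a budget, a halting machine is indistinguishable from a non-halting one.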



The problem is similar to drawing a Mandelbrot set. Certain points "escape" to infinity quickly, others settle into a repeating pattern quickly. The points on the boundary of the Mandelbrot set are difficult to predict either way without calculating them. There are methods to make the decision easier, but there will always be chaotic behaviour between the two easily detectable outcomes that can't be predicted.



Hopefully I've answered "why". Then "when" will be determined by your understanding of the specific problem you're trying to solve and the power of the machine you're using. Over time we can eat into the chaos and make it decidable, but it grows much faster than we can ever solve.















$endgroup$













  • $begingroup$
    So basically a universal computer is a kind of "useful discrete chaos", then? Interesting, but seems to make sense on a certain level.
    $endgroup$
    – The_Sympathizer
    Dec 27 '18 at 10:34



















0












$begingroup$

I would like to offer an alternative way of thinking about the halting problem, which helped me understand better why the halting problem is not computable, or rather, why in general there are functions which are not computable.



In his famous paper on computability, Turing mentions that he is going to prove that there are real numbers which are not computable.
Computable numbers are defined as those whose decimals are calculable by finite means, or in other words, whose decimals can be computed
by a machine.



He also mentions that it is equally easy to define and investigate computable functions instead of computable numbers, and that is what I would like to show.
I will briefly summarize the lecture linked here (https://www.youtube.com/watch?v=9JeIG_CsgvI&index=14&list=FLIknGRIW8gX2yutAMeHVEKw) because I think it is worth it:
it is the first part of a lecture which proves Gödel's First Incompleteness Theorem. Credits go to "UC Davis Academics", of course.



Let's define a function $f$ from the non-negative integers to the set $\{0,1\}$.
We let $Q$ be the set of all such functions. It is clear that $Q$ is infinite (indeed, the argument below essentially shows that it is uncountably infinite).



A function $f$ in $Q$ is defined to be computable if there is a computer program $P$ (loosely speaking, a Turing machine)
that can take any non-negative integer $x$ and output $f(x)$.
We add the constraints that $P$ must always terminate in finite time and that $P$ must be correct, in other words,
output the correct value of $f$ for all non-negative integers.



We let $A$ be the set of all functions in $Q$ which are computable.
We will show that there exists a function in $Q$ that is not in $A$, i.e. that there exist uncomputable functions.



We define a program as a finite series of statements over some finite alphabet $\alpha$; in other words, it can be thought of as a single finite string. Suppose that the
language $L$ we are using to express our program is Turing complete, i.e. it can be used to simulate any Turing machine.



We can start enumerating the strings expressible in $\alpha$ in order of length. Strings of the same length are taken in an
alphabetical order that can be defined arbitrarily on $\alpha$.



You could indeed write a program $T$ to enumerate all those strings expressible in $\alpha$, so that for any string $s$ expressible in $\alpha$,
$T$ will generate $s$ in finite time.



Therefore you have a list $Z$ of strings in $\alpha$, ordered by length.
Some of those strings in $Z$ will be legitimate programs in our chosen programming language $L$. Indeed, all possible programs will be in $Z$; in particular,
the programs that compute the functions in $A$ must all be there (by the definition of a computable function), and they are ordered in $Z$.



Let's call $H$ this ordered list of the functions in $A$: $\{f_{1}, f_{2},\ldots\}$. Now apply the diagonalization method by defining
$$g(x)=1-f_{x}(x).$$ It is easy to see that $g$ is in $Q$; however, $g$ differs from each $f_{i}$ at input $i$, so $g$ is not in $A$, i.e. $g$ is not computable, and we are done.
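The diagonal step can be mimicked in Python for any finite list of 0/1-valued functions, purely as an illustration of why $g$ cannot appear anywhere in the enumeration:

```python
def diagonalize(funcs):
    """Given a list of {0,1}-valued functions, return the function
    g(x) = 1 - funcs[x](x).  By construction g disagrees with the i-th
    function at input i, so g cannot equal any function in the list."""
    return lambda x: 1 - funcs[x](x)

# A toy "enumeration" of three functions:
fs = [lambda x: 0, lambda x: 1, lambda x: x % 2]
g = diagonalize(fs)
```

Here `g` disagrees with `fs[i]` at input `i` for every `i`; in the real argument the list is the (countable) enumeration of all computable functions, and the same disagreement shows $g$ is not computable.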

















$endgroup$














    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "69"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3053345%2fexactly-when-and-why-can-a-turing-machine-not-solve-the-halting-problem%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    33












    $begingroup$

    I think the following portion from your question is most important:




    But, the halting-detection machines used by the Busy Beaver researchers don't have too much power. They have too little power. Currently they can't solve $n=5$. OK, so we give them some more power. Maybe at some point they can solve $n=5$ ... but they still can't solve $n=6$. Maybe we can give them enough power to solve $n=6$, or $n=7$



    or ....



    ... so my question is: is there some 'special' value of $n$, say $n=m$ where this has to stop. Where, somehow, the only way to solve $n=m$, is by a machine that has 'too much' power? But why would that be?




    The solution to solving $Sigma(5)$ isn't simply giving Turing machines "more power." The reason we don't know $Sigma(5)$ right now is because there are 5-state Turing machines which mathematicians believe will never halt, but have not been able to prove will never halt. The problem is not as simple as just enumerating through all of the 5-state Turing machines since once you've done that, you still need to figure out if the Turing machine halts or not, which, as you know, is not a trivial problem. We've been able to do this for 4-state Turing machines, but we don't know yet if we can do this for 5-state Turing machines because there may very well be 5-state Turing machines which we can never prove to be halting/non-halting within the system of classical mathematics (that is, ZFC).



    Now, you've asked what is the magic number is: What is the magic number $n=m$ such that we will never be able to solve for $Sigma(n)$? As I've said above, that magic number could very well be $n=5$, but that hasn't been proven yet. However, mathematicians have proven that $Sigma(1919)$ is indeed un-knowable within ZFC. Before explaining this, I think it might be helpful to quote your question again:




    Then again, these machine don't do any explicit analysis of the machines involved ... they just happen to give the right value. So, maybe there is still some value of $n$ where the approach of actually trying to analyze and predict the behavior of machine starts to break down for some fundamental, again possibly self-referential, reason?




    My answer to this question is yes, there is a 1919-state Turing machine such that trying to predict if the machine halts would be fundamentally unsolvable within our system of mathematics. See, the way mathematicians proved $Sigma(1919)$ is unsolvable is by constructing a 1919-state Turing machine $M$ which halts if there is a contradiction within ZFC and never halts if ZFC is consistent. However, there's no way to prove ZFC is consistent using the axioms of ZFC because of Godel's Second Incompleteness Theorems. This means we can never use the ZFC axioms of mathematics to prove machine $M$ ever halts or not because doing so would constitute a proof that ZFC is consistent. Thus, mathematicians can't predict if machine $M$ halts or not because of Godel's Incompleteness Theorem, which means that the busy-beaver problem for 1919-state Turing machines is unsolvable.



    I hope this gives you some more insight into why $Sigma(n)$ is solvable for small values of $n$ but unsolvable for larger values of $n$. Anyway, I am certainly not an expert in theory of computation, so if someone would like to add an alternate explanation/clarifying comments to my answer, feel free. Thanks!






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      Someone can find a theory T that is "obviously consistent" and does prove an explicit upper bound for $Sigma(1919)$ ? If T,ZFC both are consistent then T is stronger than ZFC since it will prove ZFC is consistent.
      $endgroup$
      – reuns
      Dec 26 '18 at 23:03








    • 3




      $begingroup$
      @Bram28 Yes, if there is a contradiction in ZFC, this Turing machine will find it and halt. On the other hand, if the Turing machine halts, then it has found a contradiction in ZFC and ZFC is inconsistent. Therefore, this is a biconditional statement.
      $endgroup$
      – Noble Mushtak
      Dec 26 '18 at 23:06






    • 1




      $begingroup$
      @Bram28 And continuing Noble's comment, this equivalence - appropriately phrased in the language of arithmetic - is provable in PA, and indeed much less.
      $endgroup$
      – Noah Schweber
      Dec 26 '18 at 23:18






    • 4




      $begingroup$
      @NobleMushtak: NBG is known to be equiconsistent with ZFC, so looking for an inconsistency of either of them is basically the same task. (I don't know for sure, but I could imagine the optimizations necessary for cramming it into 1919 states involve much more drastic reformulations of the system anyway). NF versus ZFC would be a different matter ...
      $endgroup$
      – Henning Makholm
      Dec 27 '18 at 14:52








    • 2




      $begingroup$
      @Bram28 : Currently 1919 is special. But, in 2016, this special number was 7918. (I know the paper I cite had led to work condensing the machine to fewer states and that there was success. I do not know if the 1919 result follows from that chain of work or possibly a different chain of work.) So, tomorrow, someone could show that the TM can be condensed to a 500 state machine, or to a 5 state machine? This is currently unknown. It may be unknowable how small a machine does this computation -- what is the intrinsic entropy of a computation?
      $endgroup$
      – Eric Towers
      Dec 27 '18 at 21:25
















    33












    $begingroup$

    I think the following portion from your question is most important:




    But, the halting-detection machines used by the Busy Beaver researchers don't have too much power. They have too little power. Currently they can't solve $n=5$. OK, so we give them some more power. Maybe at some point they can solve $n=5$ ... but they still can't solve $n=6$. Maybe we can give them enough power to solve $n=6$, or $n=7$



    or ....



    ... so my question is: is there some 'special' value of $n$, say $n=m$ where this has to stop. Where, somehow, the only way to solve $n=m$, is by a machine that has 'too much' power? But why would that be?




    The solution to solving $Sigma(5)$ isn't simply giving Turing machines "more power." The reason we don't know $Sigma(5)$ right now is because there are 5-state Turing machines which mathematicians believe will never halt, but have not been able to prove will never halt. The problem is not as simple as just enumerating through all of the 5-state Turing machines since once you've done that, you still need to figure out if the Turing machine halts or not, which, as you know, is not a trivial problem. We've been able to do this for 4-state Turing machines, but we don't know yet if we can do this for 5-state Turing machines because there may very well be 5-state Turing machines which we can never prove to be halting/non-halting within the system of classical mathematics (that is, ZFC).



    Now, you've asked what is the magic number is: What is the magic number $n=m$ such that we will never be able to solve for $Sigma(n)$? As I've said above, that magic number could very well be $n=5$, but that hasn't been proven yet. However, mathematicians have proven that $Sigma(1919)$ is indeed un-knowable within ZFC. Before explaining this, I think it might be helpful to quote your question again:




    Then again, these machine don't do any explicit analysis of the machines involved ... they just happen to give the right value. So, maybe there is still some value of $n$ where the approach of actually trying to analyze and predict the behavior of machine starts to break down for some fundamental, again possibly self-referential, reason?




    My answer to this question is yes: there is a 1919-state Turing machine for which predicting whether it halts is fundamentally unsolvable within our system of mathematics. The way mathematicians proved $\Sigma(1919)$ unknowable is by constructing a 1919-state Turing machine $M$ which halts if there is a contradiction within ZFC and never halts if ZFC is consistent. However, there is no way to prove ZFC consistent using the axioms of ZFC themselves, by Gödel's Second Incompleteness Theorem. This means we can never use the ZFC axioms to prove whether machine $M$ halts, because a proof that $M$ never halts would constitute a proof that ZFC is consistent. Thus, mathematicians cannot predict whether machine $M$ halts, which means that the busy-beaver problem for 1919-state Turing machines is unsolvable within ZFC.
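    The machine $M$ is, at heart, a proof-search loop: it halts exactly when the thing it searches for exists. The same "halts iff a counterexample exists" pattern can be sketched with any statement that is checkable one case at a time. Here is an analogy using the Goldbach conjecture in place of ZFC-consistency (this is only an illustration of the pattern, not the actual 1919-state construction):

    ```python
    # "Halts iff a counterexample exists": the pattern behind machine M,
    # illustrated with the Goldbach conjecture instead of ZFC proof-search.
    # Deciding whether this loop halts is exactly deciding the conjecture.

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def violates_goldbach(n):
        """True if the even number n is NOT a sum of two primes."""
        return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    def goldbach_searcher(limit=None):
        """Halts (returns n) iff some even n >= 4 violates Goldbach.
        The `limit` cutoff exists only so we can call this in finite time."""
        n = 4
        while limit is None or n <= limit:
            if violates_goldbach(n):
                return n   # counterexample found: halt
            n += 2
        return None        # no counterexample up to the cutoff; no conclusion

    print(goldbach_searcher(limit=1000))  # prints None: Goldbach holds up to 1000
    ```

    Machine $M$ does the analogous thing with proofs: it enumerates candidate ZFC derivations and halts if one of them derives a contradiction. A proof that the loop never halts would be a consistency proof, which Gödel's theorem puts out of ZFC's reach.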



    I hope this gives you some more insight into why $\Sigma(n)$ is computable for small values of $n$ but unknowable for larger values of $n$. Anyway, I am certainly not an expert in the theory of computation, so if someone would like to add an alternate explanation or clarifying comments to my answer, feel free. Thanks!






    • Can someone find a theory T that is "obviously consistent" and proves an explicit upper bound for $\Sigma(1919)$? If T and ZFC are both consistent, then T is stronger than ZFC, since it will prove that ZFC is consistent.
      – reuns, Dec 26 '18 at 23:03

    • @Bram28 Yes, if there is a contradiction in ZFC, this Turing machine will find it and halt. On the other hand, if the Turing machine halts, then it has found a contradiction in ZFC and ZFC is inconsistent. Therefore, this is a biconditional statement.
      – Noble Mushtak, Dec 26 '18 at 23:06

    • @Bram28 And continuing Noble's comment, this equivalence - appropriately phrased in the language of arithmetic - is provable in PA, and indeed much less.
      – Noah Schweber, Dec 26 '18 at 23:18

    • @NobleMushtak: NBG is known to be equiconsistent with ZFC, so looking for an inconsistency of either of them is basically the same task. (I don't know for sure, but I could imagine the optimizations necessary for cramming it into 1919 states involve much more drastic reformulations of the system anyway.) NF versus ZFC would be a different matter ...
      – Henning Makholm, Dec 27 '18 at 14:52

    • @Bram28: Currently 1919 is special. But, in 2016, this special number was 7918. (I know the paper I cite had led to work condensing the machine to fewer states, and that there was success. I do not know if the 1919 result follows from that chain of work or possibly a different one.) So, tomorrow, someone could show that the TM can be condensed to a 500-state machine, or to a 5-state machine? This is currently unknown. It may be unknowable how small a machine can do this computation -- what is the intrinsic entropy of a computation?
      – Eric Towers, Dec 27 '18 at 21:25





    edited Dec 26 '18 at 23:39
    answered Dec 26 '18 at 22:49
    Noble Mushtak

    Since, as you observe, any finite amount of the halting problem - that is, any set of the form $H\upharpoonright s:=\{x<s:\Phi_x(x)\downarrow\}$ - is computable, there isn't any particular sharp impossibility point. There are some interesting "phase transitions" which appear relevant - e.g. at a certain point we hit our first universal machine - but I don't know of any which have any claim to be the point where the halting problem becomes noncomputable.



    On the other hand, as you also observe, the way in which the $H\upharpoonright s$'s are computable is non-uniform (otherwise, the whole halting problem would be computable). So we can try to measure this "ongoing complexity." There are, to my mind, two natural approaches:




    • Given $n$, how far up the "hierarchy of theories" - from fragments of PA, to fragments of $Z_2$, to fragments of ZFC, to ZFC + large cardinals - do we have to go to get a theory which can decide whether each of the first $n$ Turing machines halts on input $0$?


    • Given $n$, how complicated is the finite string consisting of the first $n$ bits of the characteristic function of the halting problem (call this string "$\eta_n$")?



    Of these two approaches, the first has some draw which the second lacks, but it's also far more vague and limited. The second winds up leading to a very rich theory, namely the theory of Kolmogorov complexity (and its attendant notions, like algorithmic randomness), and also partially subsumes the former question. So I think that's my answer to your question: ultimately you won't find a sharp transition point, but the study of the dynamic behavior of the complexity of the halting problem is quite rewarding.
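    The non-uniformity in this answer can be made concrete: for each fixed $s$, some program decides $H\upharpoonright s$, namely a hard-coded lookup table; the catch is that no computable procedure produces the right table from $s$. A minimal sketch (the tables below are invented placeholders, not real halting data):

    ```python
    # For each fixed s, H|s is a finite set, so SOME program decides it:
    # a hard-coded table. The non-uniformity: no single computable function
    # maps s to the correct table. (Tables here are made-up placeholders.)

    def make_decider(table):
        """Given the finite set H|s as a frozenset, return a decider for it."""
        return lambda x: x in table

    # Pretend these finite sets were handed to us, one per s:
    H_upto_3 = frozenset({0, 2})      # placeholder, not real halting data
    H_upto_5 = frozenset({0, 2, 4})   # placeholder

    decides_H3 = make_decider(H_upto_3)
    decides_H5 = make_decider(H_upto_5)

    print(decides_H3(2), decides_H3(1))  # True False
    print(decides_H5(4))                 # True
    ```

    Each individual decider is trivial; the whole difficulty of the halting problem is hidden in the question of where the tables come from, which is exactly the "ongoing complexity" the two bullet points above try to measure.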






    • Thanks Noah! ... a little over my head ... though I like your "phase transitions" notion. From one of the other answers I gather that for some number of states we "transit" into the consistency of ZFC ... which is certainly 'special' to real-life practicing mathematics in some sense ... but maybe not 'special' in the sense I had in mind?
      – Bram28, Dec 26 '18 at 23:17

    • So, thinking about this a little more ... "transitioning" into consistency-of-ZFC space is pretty 'special' for sure ... and yet it doesn't feel 'special' enough to me: the proof of the unsolvability of the halting problem seems so deeply logical that I was looking for a likewise deeply logical barrier that we'd run into as we contemplate more and more complicated TMs ... The fact that ZFC plays such a prominent role in mathematics makes it 'special' ... but, I feel, not 'special' enough. So maybe it's back to the thought that there is no deeply logical 'special' point.
      – Bram28, Dec 27 '18 at 1:46

    • I think your first sentence is a bit misleading. Any set $H\restriction s$ is finite and hence computable, but that doesn't mean that we can actually compute it, because we don't know what program computes this particular finite set, since we don't know what set it is. There are indeed concrete finite values of $s$, like 1919, for which we definitely cannot prove that it is equal to anything in particular. This is an "impossibility point" by any reasonable measure. (If you are working in a different system than ZFC, the point may be elsewhere, but it will be there regardless.)
      – Mario Carneiro, Dec 28 '18 at 6:35

    • @MarioCarneiro I disagree - we can compute it, we just don't know what program will do the job. More substantively, each specific theory hits a wall somewhere, but I don't see any reason to argue that any one theory perfectly captures the "right" situation (re: "any reasonable measure"); so the right picture is still a "sliding scale."
      – Noah Schweber, Dec 28 '18 at 17:59


    answered Dec 26 '18 at 22:59
    Noah Schweber


    • $begingroup$
      Thanks Noah! ... a little over my head .. though I like your "phase transitions" notion .... from one of the other answers I gather that indeed for some number of states we "transit" into the consistency of ZFC ... which is certainly 'special' real-life-practicing-mathematics in some sense ... but maybe not 'special' in the sense I had in mind?
      $endgroup$
      – Bram28
      Dec 26 '18 at 23:17










    • $begingroup$
      So thinking about this a little more .... "transitioning" into the consistency-of-ZFC space is pretty 'special' for sure ... and yet it doesn't feel 'special' enough to me: The proof of the unsolvability of the Halting problem seems so deeply logical that I was looking for a likewise more deeply logical barrier that we'd run into as we contemplate more and more complicated TMs ... the fact that ZFC plays such a prominent role in mathematics makes it 'special' ... but I feel not 'special' enough. So, maybe it's back to the thought that there is no deeply logical 'special' point.
      $endgroup$
      – Bram28
      Dec 27 '18 at 1:46












    • $begingroup$
      I think your first sentence is a bit misleading. Any set $Hrestriction s$ is finite and hence computable, but that doesn't mean that we can actually compute it, because we don't know what program computes this particular finite set because we don't know what set it is. There are indeed concrete finite values of $s$ like 1919 for which we definitely cannot prove that it is equal to anything in particular. This is an "impossibility point" by any reasonable measure. (If you are working in a different system than ZFC the point may be elsewhere, but it will be there regardless.)
      $endgroup$
      – Mario Carneiro
      Dec 28 '18 at 6:35










    • $begingroup$
      @MarioCarneiro I disagree - we can compute it, we just don't know what program will do the job. More substantively, each specific theory hits a wall somewhere, but I don't see any reason to argue that any one theory perfectly captures the "right" situation (re: "any reasonable measure"); so the right picture is still a "sliding scale."
      $endgroup$
      – Noah Schweber
      Dec 28 '18 at 17:59


















    6












    $begingroup$

    For example, you can construct a Turing machine (I don't know how many states you need, but it is a finite number) that searches for a counterexample to Goldbach's conjecture, i.e. an even number $> 2$ that is not the sum of two primes. Going through the even numbers one by one, for each even number $n > 2$ it checks each $k$ from $2$ to $n/2$; if $k$ is prime and $n-k$ is prime it moves on to the next $n$, but if it gets through all the $k$ without success it halts. Thus this Turing machine halts if and only if Goldbach's conjecture is false, so in order to decide whether it halts, your analysis would need to decide Goldbach's conjecture.



    And when you're done with that one, there are lots of other conjectures you could check with a Turing machine.
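    As an illustration, the search can be written in Python rather than as a raw state table. The function names and the `limit` cutoff are mine, added so the demonstration terminates; the Turing machine in the answer is the version of this loop with no `limit` at all, which halts exactly when the `return n` line is reached:

    ```python
    def is_prime(n):
        # Trial division; fine for the small numbers used here.
        if n < 2:
            return False
        k = 2
        while k * k <= n:
            if n % k == 0:
                return False
            k += 1
        return True

    def goldbach_counterexample(limit):
        """Search even numbers 4..limit for one that is not a sum of two primes.

        Returns the first counterexample found (the machine halts), or None if
        none exists below the cutoff (the unbounded machine keeps running).
        """
        for n in range(4, limit + 1, 2):
            if not any(is_prime(k) and is_prime(n - k) for k in range(2, n // 2 + 1)):
                return n  # counterexample found -> halt
        return None

    print(goldbach_counterexample(10_000))  # -> None
    ```

    Goldbach's conjecture has been verified numerically far beyond this range, so the bounded search comes back empty; the point is that deciding whether the unbounded version halts is exactly deciding the conjecture.
    
    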















    $endgroup$













    • $begingroup$
      Thanks! I am guessing we can program something like this with a Turing-machines with close to 100 states ... so that gives us some measure (or at least a kind of minimum) of 'how difficult' it is to figure out the halting behavior of machines with that number of states. One of the other answers does something similar with the consistency of ZFC, which has been shown to take 1919 states at the most. Interesting!
      $endgroup$
      – Bram28
      Dec 26 '18 at 23:13










    • $begingroup$
      Goldbach's conjecture is usually an example of what the halting problem is not. We don't know a proof for a conjecture but it may eventually be found (or we might find a proof that it isn't true), it is not an intrinsically unsolvable problem (we have a diagonalization proof that you can't build a machine that can solve the halting problem)
      $endgroup$
      – pqnet
      Dec 29 '18 at 7:25


























    answered Dec 26 '18 at 22:31









    Robert Israel























    4












    $begingroup$

    A potential Busy Beaver has three possibilities:




    1. It is easy to show it stops

    2. It is easy to show it never stops

    3. It is difficult to show either case


    Number 1 either stops quickly, or it has a repeating pattern with an eventual flaw that causes it to stop.



    Number 2 has a repeating pattern and never has a flaw, causing it to continue forever.



    Number 3 is the interesting case. Its behaviour is chaotic. It might have short-term patterns, but it has no long-term patterns. Its future states can be predicted a short way ahead without actually running the machine, but there comes a point where its behaviour can no longer be predicted without running it. At that point you would need a machine capable of emulating a Turing machine, only faster. However, you will reach the same problem with this hypothetical new machine too, as long as it has finite power, which all real-world machines have.



    If you improve your analysis of Busy Beavers, you can decide whether certain candidates are actually case 1 or case 2. We can think of it as a spectrum of behaviour with stopping at 0, running forever at 2, and undecidability at 1. To start with we know that 0 to 0.5 will stop (case 1) and 1.5 to 2 will run forever (case 2), with 0.5 to 1.5 being undecidable without running it (case 3). We improve our understanding of Busy Beavers. Now we know that 0 to 0.95 will stop and 1.05 to 2 will run forever, with 0.95 to 1.05 being undecidable.



    At some point, there's no way to predict without running the machine whether it will halt or not. The only way to determine its behaviour is to run it, and if it stops, it usually gives you no insight you can use for other potential Busy Beavers. If it doesn't stop after $10^6$ cycles, you can try $10^7$, then $10^8$, and so on. At some point you give up without knowing whether it will stop if given enough time.



    The problem is similar to drawing a Mandelbrot set. Certain points "escape" to infinity quickly, others settle into a repeating pattern quickly. The points on the boundary of the Mandelbrot set are difficult to predict either way without calculating them. There are methods to make the decision easier, but there will always be chaotic behaviour between the two easily detectable outcomes that can't be predicted.



    Hopefully I've answered "why". The "when" is then determined by your understanding of the specific problem you're trying to solve and the power of the machine you're using. Over time we can eat into the chaos and make more of it decidable, but it grows much faster than we can ever solve it.
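    To make the "run it with a growing step budget" strategy concrete, here is a minimal Python sketch of bounded simulation. The transition-table encoding and both example machines are mine, chosen for illustration; the key point is that `None` only means the budget ran out, never that the machine runs forever:

    ```python
    def run_tm(delta, steps, start='A'):
        """Simulate a one-tape, two-symbol Turing machine for at most `steps` steps.

        delta maps (state, symbol) -> (write, move, next_state); a missing entry
        means the machine halts.  Returns the step count at which it halted, or
        None if the budget was exhausted -- which tells us nothing either way.
        """
        tape, pos, state = {}, 0, start  # sparse tape, blanks read as 0
        for t in range(steps):
            key = (state, tape.get(pos, 0))
            if key not in delta:
                return t  # halted within the budget
            write, move, state = delta[key]
            tape[pos] = write
            pos += move
        return None  # undecided: try 10**6 steps, then 10**7, then 10**8, ...

    # A machine that halts: flip a 0 to 1, step right, then stop in state B.
    halts = {('A', 0): (1, +1, 'B')}
    print(run_tm(halts, 100))   # -> 1

    # A machine that bounces between A and B forever (every key is defined).
    loops = {('A', 0): (0, +1, 'B'), ('A', 1): (0, +1, 'B'),
             ('B', 0): (0, -1, 'A'), ('B', 1): (0, -1, 'A')}
    print(run_tm(loops, 100))   # -> None
    ```

    No matter how large a budget you choose, a `None` result leaves you in case 3: you have learned only that the machine did not halt *yet*.
    
    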















    $endgroup$













    • $begingroup$
      So basically a universal computer is a kind of "useful discrete chaos", then? Interesting, but seems to make sense on a certain level.
      $endgroup$
      – The_Sympathizer
      Dec 27 '18 at 10:34


























    answered Dec 27 '18 at 3:45









    CJ Dennis























    0












    $begingroup$

    I would like to offer an alternative way of thinking about the halting problem, which helped me understand better why the halting problem is not computable, or more generally, why there are functions which are not computable.



    In his famous paper on computability, Turing mentions that he is going to prove that there are real numbers which are not computable.
    Computable numbers are defined as those whose decimals are calculable by finite means; in other words, decimals that can be computed
    by a machine.



    He also mentions that it is equally easy to define and investigate computable functions instead of computable numbers, and that is what I would like to show.
    I will summarize the lecture at the link I posted (https://www.youtube.com/watch?v=9JeIG_CsgvI&index=14&list=FLIknGRIW8gX2yutAMeHVEKw) because I think it is worth it:
    it is the first part of a lecture proving Gödel's first incompleteness theorem. Credits go to "UC Davis Academics", of course.



    Let's define a function $f$ from the non-negative integers to the set $\{0,1\}$.
    We let $Q$ be the set of all such functions. It is clear that $Q$ is infinite (indeed, the argument below essentially shows it is uncountably infinite).



    A function $f$ in $Q$ is defined as computable if there is a computer program $P$ (loosely speaking, a Turing machine)
    that can take any non-negative integer $x$ and output $f(x)$.
    We add the constraints that $P$ must always terminate in finite time and that $P$ must be correct; in other words,
    it must output the correct value of $f$ for all non-negative integers.



    We let $A$ be all the functions in $Q$ which are computable.
    We can show that there exists a function in $Q$ that is not in $A$, i.e. there exist uncomputable functions.



    We define a program as a finite series of statements over some finite alphabet $\alpha$; in other words, it can be thought of as a single finite string. Suppose that the
    language $L$ we are using to express our program is Turing complete, i.e. it can be used to simulate any Turing machine.



    We can start enumerating the strings expressible in $\alpha$ in order of length. Strings of the same length are taken in an
    alphabetical order that can be defined arbitrarily on $\alpha$.



    You could indeed write a program $T$ to enumerate all those strings expressible in $\alpha$, so for any string $s$ expressible in $\alpha$,
    $T$ will generate $s$ in finite time.
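    The enumeration $T$ is easy to realize concretely. Here is a minimal Python sketch (the generator name `shortlex` is mine) of enumerating all strings over an alphabet in order of length, alphabetically within each length:

    ```python
    from itertools import count, islice, product

    def shortlex(alphabet):
        """Yield every finite string over `alphabet`, shortest first,
        alphabetically within each length -- the enumeration T above."""
        for n in count(0):                      # lengths 0, 1, 2, ...
            for chars in product(alphabet, repeat=n):
                yield ''.join(chars)

    # Every string over the alphabet appears after finitely many steps:
    print(list(islice(shortlex('ab'), 7)))  # -> ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
    ```

    Filtering this stream down to syntactically valid programs of $L$ gives the list $Z$ of the argument; the diagonal function $g$ built from it, of course, cannot itself be produced by any such program.
    
    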



    Therefore you have a list $Z$ of the strings in $\alpha$, ordered by length.
    Some of those strings in $Z$ will be legitimate programs in our chosen programming language $L$. Indeed, all possible programs will be in $Z$; in particular,
    the programs that compute functions in $A$ must all be there (by definition of computable function), and they inherit an order from $Z$.



    Let's call $H$ this ordered list of functions in $A$, $\{f_1, f_2, \ldots\}$. Now apply the diagonalization method by defining
    $$g(x) = 1 - f_x(x).$$ It is easy to see that $g$ is in $Q$; however, $g$ differs from each $f_i$ at the input $i$, so $g$ is not in $A$ and hence not computable, and we are done.






    share|cite|improve this answer











    $endgroup$


















      0












      $begingroup$

I would like to offer an alternative way of thinking about the halting problem, which helped me understand better why the halting problem is not computable, or rather, why in general there are functions that are not computable.



In his famous paper on computability, Turing mentions that he is going to prove that there are real numbers which are not computable. Computable numbers are defined as those whose decimals are calculable by finite means, or in other words, decimals that can be computed by a machine.



He also mentions that it is equally easy to define and investigate computable functions instead of computable numbers, and that is what I would like to show. I will summarize the relevant part of the lecture I have already linked (https://www.youtube.com/watch?v=9JeIG_CsgvI&index=14&list=FLIknGRIW8gX2yutAMeHVEKw) because I think it is worth it: it is the first part of a lecture which proves Gödel's first incompleteness theorem. Credits go to "UC Davis Academics", of course.



Let's consider functions $f$ from the non-negative integers to the set $\{0,1\}$. We let $Q$ be the set of all such functions. It is clear that $Q$ is infinite (in fact, essentially by the same diagonal argument used below, $Q$ is uncountably infinite).



A function $f$ in $Q$ is defined as computable if there is a computer program $P$ (loosely speaking, a Turing machine) that can take any non-negative integer $x$ and output $f(x)$. We add the constraints that $P$ must always terminate in finite time and that $P$ must be correct, in other words, output the correct value of $f$ for every non-negative integer.
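To make the definition concrete, here is a minimal example of my own (not from the lecture): the parity function is a member of $Q$ that is clearly computable, since the short program below always terminates and is correct for every input.

```python
def parity(x: int) -> int:
    # A computable function in Q: maps each non-negative integer to 0 or 1.
    # The program always halts and returns the correct value for every
    # input, so parity is in A.
    return x % 2
```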



We let $A$ be the set of all functions in $Q$ which are computable. We can show that there exists a function in $Q$ that is not in $A$, i.e. that uncomputable functions exist.



We define a program as a finite sequence of statements over some finite alphabet $\alpha$; in other words, it can be thought of as a single finite string. Suppose that the language $L$ we are using to express our programs is Turing complete, i.e. it can be used to simulate any Turing machine.



We can enumerate the strings over $\alpha$ in order of length. Strings of the same length are taken in an alphabetical order on $\alpha$ that can be fixed arbitrarily.



You could indeed write a program $T$ to enumerate all the strings over $\alpha$, so that for any string $s$ over $\alpha$, $T$ will generate $s$ in finite time.



Therefore you have a list $Z$ of the strings over $\alpha$, ordered by length. Some of the strings in $Z$ will be legitimate programs in our chosen programming language $L$. Indeed, all possible programs will be in $Z$; in particular, the programs that compute the functions in $A$ must all be there (by the definition of a computable function), and they inherit an order from $Z$.



Let's call $H$ this ordered list of functions in $A$, $\{f_{1}, f_{2}, \ldots\}$. Now apply the diagonalization method by defining $$g(x) = 1 - f_{x}(x).$$ It is easy to see that $g$ is in $Q$. However, $g$ is not computable: if it were, it would appear in the list as some $f_{j}$, yet $g(j) = 1 - f_{j}(j) \neq f_{j}(j)$, since $f_{j}(j)$ is either $0$ or $1$. So $g$ is in $Q$ but not in $A$, and we are done.
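The diagonal step can be sketched in code. Note that `fs` below is a hand-picked finite list standing in for the enumeration $H$; the whole point of the argument is that no program can actually produce $H$, so this is only an illustration of why $g$ disagrees with every listed function.

```python
def diagonal_g(x, fs):
    # g(x) = 1 - f_x(x): flip the value the x-th listed function
    # takes on input x, so g differs from fs[x] at input x.
    return 1 - fs[x](x)

# Toy stand-in for a (prefix of the) list H of 0/1-valued functions:
fs = [lambda n: 0, lambda n: n % 2, lambda n: 1]

# g disagrees with the x-th function at input x, for every x we can check,
# so g cannot equal any function in the list:
assert all(diagonal_g(x, fs) != fs[x](x) for x in range(len(fs)))
```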
        edited Feb 13 at 8:15

























        answered Feb 10 at 22:14









Marco Bellocchi