“The other programmers keep accidentally writing code that ends up in an infinite loop. I'd like you to make a program that can reliably detect that.”
You may joke, but if I had a penny for every time someone asked me to solve a problem that basically boils down to the halting problem, I'd be rich.
Yeah, accidentally running into the halting problem is common in automatic code analysis.
It'd be nice if we wrote something to detect it running into the halting problem.
I have always wondered why the answer to the halting problem isn't: "If no output has been returned in X time, BREAK, restart program from beginning."
what if it needed just one more second to complete?
Damn Vogons.
Because that will fail to detect a program that halts in X+1 time. The problem isn't to check whether one particular program halts within some time limit; it's to create a general algorithm that can decide, for any program, whether it will ever halt, given unlimited time and memory.
Yes, that's the theoretical rationale, but in practical use cases, if the user is going to restart the program after X minutes anyway, you can just hard reboot.
But you could also do a mean time analysis on specific tasks and have it cut off at a standard deviation or two (90-98% of task times covered), and have a checkbox or something for when the user expects longer times.
You could probably even make this adaptive, with a cutoff at 2x the standard time, and updating the median estimate after each run.
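That adaptive cutoff could be sketched like this (the class name, the two-sigma rule, and the default timeout are all illustrative assumptions, not anyone's actual implementation):

```python
import statistics

class AdaptiveCutoff:
    """Track observed task durations and suggest a timeout of
    mean + 2 standard deviations (covering roughly 95% of normal runs)."""

    def __init__(self, initial_timeout=10.0):
        self.durations = []
        self.initial_timeout = initial_timeout  # used until we have data

    def record(self, seconds):
        """Update the estimate after each completed run."""
        self.durations.append(seconds)

    def timeout(self):
        """Suggested cutoff for the next run."""
        if len(self.durations) < 2:
            return self.initial_timeout
        mean = statistics.mean(self.durations)
        return mean + 2 * statistics.stdev(self.durations)
```

The "I expect this to take longer" checkbox would then just multiply or bypass the returned timeout.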
I was recently tasked with the traveling salesman problem on a project. My first pass was quick but produced sloppy, inefficient results. The boss didn't like it, so he had me go back at it to make it far more accurate. Now it slogs through finding an optimal solution for several thousand points.
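The "quick but sloppy" first pass was presumably some greedy heuristic. A hypothetical sketch of nearest-neighbour, which runs in O(n²) but typically returns tours noticeably longer than optimal:

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: from the current city, always visit the
    closest unvisited one. Fast, but usually suboptimal."""
    unvisited = list(range(1, len(points)))
    tour = [0]  # arbitrarily start at the first point
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Finding a provably optimal tour, by contrast, is NP-hard, which is exactly why the accurate version slogs on thousands of points.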
Just check the git blame.
A full solution to the halting problem can't exist. But you can definitely write a program that will "reliably" detect them to a certain percentage.
And many applications do exactly that. Firefox asked me today if I wanted to stop a tab because it was processing for too long.
That’s not even close to solving the halting problem. FF doesn’t check if the program has been in its current state before. It literally just checks if 10 seconds have passed without JS emptying its event loop.
Right. There is no solution to the halting problem, that's been proven. But you just showed you can very easily create a way of practically solving it. Just waiting for 10 seconds does it. That will catch every infinite loop while also having some false positives. And that will be fine in most applications.
My point is that even if a solution to the halting problem is impossible, there is often a very possible solution that will get you close enough for a real world scenario. And there are definitely more sophisticated methods of catching non-halting programs with fewer false positives.
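That practical approach fits in a few lines. This is a toy sketch of the idea, not Firefox's actual mechanism; the step-function protocol is an assumption for illustration:

```python
import time

def run_with_cutoff(program_step, cutoff_seconds):
    """Drive an iterative computation one step at a time, giving up
    after cutoff_seconds of wall-clock time. This never *proves*
    non-termination: a slow-but-finite program is a false positive.

    program_step() returns None while still working, or a final value
    once done.
    """
    deadline = time.monotonic() + cutoff_seconds
    while time.monotonic() < deadline:
        result = program_step()
        if result is not None:
            return result  # the program halted in time
    return None  # gave up: maybe an infinite loop, maybe just slow
```

The 10-second event-loop check is the same idea with a fixed cutoff and "prompt the user" instead of returning None.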
Have you never written a useful program that took more than 10 seconds to complete?
I fully agree with your sentiment. But just in case: If you’re blocking the main thread of a browser for seconds at a time, you should look into that.
I'm not talking about web applications.
My loop isn’t infinite, just longer than the heat death of the universe.
For JavaScript apps, stopping them when they consume too many resources is definitely a good idea. But if you work on a project where it's common to run computationally intensive tasks, it can be harder to detect non-halting.
Just because it's not possible on a Turing machine doesn't mean it's impossible on a PC with finite memory. You just have to track all the memory that is available to the algorithm, and once you detect a state you've seen already, you know it will never halt. The detection algorithm will need an insane amount of memory, though.
Edit: think about the amount of memory this would need. It's crazy, but theoretically possible. In real-world use cases it only works if the algorithm you're watching has access to a tiny amount of memory.
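For a deterministic machine whose entire state fits in memory, that check really is just a set lookup in a loop. A toy sketch, assuming the state is a small hashable value; real programs have state spaces astronomically too large for this:

```python
def halts(step, initial_state, is_final):
    """Decide halting for a deterministic machine with finitely many states.

    step: state -> next state; is_final: state -> bool.
    Since the state space is finite, the run either reaches a final state
    or revisits one -- and a deterministic machine that revisits a state
    is stuck in a cycle forever. Memory cost: every state visited.
    """
    seen = set()
    state = initial_state
    while not is_final(state):
        if state in seen:
            return False  # repeated state: it loops forever
        seen.add(state)
        state = step(state)
    return True
```

For a machine with N bits of accessible memory there are 2^N possible states, which is why this only scales to tiny state spaces.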
Easy.
if (loop == inf) {
    end;
}
Pay check please?
Here you go:
Woah! This is worthless!
There have been genuine efforts to do that. Obviously (well, for a very niche use of "obviously") it's not always possible, but detecting infinite loops isn't like the uncertainty principle.
It's called The Terminator.