Another weird one related to Gödel’s theorems is Löb’s theorem: given a formal system F that can reason about its own provability (Peano arithmetic, say, or any effectively axiomatized extension of it) and a sentence s, if F proves that “if s is provable in F, then s,” then F also proves s. That is:
If F ⊢ Prov_F(“s”) → s, then F ⊢ s.
Which is strange because you might think that proving “if s is provable, then s” would be possible regardless of whether s is actually provable. But Löb’s theorem shows that such self-referential statements can only be proven when s itself is already provable.
> If F ⊢ Prov_F(“s”) → s, then F ⊢ s.
This is Löb’s theorem and it's an incredible piece of logical machinery[1]. If you truly understand it, it's pretty mind-blowing that it's actually a logically sound statement. It's one of my favorite ways to prove Gödel's second incompleteness theorem.
[1] https://categorified.net/FreshmanSeminar2014/Lobs-Theorem.pd...
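For the curious, here's how it yields the second incompleteness theorem, sketched in LaTeX-style notation (my own summary of the standard argument; ⊥ is any refutable sentence and Con(F) abbreviates ¬Prov_F(“⊥”)):

    % Löb: if F proves Prov_F("s") -> s, then F proves s.
    \text{L\"ob: } F \vdash \mathrm{Prov}_F(\ulcorner s \urcorner) \to s
        \;\Longrightarrow\; F \vdash s
    % Instantiate s := \bot. The antecedent Prov_F("⊥") -> ⊥ is exactly Con(F):
    \text{Take } s = \bot: \quad F \vdash \mathrm{Con}(F)
        \;\Longrightarrow\; F \vdash \bot
    % Contrapositive: a consistent F never proves its own consistency (Gödel II).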
"If this sentence is true, then Germany borders China."
not yet
The final part of Baez's article has an interesting passage:
> So, conceivably, the concept of 'standard' natural number, and the concept of 'standard' model of Peano arithmetic, are more subjective than most mathematicians think. Perhaps some of my 'standard' natural numbers are nonstandard for you! I think most mathematicians would reject this possibility... but not all.
It's probably worth elaborating on why the majority of logicians (and likely most mathematicians) believe that the standard natural numbers are not subjective (although my own opinion is more mixed).
Basically, the crux is: do you believe that statements using all/never quantifiers, such as "this machine will never halt" or "this machine will always halt", have objective truth values?
If you do, then you are implicitly subscribing to a view that the standard natural numbers objectively exist and do not depend on subjective preferences.
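To make that concrete, here is a toy sketch (my own illustration, not from the article): a program that halts exactly when Goldbach's conjecture fails. "This machine will never halt" then has an objective truth value precisely if Goldbach's conjecture does. (Don't expect the loop to finish.)

    # Halts iff some even number >= 4 is NOT a sum of two primes,
    # i.e. iff Goldbach's conjecture is false.
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))

    def is_goldbach_sum(n):
        # Is the even number n a sum of two primes?
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    n = 4
    while is_goldbach_sum(n):
        n += 2
    print("counterexample:", n)  # unreached, as far as anyone knows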
Natural numbers are "natural" natural numbers. That is, I would argue that the Peano definition is the simplest, most straightforward, and most obvious (not that I would have come up with Peano's axioms) and therefore the "natural" way to define something like whole numbers. And interestingly it is the prototypical way to define a (countably) infinite set, which is the actual cause of the trouble. If you limit everything to finite sets of stuff, all the Gödel evilness goes away. But then, math would be boring. If you admit non-finite sets, you will have countably infinite sets, and those will always be isomorphic to the natural numbers. I'd bet good money that if there were an alien civilization, there would be an alien named Peano.
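For reference, the definition being praised (the Peano axioms, sketched in standard first-order notation):

    0 \in \mathbb{N}                                  % zero is a number
    n \in \mathbb{N} \Rightarrow S(n) \in \mathbb{N}  % closed under successor
    \forall n\; S(n) \neq 0                           % zero is no successor
    S(m) = S(n) \Rightarrow m = n                     % successor is injective
    \varphi(0) \land \forall n\,(\varphi(n) \to \varphi(S(n)))
        \Rightarrow \forall n\,\varphi(n)             % induction schema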
Isn't this a Wittgensteinian problem? I.e., how you interpret language is inherently subjective. But regardless of what language you use, or how you intend to bind the terms to reality semantically, a priori truth is still a priori truth. It's the only basis of objective "truth" we have access to. Throwing that out the window feels like tossing the baby out with the bathwater.
Imo, this just comes down to the fact that most people would consider "standard" to be a floating signifier. I don't think the idea that mathematical concepts change definition when you change axioms is at all controversial in itself.
One of the huge sources of map/territory issues is that we use systems that let us avoid the problem discussed here.
Definitely worth spending time on.
I would have thought that the proof shows this problem is unavoidable. How in your view do we avoid this problem?
You don't avoid it, you realize that there are fundamental limits and try to find ways to still get work done.
While you can't have the territory, road, plat, and topographical maps may each be incomplete, but all have their uses.
I'm currently a machine learning grad student taking a meta-complexity class and came across this blog post. I found the whole thing very interesting. In particular the idea that some things are uncomputable seems fundamentally unaddressed in ML.
We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data. Clearly there's more to the story here.
> We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data.
I don't think the assumption is that strong. The assumption is rather that human learning is computable and therefore a machine equivalent of it should be too.
While I am too old to say what is taught today, this is exactly the map vs. territory issue I mentioned in my other comment.
It is all there in what you would have been taught, but hidden, because we tend to avoid the hay-in-the-haystack problems and focus on the needles, since we don't have access to the hay.
As an example that can cause huge arguments: if you used Rudin for analysis, go look for an equality binary operator in that book. As every construction of the reals is a measure-zero set, it is actually impossible to prove the equality of two arbitrary real numbers in general. ZFC uses constructibility, Spivak uses Cauchy sequences, etc.
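As a concrete sketch of the equality problem (my illustration, not Rudin's): represent a real number as a stream of digits. Any equality test can only ever inspect finitely many of them, so it can confirm agreement up to a point but never prove two arbitrary reals equal.

    # Reals as (infinite) digit streams; equality is only semi-decidable.
    def x_digits():
        while True:
            yield 9  # 0.999...

    def y_digits():
        while True:
            yield 9  # also 0.999..., from a different source

    def equal_up_to(a, b, k):
        # Agreement of the first k digits is all we can ever check.
        return all(next(a) == next(b) for _ in range(k))

    print(equal_up_to(x_digits(), y_digits(), 1000))  # True, but proves nothing
    # Digit-wise equality isn't even the right criterion:
    # 0.999... and 1.000... differ in every digit yet denote the same real.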
If you look at the paper[1] about the equivalence of PAC learnability and finite VC dimension, it is all there: framing learning as a decision problem, set shattering, etc.
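For a toy version of the shattering idea in that paper (my sketch, using a hypothetical threshold hypothesis class): thresholds on the line shatter any one point but no two points, so their VC dimension is 1.

    # Does some hypothesis realize every possible labeling of the points?
    def shatters(points, hypotheses):
        realized = {tuple(h(x) for x in points) for h in hypotheses}
        return len(realized) == 2 ** len(points)

    # Threshold classifiers h_t(x) = [x >= t] on the real line.
    thresholds = [lambda x, t=t: x >= t for t in (-1.0, 0.5, 1.5, 3.0)]

    print(shatters([1.0], thresholds))       # True: one point is shattered
    print(shatters([1.0, 2.0], thresholds))  # False: labeling (1, 0) is impossible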
Richardson's theorem, especially with more recent papers, is another lens.
With the idea that all models are wrong, but some are useful, it does seem that curricula tend to leave these concepts to postgraduate classes, often hiding or ignoring them; in descriptive complexity theory they should have been front and center, IMHO.
Assuming something is computable or learnable is important for finding pathological cases that are useful; just don't confuse the map with the territory.
We have enough examples to know the neuronal model is wrong, but the proposed models we have found aren't useful yet, and what this post presents suggests that may always hold.
Physics and other fields make similar assumptions, like the assumption that Laplacian determinism is true, despite counterexamples.
Gödel, Rice, Turing, etc. may be proven wrong some day, but right now Halt ~= open frame ~= symbol grounding ~= system identification in the general case is the safe bet.
But that doesn't help get work done, or possibly find new math that changes that.
[1] https://dl.acm.org/doi/10.1145/76359.76371
the decimal point is a lie, big whoop
p-adic numbers will save us. the alternative completion of the rationals we did not know we needed
The only problem is that making sense of p-adics in terms of counting seems rather difficult.
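For a taste of how differently they behave (a quick sketch in the 10-adics rather than a prime base): the all-nines expansion ...999 acts as −1, since adding 1 carries forever.

    # In the 10-adics, ...999 + 1 = 0 at every finite precision,
    # so ...999 plays the role of -1.
    for k in (1, 4, 8):
        all_nines = 10**k - 1           # ...999 truncated to k digits
        print((all_nines + 1) % 10**k)  # prints 0 for every k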
I don't think you need anything fancy to tackle the "surprise examination" or "unexpected hanging" paradox. This is my take on it, at least:
> The teacher says one day he'll give a quiz and it will be a surprise. So the kids think "well, it can't be on the last day then—we'd know it was coming." And then they think "well, so it can't be on the day before the last day, either!—we'd know it was coming." And so on... and they convince themselves it can't happen at all.
> But then the teacher gives it the very next day, and they're completely surprised.
The students convince themselves that it can't happen at all... and that's well and good, but once they admit that as an option, they have to include that in their argument—and if they do so, their entire argument falls apart immediately.
Consider the first time through: "It can't be on the last day, because we'd know it was coming, and so couldn't be a surprise." Fine.
Now compare the second time through: "If we get to the last day, then either it will be on that day, or it won't happen at all. We don't know which, so if it did happen on that day, it would count as a surprise." Now you can't exclude any day; the whole structure of the argument falls apart.
Basically, they start with a bunch of premises, arrive at a contradiction, and conclude some new possibility. But if you stop there, you just end up with a contradiction and can't conclude anything.
So you need to restart your argument, with your new possibility as one of the premises. And now you don't get to a contradiction at all.
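Here is that two-pass argument mechanized as a toy elimination procedure (my own sketch; "no surprise" is modeled as "only one world still live that morning"):

    # Pass 1: worlds are {test on day d}. Repeatedly drop any day that
    # would be certain (hence unsurprising) once you reach it.
    # Pass 2: add a "no test" world; no day is ever certain, so
    # nothing gets dropped.
    def eliminate(worlds, days):
        worlds = set(worlds)
        changed = True
        while changed:
            changed = False
            for d in sorted(days, reverse=True):
                if d not in worlds:
                    continue
                live = {w for w in worlds if w == "no test" or w >= d}
                if live == {d}:  # test would be certain on day d
                    worlds.remove(d)
                    changed = True
        return worlds

    days = [1, 2, 3, 4, 5]
    print(eliminate(days, days))                # set(): every day excluded
    print(eliminate(days + ["no test"], days))  # nothing can be excluded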
I can't help but think the "surprise examination paradox" rests too much on English equivocation for it to be a properly logical paradox. In particular, the fact that "surprise" changes over time, and the fact that if I've logically deduced that it is "impossible" for the test to occur on the last day, then it is ipso facto a surprise if it happens then.
Sit down and make the argument really rigorous as to the definition of "surprise" and the fuzz disappears. You can get several different results from doing so, and that's really another way of saying the original problem is inadequately specified and not really a logical conundrum. As "logical conundrums" go, equivocation is endlessly fascinating to humans, it seems, but any conundrum that can be solved merely by being more careful, up to a normal level of mathematical rigor, isn't logically interesting.
You did not understand the paradox.
The word "surprise" here means that the prisoner won't know his date of execution until he is told.
[Edited]
I did understand the paradox.
Those are the instructions he was given: he won’t know the date of his execution until he is told. He performs some reasoning, and concludes that he can’t get executed any day that week: therefore he will go free.
But if “he will go free” is a possibility, then his chain of reasoning falls apart. Previously he had argued “if I survive to the last day, I will be executed today. That won’t be a surprise. Therefore I can’t be executed on the last day.”
But once he has “…or I won’t get executed at all” as an option, then his reasoning would begin “if I survive to the last day, then either I’ll get executed today, or I won’t get executed at all” … and that’s as far as he can go. He can’t use that to conclude he won’t get executed on the last day, and he can’t then use that to conclude he won’t be executed on the second last day, and so on. The entire argument breaks apart immediately.
I agree. The premise itself is spurious. I've never liked this paradox because I don't think it makes sense from the get go.
It is like the infamous 0.999999... = 1. That one uses sloppy notation (what is "..."?) to make students think and talk about math.
It's not sloppy notation. It's an unambiguous infinite series, ∑_{n=1}^∞ 9/10^n, which converges to 1.
It's the same reason that 0.333... = 1/3. It's an infinite series that converges to 1/3.
Students learn repeating decimals before they understand infinite series.
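Spelled out with the geometric series formula, the sum is exactly 1:

    \sum_{n=1}^{\infty} \frac{9}{10^n}
      = \frac{9}{10} \cdot \frac{1}{1 - \tfrac{1}{10}}
      = \frac{9}{10} \cdot \frac{10}{9}
      = 1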
I'm not sure the "..." is sloppy notation—it can be made rigorous pretty easily. The surprise runs against students' expectation that if two decimal expressions are distinct, the real numbers they correspond to must also be distinct. (Even there, students have already gotten used to trailing zeros being irrelevant.)
Yeah; students often conflate numerals with numbers and assume that if two numerals are distinct then the numbers they represent must also be distinct. It's not trivially true! You need to prove that distinct numerals yield distinct numbers, and as 0.999... vs 1.000... demonstrates, it is not always true for various kinds of numerals and numbers!
I think teasing apart numerals and numbers is a good first step on one's journey in mathematical logic.
Thanks for the reminder of the trailing zeroes.
I would encourage anyone who's intrigued by this paradox to read Timothy Chow's comprehensive paper on the subject (https://arxiv.org/abs/math/9903160).
In particular, he discusses what he calls the meta-paradox:
> The meta-paradox consists of two seemingly incompatible facts. The first is that the surprise exam paradox seems easy to resolve. Those seeing it for the first time typically have the instinctive reaction that the flaw in the students’ reasoning is obvious. Furthermore, most readers who have tried to think it through have had little difficulty resolving it to their own satisfaction.
> The second (astonishing) fact is that to date nearly a hundred papers on the paradox have been published, and still no consensus on its correct resolution has been reached. The paradox has even been called a “significant problem” for philosophy [30, chapter 7, section VII]. How can this be? Can such a ridiculous argument really be a major unsolved mystery? If not, why does paper after paper begin by brusquely dismissing all previous work and claiming that it alone presents the long-awaited simple solution that lays the paradox to rest once and for all?
> Some other paradoxes suffer from a similar meta-paradox, but the problem is especially acute in the case of the surprise examination paradox. For most other trivial-sounding paradoxes there is broad consensus on the proper resolution, whereas for the surprise exam paradox there is not even agreement on its proper formulation. Since one’s view of the meta-paradox influences the way one views the paradox itself, I must try to clear up the former before discussing the latter.
> In my view, most of the confusion has been caused by authors who have plunged into the process of “resolving” the paradox without first having a clear idea of what it means to “resolve” a paradox. The goal is poorly understood, so controversy over whether the goal has been attained is inevitable. Let me now suggest a way of thinking about the process of “resolving a paradox” that I believe dispels the meta-paradox.
That sounds interesting—thanks for sharing, I'll check it out.
But it's stipulated that the test will happen on one of the days - it's not a possibility that it won't happen at all, hence the paradox.
One resolution is that what the teacher stipulates is impossible. It should really be
"You'll have a test within the next x days but won't know which day it'll be on (unless it's the last day)"
That’s not a paradox, though, that’s just impossible instructions. There’s nothing paradoxical about impossible instructions. The teacher should have stipulated “You’ll have a test within the next X days but won’t know which day it’ll be on, or else it won’t happen at all.” The students realize that no test is a possibility, but they wrongly conclude that it’s what will happen, instead of realizing that that possibility merely makes the teacher’s instructions valid.
Well, yeah, a resolution to something that's called a paradox means it's no longer a paradox. It seems we agree that the original instruction as stated,
"You’ll have a test within the next X days but won’t know which day it’ll be on"
is impossible.
> So you need to restart your argument, with your new possibility as one of the premises. And now you don't get to a contradiction at all.
It’s amusing that you stopped here without giving an actual solution. Please do tell us, which day is the test on?
It could be on any day—even the last—and would be a surprise.
Really? Kids show up on the last day and know with certainty that the test is coming that day.
Not if they consider “we won’t have the test at all” as a possibility.
> The students convince themselves that it can't happen at all... and that's well and good, but once they admit that as an option, they have to include that in their argument—and if they do so, their entire argument falls apart immediately.
Your critical thinking is bad. The first paradox happens when the prisoner concludes that the judge lied, using a rational deduction. A second paradox happens when it transpires the judge told the truth.
The prisoner concludes that the judge’s instructions were impossible, which is true. Their conclusion was that there’s an additional possibility: that they don’t get hanged at all. Which is also true. Their mistake is believing that this new possibility will come to pass, instead of realizing that the new possibility means that the judge’s instructions aren’t impossible after all.
So in the end, the judge was telling the truth, and the prisoner was mistaken, and then dead.