I am interested not just in what this toy theory reproduces, but in what it lacks. Without the Kochen-Specker theorem, there is no Free Will Theorem. Additionally, the theory works with permutations of data, which lends it reversibility. I suspect these are connected: the fact that contextuality requires some sort of question-and-answer conversation with the particles being measured is related to the fact that we cannot reverse real-world processes in the laboratory.

There are lots of fun small observations, too. For example, the author notes that pairs of classical bits are used to implement the data for qubits, which is analogous to the Chu construction for linear logic, where one reversible datum is implemented as two data: one traveling “forward,” or “truthward,” and one traveling “backward,” or “contradictionward.”
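To make the bit-pair picture concrete, here is a minimal sketch (my own illustration, not code from the paper) of an elementary toy-theory system: its ontic state is a pair of classical bits, and an epistemic state of maximal knowledge answers exactly one binary question about that pair, leaving the other bit unknown. The function name and question choice are assumptions for illustration.

```python
from itertools import product

# The four ontic states of one elementary system, encoded as bit pairs.
ONTIC_STATES = list(product((0, 1), repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def epistemic_state(first_bit):
    """A maximal-knowledge epistemic state: we know the answer to one
    binary question (here, 'what is the first bit?') and nothing more."""
    return [s for s in ONTIC_STATES if s[0] == first_bit]

# Knowing the first bit still leaves two ontic states compatible:
print(epistemic_state(0))  # [(0, 0), (0, 1)]
```

Each of the two bits plays a role loosely analogous to one leg of the Chu pair: fixing one leg constrains, but never determines, the whole datum.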

The central mystery of the paper is a deep and good one: what is it about reality that we will never be able to observe directly, given this essential epistemic incompleteness of qubits? Does it have term-rewriting co-hygiene over some sort of rewriting grammar? Does it implement gravity?

To add something not present in the paper, let’s quickly analyze Newcomb’s Paradox in this setting. I am fond of a recent realization: the Free Will Theorem demolishes Newcomb’s Paradox by, in a nutshell, forcing the Predictor to predict the outcome of a coin flip, while guaranteeing that the flip cannot be predicted because the coin exercises some Free Will. Our toy theory lacks the Free Will Theorem, so that route is closed. Instead, note that we only ever manipulate epistemic states, while the Predictor’s claimed knowledge is of ontic states, which the theory’s rules forbid. So, just like with the flipped coin, the Predictor can never have better than a 50/50 chance of guessing which ontic state underlies a given epistemic state, which is a total defeat.
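The 50/50 claim can be sanity-checked with a small Monte Carlo sketch, assuming a Spekkens-style elementary system: four ontic states, with a maximal epistemic state picking out two of them. Since the theory bars the Predictor from ontic access, its best strategy is some choice from the epistemic state’s support, and its success rate cannot rise above one half. The names and the uniform-choice strategy here are my assumptions for illustration.

```python
import random

# Assumption: one elementary system with ontic states {1, 2, 3, 4};
# a maximal epistemic state is a two-element subset of them.
EPISTEMIC_STATE = (1, 2)  # "the ontic state is 1 or 2" is all anyone knows

def predictor_guess(epistemic_state):
    # Lacking ontic access, the Predictor can do no better than pick
    # from the support of the epistemic state.
    return random.choice(epistemic_state)

trials = 100_000
hits = 0
for _ in range(trials):
    ontic = random.choice(EPISTEMIC_STATE)  # nature samples uniformly
    hits += (predictor_guess(EPISTEMIC_STATE) == ontic)

print(f"Predictor success rate: {hits / trials:.3f}")  # hovers near 0.5
```

Any deterministic guessing strategy fares the same: against a uniform distribution over the two compatible ontic states, every strategy lands at exactly one half in expectation.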
