I find the omitted lines of thought as interesting as the included ones.
I’ll point out that my personal belief is that reality is hypothetical, and that in order to formulate that hypothesis, mathematical objects and logic are needed; thus, maths is more fundamental than reality. Further, logic is metaphysics and abstract algebra is ontology.
I wonder what the author would think of the Armchair Knowledge Problem. It is possible to sit in quiet self-reflection and, as a result, become convinced of mathematical truths that were not previously known to be true. How? Sociology provides a hint with the concepts of parochiality and redundancy. Many different mathematicians study many different branches of maths, but maths is largely self-compatible, so the social interchange of memes between mathematicians suffices to generate new theorems by synthetic and hybridized arguments. Still, a problem remains: how do we account for mathematicians like Gauss, Euler, Galois, Poincaré, Noether, Einstein, etc. coming up with wholly new perspectives? We can still imagine that they were influenced by their peers, but to what extent does social pressure translate into theorems of maths? Can we get more theorems out of mathematicians just by applying more social demands?
The “philosopher’s take” is quite interestingly wrong. I will gladly admit that I have no idea why cooking rocks ought to lead to computation, and hardware is indeed mysterious. But software is treatable with standard information theory, starting from bits and building up to formal abstract machines, so there is no category mistake. Perhaps the mistake is in imagining that we can “formally verify a program”; what we actually verify is a property of the program, relative to some axioms and context. I don’t have access to the cited works, but the argument seems dismissible on a surface-level reading. The cutting-edge ontology for correctness of formal proofs is based on weird machines and correctness of embedding. The bugs a compiler can have are precisely the extra functionality introduced by a weird machine; the user’s buggy code provokes the compiler into weird states that are not in the original specification.
The “sociologist’s take” is specifically wrong, and not many lessons can be learned from it. The author simply hasn’t run across something like this table of type and category theories, which shows that the claimed lone correspondence is actually just one element of a much bigger connection. Indeed, it is routine for category theorists to demonstrate the whole connection by first defining a category, logic, or language, and then completing the correspondence. As an example, this paper on relational logic builds the type system and the category, and even suggests a toy syntax.
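To make the correspondence concrete, here is a minimal Python sketch reading propositions as types and proofs as programs; the function names (`proj1`, `modus_ponens`, `weaken`) are my own illustrative choices, not drawn from the cited table or paper:

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Conjunction elimination: a proof of A ∧ B yields a proof of A.
def proj1(p: Tuple[A, B]) -> A:
    return p[0]

# Modus ponens: a proof of A → B applied to a proof of A yields a proof of B.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# Weakening (the K combinator): any proof of A yields a proof of B → A.
def weaken(a: A) -> Callable[[B], A]:
    return lambda _b: a
```

Each definition is total for every choice of A and B, which is what makes it a proof of the corresponding tautology; categorically, `proj1` is a product projection and `modus_ponens` is evaluation for an exponential object, so the same three-way reading the tables describe shows up even in this tiny fragment.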
The “cognitive scientist’s take” feels hopelessly lost. Here’s an aspect of lambda calculus not mentioned in the article. First, the informal version. Suppose that I can build widgets for you, but I need both a left piece and a right piece. If you give me a left piece, and then return later and give me a right piece, then I can build the widget when I get the second piece. Alternatively, you could wait until you have both pieces, and then give me both pieces at once, and I’ll build the widget at once. The formal version:
(A × B) → C ≅ A → (B → C)
Is this physical experience needed to construct the lambda calculus? It’s certainly a physical experience that affirms the lambda calculus, but it equally affirms the underlying Cartesian closed category.
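The widget story is ordinary currying, and the isomorphism can be sketched in a few lines of Python; `build_widget` and the `curry`/`uncurry` helpers are illustrative names of mine, not standard library functions:

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def build_widget(pieces: Tuple[int, int]) -> str:
    """Needs the left and right pieces together: (A × B) → C."""
    left, right = pieces
    return f"widget({left},{right})"

def curry(f: Callable[[Tuple[A, B]], C]) -> Callable[[A], Callable[[B], C]]:
    """One direction of the isomorphism: deliver the pieces one at a time."""
    return lambda a: lambda b: f((a, b))

def uncurry(g: Callable[[A], Callable[[B], C]]) -> Callable[[Tuple[A, B]], C]:
    """The other direction: collect both pieces, then build at once."""
    return lambda pair: g(pair[0])(pair[1])

build_curried = curry(build_widget)
waiting = build_curried(1)             # left piece delivered; widget pending
print(waiting(2))                      # right piece arrives → widget(1,2)
print(uncurry(build_curried)((1, 2)))  # both pieces at once → widget(1,2)
```

`curry` and `uncurry` are mutual inverses, which is exactly the ≅ in the formal version above.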
Finally, what about anthropic bias? We cannot imagine what it is like to think with any hardware besides our own, and this limitation prevents us from conceptualizing some things in a way that leads to recursive unknowability.
Was going to make a joke about Lisp already being alien technology, but the article is actually seriously good.
I don’t know man… It’s already too hard for many humans, including me :x
Suggest cogsci, math