1. 15

  2. 12

    So there are like these two schools of thought, the Turing School of Thought that says: “Well the computer is just a device that has like a sort of like a printing head and the reader and so on, and it just moves the tape and infinite tape and moves them.” So this is a very mechanical kind of approach. And then there is this very abstract approaching mathematics.

    Oh come on now. Saying the schools of thought are “Turing machines vs mathematics” is silly. Automata theory and formal languages are both topics in math, just as category theory is.

    But things in imperative programming are not very well defined. It’s like they don’t really have this nice mathematical structure.

    This is also silly. Dynamic logic is part of math. Temporal logic is part of math. Most formally-verified production code uses some form of state machine formalism. State machines are also math.

    Anyways if I call something, you know like in object oriented programming, if I call something an object that’s so meaningful, right? Just try to define what you mean by an object in a programming language. Just because you took a word from English language that everybody thinks they understand, that doesn’t mean that an object in C++ for instance, is immediately obvious.

    That’s actually often better for beginners. The technically complete answer can wait until they’ve built a mental model.

    For a programmer for instance? Like it doesn’t make sense to learn category theory. Will it make you a better programmer? I think it will. It will sort of make you a higher level programmer. You know, it’s like being able to lift yourself above your program. It’s like otherwise you, you know, you’re just like a little ant working in an anthill, right? And the only things you see are the things that are close around you, right? It’s like your never able to like lift yourself above the anthill and see how it’s related to the rest of the world and so on.

    Honestly this is what bothers me most about this kind of advocacy. Sure, CT might make you a better programmer… but so do a lot of things. Juggling makes me a better programmer. I can list, off the top of my head, five or six programming lessons I learned from juggling. But I’m not applying juggling techniques when I program. I don’t think anybody who knows CT is applying CT techniques when they program.

    When we equate CT with math, we’re saying “learning math makes you a better programmer because it changes how you think”. And sure, that happens. But there’s also a lot of math that’s not CT that, in addition to making you think differently, also gives you powerful techniques to deal with code. Some I’ve run into:

    • First-order logic makes it easy to express complex specifications at the design level
    • Breaking tests into equivalence classes improves your test coverage
    • Statistics help you analyze metrics and log data
    • Combinatorics helps you estimate state-space size
    • Knowing BNF lets you read context-free grammars

    Those are just the ones I’ve run into directly, where I had the relevant skills. I’ve also smashed into probability and graph problems where I knew the right math would have made things much easier, but I didn’t know it. You can learn mathematical thinking and build directly useful skills. So why do we keep talking about math as if it’s just CT?
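    To make the equivalence-class point concrete, here’s a minimal sketch in Python. The `shipping_cost` function and its pricing rule are invented for illustration; the point is that you test one representative per class plus the boundaries, instead of random inputs:

    ```python
    # Sketch: equivalence-class partitioning for a hypothetical shipping_cost().
    # Assumed pricing rule (for illustration only): <= 1 kg is a flat $5,
    # 1-10 kg is $5 plus $1 per kg over 1, anything else is rejected.

    def shipping_cost(weight_kg: float) -> float:
        if weight_kg <= 0 or weight_kg > 10:
            raise ValueError("weight out of range")
        if weight_kg <= 1:
            return 5.0
        return 5.0 + (weight_kg - 1) * 1.0

    # One representative per equivalence class, plus the boundary values:
    cases = {
        0.5: 5.0,    # class: light parcel
        1.0: 5.0,    # boundary: exactly 1 kg
        5.0: 9.0,    # class: mid-range parcel
        10.0: 14.0,  # boundary: heaviest accepted parcel
    }
    for weight, expected in cases.items():
        assert shipping_cost(weight) == expected

    for bad in (0.0, -2.0, 10.5):  # class: rejected weights
        try:
            shipping_cost(bad)
            assert False, "should have raised"
        except ValueError:
            pass
    ```

    Four classes and their boundaries cover the whole input space with seven cases; no amount of random sampling gives you that guarantee.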

    Oh, and the first example of the benefit of CT? “using Semigroups to represent the general notion of collapsing two entities into one”? You don’t need CT for that. That’s first-semester abstract algebra.
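    The semigroup point is easy to make concrete without any CT vocabulary. A minimal Python sketch, assuming nothing beyond “a set with an associative combine operation”:

    ```python
    from functools import reduce

    # A semigroup is a set with an associative binary operation.
    # "Collapsing two entities into one" then generalizes for free to
    # collapsing any non-empty sequence, because associativity makes
    # the grouping irrelevant.

    def combine_all(combine, items):
        """Fold a non-empty sequence with an associative combine."""
        return reduce(combine, items)

    # The same helper works for any semigroup instance:
    assert combine_all(lambda a, b: a + b, [1, 2, 3, 4]) == 10                  # sum
    assert combine_all(max, [3, 1, 4, 1, 5]) == 5                               # max
    assert combine_all(lambda a, b: a + b, ["ab", "cd", "ef"]) == "abcdef"      # concat
    assert combine_all(lambda a, b: {**a, **b}, [{"x": 1}, {"y": 2}]) == {"x": 1, "y": 2}  # merge
    ```

    Nothing here needs category theory; it’s exactly the first-semester abstract algebra observation that one associative operation covers all these cases.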

    1. 4

      I liked your post even though I somewhat disagree with it. I don’t know too much about CT, but it does seem like CT is one level of abstraction above all those other fields of mathematics you mentioned. If your fields are mathematics libraries with classes in them, CT is a library of interfaces. CT is the study of things that compose in myriad ways, and I think that does qualify it as a candidate for the calculus of software engineering.

      Just a side note: I think it’s cruel to quote a non-native speaker’s transcript; it makes them sound silly :) When you listen to Bartosz, you don’t mind all the “like”s, and he’s very careful in writing, so his written English is flawless. These quotes don’t do him justice. This isn’t a criticism of you, BTW — you obviously can’t quote sound in text. Just an observation.

      1. 2

        Agreed on CT. People communicate better when they share models. CT, even if never explicitly used, provides a robust shared model for software design. You know you have that shared understanding when people bring up an instance of something that is, say, monoidal, and everyone can call to mind other instances and see the connection.

        1. 1

          We (the SE world) did attempt pattern languages before. Do you think that helped?

          1. 1

            Yes, but less than it might have. Our discipline doesn’t do as well as it could with cross-generational knowledge transfer. A lot of this is due to rapid growth and fragmentation of education.

        2. 1

          Just a side note; I think it’s cruel to quote a non-native speaker’s transcript, it makes them sound silly :)

          To be honest I don’t think anybody, ever, has sounded good in a transcript :P

          1. 2

            I think Carmack is one exception. He speaks as well as many people write.

        3. 3

          But I’m not applying juggling techniques when I program. I don’t think anybody who knows CT is applying CT techniques when they program.

          I disagree; many CT techniques are directly applicable to programming. Functors, applicative/monoidal functors, monads, comonads, &c are the standard examples. My personal favorite is the categorical semantics of the simply-typed lambda calculus, which I use when writing compilers; it corresponds pretty directly to Oleg’s “final style”. But there are a few caveats.

          First, you don’t have to learn CT concepts from a math perspective; you can learn them from a programming perspective. I’ve gone both ways; I learned Haskell functors before category-theory ones, but denotational semantics before final style. Nor are these ideas necessary to write any particular program.

          Second, some of these techniques can be pretty niche (although not all; functors are everywhere). Denotational semantics, for example, is probably only useful if you’re implementing your own language or DSL. That said, many problems can be fruitfully viewed from this angle if you try. Conal Elliott, for example, has made a cottage industry out of applying simple category theory & denotational semantics to all sorts of problems, from animation to compilation to neural nets to Fourier transforms.
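          For readers who haven’t seen “final style”: the idea can be sketched even in Python. A term is written once against an abstract algebra of operations, and each interpreter is just a different algebra. (This is only an illustrative toy — Oleg’s tagless-final work is in typed languages like OCaml and Haskell, where the types do real work.)

          ```python
          # Tagless-final sketch: an expression is parameterized by the algebra
          # that interprets it, so adding a new interpreter requires no change
          # to existing terms.

          class EvalAlg:
              """Denotation: expressions mean integers."""
              def lit(self, n): return n
              def add(self, x, y): return x + y

          class ShowAlg:
              """Denotation: expressions mean strings."""
              def lit(self, n): return str(n)
              def add(self, x, y): return f"({x} + {y})"

          # A term is written once, against an abstract algebra A:
          def example(A):
              return A.add(A.lit(1), A.add(A.lit(2), A.lit(3)))

          assert example(EvalAlg()) == 6
          assert example(ShowAlg()) == "(1 + (2 + 3))"
          ```

          Each algebra assigns the term a meaning in some domain — which is denotational semantics in miniature.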

          But there’s also a lot of math that’s not CT that, in addition to making you think differently, also gives you powerful techniques to deal with code.

          This is certainly true!

          However, calling any particular piece of math “not CT” is dangerous; almost any math can be viewed from a categorical lens. I think this is part of why category theory can feel so evangelical: for some people, the categorical lens just clicks really well. They apply it to everything, and because it works so well for them, they promote it as “the key” to understanding math and programming (and thought and…).

          (The way category theory “just clicks” for some people reminds me of how some functional programmers get annoyed that Leibniz and big-O notation aren’t clear about variable binding structure and wish everybody just used λ-notation.)

          But the reverse is true as well! Almost any use of a CT idea can be viewed from a non-categorical angle. Category theory is relatively recent mathematically; one of the things that mathematicians like about it is that it gives a unifying framework for understanding many patterns that cropped up in earlier mathematical work. But just as these earlier results were discovered without CT, they can be understood without it; there are relatively few results which were discovered only by categorical means.

          So it’s hard to give concrete examples of the “advantages” of using CT, because any concrete example is subject to quibbling about whether it’s really category theory or not.

          1. 1

            Oh come on now. Saying the schools of thought are “Turing machines vs mathematics” is silly. Automata theory and formal languages are both topics in math, just as category theory is.

            To reiterate your point, Turing was familiar with set theory. The celebrated halting problem was inspired by Cantor’s diagonal argument.

            Also, Turing’s oracle machines connect to both first-order arithmetic and set theory. The connection to first-order arithmetic is via the Kleene–Mostowski hierarchy; the connection to set theory is via the Borel hierarchy. These connections are the topic of descriptive set theory.
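            The diagonal argument itself is small enough to sketch in code: given any enumeration of binary sequences (finite here, for runnability), flipping the diagonal produces a sequence that differs from every listed one.

            ```python
            # Diagonalization sketch: enumerate binary sequences as lists of 0/1.
            # The diagonal sequence d, with d[i] = 1 - seqs[i][i], differs from
            # the i-th sequence at position i, so it appears nowhere in the list.
            # (The halting problem replaces "sequences" with "programs" and flips
            # "halts when run on its own source" instead of a bit.)

            def diagonalize(seqs):
                return [1 - seqs[i][i] for i in range(len(seqs))]

            seqs = [
                [0, 0, 0, 0],
                [1, 1, 1, 1],
                [0, 1, 0, 1],
                [1, 0, 0, 1],
            ]
            d = diagonalize(seqs)
            assert d == [1, 0, 1, 0]
            assert all(d != s for s in seqs)
            ```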

            So why do we keep talking about math as if it’s just CT?

            Category theorists have long hoped for CT to be the foundation of mathematics. Paul Taylor and William Lawvere have advocated this point of view extensively.

            FWIW, in my amateur work in formally verifying my proofs, I’ve never needed CT. I just used Isabelle/HOL to formalize sentential logic and measure theory. I’ve occasionally needed to reach for Zorn’s lemma.

            In my professional work in Haskell, I try to avoid category theory. This is because it makes the code hard to read.

            Sometimes I explore CT in Haskell. I don’t often find conventional CT helpful. In conventional CT, categories are enriched in Set; in Haskell, it’s better to think of instances of Category as enriched in Haskell itself.

            In addition, textbook CT constructions often fail to carry over directly.

            For instance, I have explored implementing exponentials in the category of functors in Haskell. An approach I’ve seen cited uses this construction from MathOverflow. AFAIK Phil Freedman’s implementation is the only one that works in Haskell.

          2. 1

            I respect Bartosz’s work, but I think that Wadler is right on this one. Sure, mathematics is a set of models, but the models have an uncanny correspondence to phenomena we can measure independently of particular observers. The discovery of complex numbers well before their application to electromagnetism and fluid dynamics is one example; the discovery of non-Euclidean geometries before the realization that they describe space-time in relativity is another.

            More on this idea: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences (I also like Max Tegmark’s take on it.)

            1. 0

              He is just trying to say that he believes in God.

              God := Omnipotent entity

              This is isomorphic to being able to prove False or finding an inhabitant of Void. Basically, the fundamental assumption of logic is that there is no God. Bartosz believes in God, and so he thinks that category theory (or any form of logic) will never be able to describe the world — because God can make that description of the world meaningless on a whim: “oh, you proved this can’t exist? Well, *snaps fingers* now it does.”

              I tend to think of mathematics as this epic experiment to prove the existence of God by showing that logic can’t describe reality. (edit: Either that or we actually explain the whole world using logic, which would be… I don’t even know how to think about that kind of world)