1. 77
  1.  

  2. 17

    Very insightful. I do find type level programming in Haskell (and, to a lesser extent, Rust) to be a confusing nightmare. Nonetheless, Rust could not exist without traits (i.e. without bounded polymorphism). The Sync and Send marker traits (combined with borrow checking) are the basis of thread safety.
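    To make that last point concrete, here is a minimal sketch (not from the original comment): `std::thread::spawn` requires its closure, and everything it captures, to be `Send`, which is how the marker traits turn thread-safety violations into compile-time errors.

    ```rust
    use std::thread;

    fn main() {
        // Vec<i32> is Send, so it may be moved into another thread.
        // Capturing a non-Send type (e.g. Rc<i32>) here would be a compile error.
        let data = vec![1, 2, 3];
        let handle = thread::spawn(move || data.iter().sum::<i32>());
        println!("{}", handle.join().unwrap()); // prints 6
    }
    ```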

    I think Zig takes an interesting approach here with its compile-time programming (i.e. type level programming with the same syntax and semantics as normal programming), but it suffers from the same typing issues as C++ templates: types are only checked after use and monomorphization. Rust’s bounded polymorphism can be, and is, type-checked before monomorphization, so you know whether there are type errors in general. In Zig (and C++), you only know whether there are type errors with a particular type, and only after using a function (template) on that type.
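    As a small illustrative sketch of the contrast (my example, not the commenter’s): the body of a bounded generic function in Rust is checked once against its trait bounds, at definition time, before any caller instantiates it.

    ```rust
    use std::fmt::Display;

    // Checked against the bound `T: Display` at definition time, before
    // monomorphization. Calling a method the bound doesn't guarantee would
    // be rejected here even if `describe` were never instantiated anywhere.
    fn describe<T: Display>(x: T) -> String {
        format!("value: {}", x)
    }

    fn main() {
        println!("{}", describe(42));   // prints "value: 42"
        println!("{}", describe("hi")); // prints "value: hi"
    }
    ```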

    I think there’s room for an approach that’s more like Zig’s, but with sound polymorphic typing, using dependent type theory. Coq, Agda, and Idris include type classes (aka implicits, bounded polymorphism), but it doesn’t seem like type classes should be necessary in a dependently typed language. In particular, it doesn’t seem like they should provide any increase in expressiveness, though perhaps they reduce verbosity.

    1. 5

      Fwiw, even in Haskell you only really need one extension to obviate type classes in terms of “expressiveness,” namely RankNTypes. See https://www.haskellforall.com/2012/05/scrap-your-type-classes.html

      …though it doesn’t solve the verbosity issues. But I suspect that a language with better support for records might make this a pretty good solution (I have a side project where I am working on such a language).

      1. 2

        RankNTypes is my top pick for something to add to Haskell. However, for common cases, type classes have the advantage of decidable inference.

        1. 3

          Note that in the context of replacing type classes, the usual decidability problem with inference doesn’t really come up, because either way the higher rank types only show up in type definitions. E.g.

          class Functor f where
              fmap :: (a -> b) -> f a -> f b
          

          vs.

          data Functor' f = Functor'
              { fmap' :: forall a b. (a -> b) -> f a -> f b
              }
          

          In the latter case, the problems with inference don’t come up, because the higher-rank quantifier is “hidden” behind the data constructor, so normal HM type inference can look at a call to fmap' and correctly infer that its argument needs to be a Functor' f, which it can treat opaquely, not worrying about the quantifier.

          You can often make typechecking advanced features like this easier by “cheating” and either hiding them behind a nominal type, or bundling them with other features as a special case.

          (N.B. I should say that for just Functor' you only need Rank2Types, which actually is decidable anyway – but I don’t think GHC actually decides it in practice, so it’s kind of a nitpick.)

          Of course, this is all about type inference, whereas type classes are really more about inferring the values, which, as I said, means this doesn’t solve the verbosity issues.
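          The same trade-off shows up outside Haskell. As a rough sketch in Rust terms (the names here are made up for illustration): an explicit dictionary is just a record of functions passed by hand, which costs verbosity at each call site and gives up global coherence.

          ```rust
          // A hand-rolled "dictionary", playing the role a trait (or the
          // Functor' record above) would otherwise play. Nothing prevents two
          // different ShowDict<i32> values -- there is no global coherence.
          struct ShowDict<T> {
              show: fn(&T) -> String,
          }

          fn describe_with<T>(dict: &ShowDict<T>, x: &T) -> String {
              (dict.show)(x)
          }

          fn main() {
              // The dictionary is passed explicitly at each call site --
              // the verbosity cost mentioned above.
              let int_show = ShowDict::<i32> { show: |n| format!("int {}", n) };
              println!("{}", describe_with(&int_show, &7)); // prints "int 7"
          }
          ```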

      2. 5

        Type classes aren’t just about verbosity, global coherence is a very important aspect. Any decision on whether to use a type class vs. explicit dictionary passing needs to consider the implications of global coherence. I think Type Classes vs. the World is a must-watch in order to engage in productive discussion about type classes.

      3. 14

        This is gold, bookmarked! I have been trying to articulate a bunch of these tradeoffs, and apparently not being very convincing.

        Another big one which I don’t see mentioned – from a language design point of view – is the bifurcation of the language, with the compile time language starting out weaker, and gradually falling down a slippery slope (e.g. what happened to C++).

        For example, Rust appears to be following in C++’s path – gradually opening up more of the language to constexpr. It appears that a huge part, or even a majority, of C++ 11, 14, 17, and 20 actually relates to increasing the scope of the compile-time language (constexpr), not the runtime language.

        Rust is in the earlier stages of this process, i.e. I expect const contexts to get more powerful as people run into its limitations.
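        As a small sketch of what that compile-time subset currently looks like (my example, and the exact restrictions shift from release to release): `const fn` bodies can use loops and arithmetic, but heap allocation and most of the standard library remain off-limits in const contexts.

        ```rust
        // Runs entirely at compile time when used in a const context.
        const fn fib(n: u32) -> u64 {
            // `while` in const fn is allowed (stable since Rust 1.46);
            // allocation, trait-object calls, etc. still are not.
            let mut a: u64 = 0;
            let mut b: u64 = 1;
            let mut i = 0;
            while i < n {
                let next = a + b;
                a = b;
                b = next;
                i += 1;
            }
            a
        }

        const FIB10: u64 = fib(10); // evaluated by the compiler

        fn main() {
            println!("{}", FIB10); // prints 55
        }
        ```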

        I think Zig got it right with full compile-time metaprogramming from the beginning – it’s one language, not two (or three, as Rust has multiple kinds of compile-time metaprogramming).


        I’m not necessarily saying this is bad, but it’s certainly more complex.

        related threads:

        https://news.ycombinator.com/item?id=26378141 and https://lobste.rs/s/lng3c0/announcing_rust_1_50_0 (Rust following C++)

        https://lobste.rs/s/9rrxbh/on_types#c_qanywm and https://news.ycombinator.com/item?id=26378696 (Type checking vs. metaprogramming)

        https://news.ycombinator.com/item?id=27026990 (debugging and profiling macros, Andy Kelley chimes in on the “1000 backward branches” rule in Zig)

        https://news.ycombinator.com/item?id=21310029 (MyPy inhibits metaprogramming; it’s basically a different language than Python. I use it in Oil for predictable speed via translation to C++, not really for correctness)


        (This issue is related to the compile-time debugging issue: because you have two languages, you essentially need different debuggers for compile time and for runtime. This isn’t theoretical; plenty of programmers avoid C++ template metaprogramming simply because they do not know how to debug it with the only tool they have available: the compiler’s error messages. Compile-time asserts have helped, but I think those didn’t even exist when template metaprogramming became a thing – there was a trick to roll your own!)

        Also, Syme is making a “slippery slope” argument here too … I think this is an issue that’s fundamental to computation itself. Maybe it’s a “Turing tarpit”, although I’m not sure that’s the exact concept and/or the best name. I’ll have to think about it.

        1. 12

          I don’t want F# to be the kind of language where the most empowered person in the room is the category theorist.

          Which sort of mathematician should be most empowered by F♯? The discrete mathematician and set theorist working in combinatorics? The computer scientist working in data structures and algorithms? The data scientist working in linear algebra and topology?

          It’s like they are prioritising type-level programming over actual programming.

          Ah, yes, “actual programming”, the thing done by “real programmers”.

          1. 17

            The tools used by industry and research generally are different in most disciplines. Researchers in concrete physics generally don’t own any 500 ton rock crushers or building-sized kilns, and industrial creators and users of concrete often don’t have any electron microscopes. Not even one!

            1. 14

              It doesn’t matter what the most empowered mathematician is, as long as it doesn’t come at the expense of empowering the enterprise programmer.

              1. 12

                I agree except I would leave off “enterprise”:

                It doesn’t matter what the most empowered mathematician is, as long as it doesn’t come at the expense of empowering the programmer.

                I use math when it supports working software, which happens all the time. Parsing is math; algebraic data types are math, graphics and audio use tons of math, statistics is math, etc.

                Type level programming can be useful, but in the extreme it suffers from “the map is not the territory” problem. Types are a map (model) for runtime behavior, and nothing more than that. The tail shouldn’t wag the dog, etc.

            2. 7

              Working in a mobile-and-otherwise dev company with many projects and teams, the anecdote about Swift protocol-based code rings very true. Many enthusiastic Swift developers are tempted to use the most exotic corners of the language to accomplish anything. I’ve observed the same happening in TypeScript. It’s a struggle to get people to consider all their options and pick the simplest one (“Could this just be a function?”). Those interesting features exist to solve the problems that plain functions and objects don’t already solve just as well.

              There was once a WWDC talk that included the advice, “start with a protocol.” Later, when it was clear that advice had created a mess, there was another WWDC talk exhorting Swift developers not to take that line so literally.

              1. 2

                Yeah, I was coming here to say the same thing. Coming from a recently-mostly-Swift perspective, the part about Swift sounded familiar. I don’t think protocols are quite a feature of last resort, but most of the time you get by just fine without them, and at the same time you get to not worry about a bunch of surprises and design traps.

                1. 3

                  Protocols are indeed useful at least as frequently as interfaces in regular OO languages. Just, not every function needs to be in a protocol extension, or a system framework type extension for that matter. Often all that does is change which argument gets to be self. Many discussions in our Slack have begun with a question about how to handle some difficult corner of protocols with associated types, and we’ve then discovered it wasn’t necessary to go down the road that led there. I sometimes challenge people to make it a free function, get it working, and only then determine which type would most idiomatically contain it as a method.

                  1. 1

                    I often find this rule useful: a protocol is only needed if you have two things that want to conform to it.

                    Swift tries to orient you toward struct / protocol based programming, but mostly it is still quite OO, and thinking through a protocol / struct lens can be counter-productive.

                    At the end of the day, if you need to use an open class with generics, just go for it, write it down, and if you are not happy, you can always improve from there with fancier things (protocols, PATs, property wrappers, function builders …).

              2. 7

                You can have type level programming without distinguishing between type level programming and value level programming. Dependent types achieve this by making types and values live in the same namespace.

                1. 1

                  Do you have any favorite examples?

                  1. 6

                    Dependent types make stuff like type families, GADTs and kinds obsolete, which are all special-cased in Haskell today.

                    So since we’re talking about removing code, it is a bit weird to give an example. But really, just take any arbitrary type family, and that can serve as an example.

                    A function like this has to be written using a type family or GADT in Haskell:

                    the (Bool -> Type) (\x => if x then Float else Int)
                    

                    (this is Idris syntax)

                    1. 2

                      Does that mean Zig has some form of dependent types?

                      fn the(x: bool) type {
                          return if (x) f64 else i64;
                      }
                      
                      pub fn main() void {
                          @import("std").debug.warn("wee " ++ @typeName(the(true)));
                      }
                      
                      1. 4

                        I would say that Zig does have dependent types, but its dependent type system is “unsound”, in the sense that a function call can type-check in one use case but fail type checking in another, because types are only checked after use and monomorphization. It’s arguably better than not having them at all, though, and I think that’s borne out in practice.

                        It certainly limits the value of dependent types, though. You can’t reliably use Zig’s type system as a proof assistant in the way that you can use Coq’s, Agda’s, or Idris’s.

                  2. 7

                    It’s like they are prioritising type-level programming over actual programming. Indeed I believe psychologically this is what is happening - people believe that programming in the pure and beautiful world of types is more important than programming in the actual world of data and information.

                    This is so good. It’s exactly how I was 5–10 years ago. The whole post is gold and puts into words things that I’ve felt since recovering from my Haskellitis, but that I’ve not been able to articulate.

                    1. 6

                      that was a really good explanation of the potential downsides to adding more type-level features; i hope he writes it up as a blog post or something at some point, rather than leaving it buried in a github issue comment.

                      along similar lines, we recently added a typing faq to pytype’s documentation, because there are always people who think “hey, if you’re adding types to python, why not add these features from my favourite typed language!” (usually ML or haskell), and then act like not doing so is a bug or missing feature.

                      1. 2

                        But I really like my statically-typed heterogeneous lists. :’-(