1. 18
    1. 9

      For more background, I believe this pattern comes from natural deduction. Following natural deduction, every ‘connective’ (i.e. type) usually comes with:

      • formation rules (how the type of a connective is formed)
      • introduction rules (how elements of a type are constructed)
      • elimination rules (how elements of a type are deconstructed)
      • computation rules (how introduction and elimination forms interact)

      Some connectives might not have all of these - for instance the empty type (i.e. never or void) won’t have any introduction rules. This is why you can’t express uninhabited types in languages where there must always be a default value (for example in Go).
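
      As a rough illustration (my own sketch, not from the comment above; the names Shape and Never are made up for the demo), those four kinds of rules map onto an ordinary Rust enum fairly directly, and an uninhabited type is just an enum with no constructors:

      // Formation: `Shape` is now a type we can mention in signatures.
      enum Shape {
          // Introduction: the constructors are the only way to build a `Shape`.
          Circle(f64),
          Square(f64),
      }

      // Elimination: the only way to use a `Shape` is to take it apart.
      fn area(s: Shape) -> f64 {
          match s {
              // Computation: `area(Shape::Circle(1.0))` reduces to this arm.
              Shape::Circle(r) => std::f64::consts::PI * r * r,
              Shape::Square(side) => side * side,
          }
      }

      // No introduction rules at all: there are no values of this type, which is
      // exactly what an "every type has a default value" rule (as in Go) forbids.
      enum Never {}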

      1. 1

        Well stated. To provide entirely too much detail, there’s an instance of Yoneda’s lemma lurking here, and we can prove all of this; a type really is defined (up to unique isomorphism) by the forms which mention it. For example, an initial object, which corresponds to the empty type, has no incoming arrows other than its identity arrow; this corresponds to the fact that an initial object can’t be constructed given a collection of non-initial objects.
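
        A small Rust sketch of that last point (my illustration; `Empty` and `absurd` are just names for the demo): the empty type has a unique arrow out to every type, and nothing other than itself maps into it:

        enum Empty {}

        // The unique arrow out of the initial object: a match with no arms is
        // exhaustive here, because there are no values of `Empty` to cover.
        fn absurd<T>(value: Empty) -> T {
            match value {}
        }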

    2. 4

      This is a very type-theoretic way of looking at types, the sort of thing you do if you want to take the maximalist approach of defining all computation as simply reducing expressions to their normal form. Does this approach actually lead to ergonomic code, though? It was certainly useful for learning about dependent typing as presented in books like The Little Typer, but I’m doubtful that expressing everything about types in terms of intro and elim forms will help you do things with a programming language. Do any extant languages actually do this?

      1. 9

        I would argue that any language in the ML family works this way. When you define a type, the only thing you can do with it is introduce it (via construction) or eliminate it (via pattern matching). In some sense, both ML and Haskell are about reducing expressions to normal forms.

      2. 5

        Yeah, it’s fine I think - you don’t necessarily need to use this as a guide for writing code… more as a way of understanding the deeper things going on with types. For example, you can see pattern matching as a convenient way of building up elimination forms. You may or may not implement it this way (there are pros and cons to elaborating to elimination forms - some languages do it, others don’t), but I do think it’s useful to think in those terms as a language designer.
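
        As a concrete sketch of what “elaborating to elimination forms” can mean (my example; `option_elim` here is a hand-rolled helper, not a std function), a match on Option can be rewritten as a call to an eliminator that takes one argument per constructor:

        // An eliminator for Option: one case per constructor.
        fn option_elim<A, R>(o: Option<A>, on_none: R, on_some: impl FnOnce(A) -> R) -> R {
            match o {
                None => on_none,
                Some(a) => on_some(a),
            }
        }

        fn describe(n: Option<i32>) -> String {
            // The surface syntax would be a `match`; the elaborated form is just
            // a call to the eliminator with the two arms passed as arguments.
            option_elim(n, "nothing".to_string(), |x| format!("got {x}"))
        }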

      3. 4

        Types in Lean often have a method called elim as a primitive pattern matching deconstructor, but it probably wouldn’t be seen in idiomatic code.

        https://leanprover-community.github.io/mathlib_docs/data/option/defs.html

      4. 4

        Sure, an extremely simple example that shows up all over the place are “smart constructors”. In Rust:

        // The field is private, so `introduce` is the only intro form.
        pub struct NonNegativeF32(f32);
        
        impl NonNegativeF32 {
          // `value >= 0.0` (rather than `!(value < 0.0)`) also rejects NaN,
          // which would otherwise slip through and break the invariant below.
          pub fn introduce(value: f32) -> Option<NonNegativeF32> {
            if value >= 0.0 { Some(NonNegativeF32(value)) } else { None }
          }
        
          pub fn eliminate(&self) -> &f32 { &self.0 }
        }
        

        From a perspective outside of this module (and thus one that respects the pub declarations), the intro form ensures that for all x: NonNegativeF32, x.eliminate() >= 0.0. The properties of the type are specified exactly by its intro/elim rules.

      5. 2

        It’s worth noting that normalization by evaluation (NBE) isn’t the only way that types apply to computation. There’s also native type theory: take the values that arise on your concrete machine, collect them into types, and then describe the behaviors of your machine using functions on those types.


    3. 1

      Nice article. Pretty clever C++ example at the end.

    4. 1

      This model doesn’t seem to account for mutable state at all. For instance, the Java interface is treated as an elimination, but it’s a void function: what exactly is it eliminating to? I think noting that the interface doesn’t capture introduction is worthless, because this model doesn’t successfully describe the interface to begin with.

    5. 1

      For example, in Java there’s basically no way to abstract over how a type is created

      interface Foo {
          void foo();
      }
      

      It’s true that there’s no built-in way to abstract over how a type is created in Java, but I don’t think this example explains the problem. Suppose Foo had a method Self foo(); that returned an instance of the same type: you would already need an instance of that type in hand in order to call foo, so you haven’t really restricted the intro form, because some other intro form must have produced that instance in the first place.
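
      To make the contrast concrete, here is that hypothetical sketched in Rust rather than Java (my rendering): a method that returns Self still needs a receiver, so it can never hand you the first value.

      trait Foo {
          // Analogue of `Self foo();`: callable only on a value you already
          // have, so some other intro form must have produced that value.
          fn foo(&self) -> Self;
      }
      
      fn needs_an_instance<T: Foo>(existing: &T) -> T {
          existing.foo()
      }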

      In other languages, you might be able to dispatch on the type directly as part of the “abstraction”. In Rust:

      trait Factory {
          // No `self` parameter: this is an intro form attached to the trait.
          fn new() -> Self;
      }
      
      struct Foo;
      impl Factory for Foo {
          fn new() -> Self {
              Foo
          }
      }
      
      fn use_foo<F: Factory>() {
          // Dispatches on the type parameter alone; no existing instance is needed.
          F::new();
      }
      

      But this isn’t really a matter of abstracting over intro forms so much as dispatching on the type alone, which isn’t possible in languages without type-directed compilation (Java, OCaml, etc.), and which is orthogonal to the issue of controlling the intro form.

      The footnote remarks that TypeScript lets you abstract over the type by adding new to the interface, but if you’re allowed to introduce values into the construction process as proxies for types, then you can certainly do the same thing in Java, e.g.

      interface Factory<T> {
          T create();
      }
      
      class Foo {
          // The constructor is private, so factory() is the only way for
          // outside code to obtain a means of constructing a Foo.
          private Foo() {}
          public static Factory<Foo> factory() {
              return new Factory<Foo>() {
                  public Foo create() {
                      return new Foo();
                  }
              };
          }
      }
      

      And now you can create an instance of a “constructor” of Foo while still strictly controlling the intro form.


      As has been noted elsewhere in the thread, intro forms can be interpreted from the proofs-as-programs viewpoint: when the intro form is strictly controlled, you can establish that an arbitrary runtime invariant has been satisfied as part of the creation of that value. Having a value of that type is proof that the invariant was upheld. This means that you can verify properties about the value in question, such as that a number is non-negative, or that a list is non-empty, or that an index into a list is a valid index for that list. You can go farther and use values as capabilities to represent invariants outside of the value in question.
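
      For instance (a sketch along the same lines as the smart-constructor comment above; NonEmptyVec is a made-up name), a non-empty list type whose only intro form checks the invariant lets every elim form rely on it:

      pub struct NonEmptyVec<T>(Vec<T>);
      
      impl<T> NonEmptyVec<T> {
          // The only intro form: it refuses to build the value unless the
          // "at least one element" invariant actually holds.
          pub fn new(items: Vec<T>) -> Option<NonEmptyVec<T>> {
              if items.is_empty() { None } else { Some(NonEmptyVec(items)) }
          }
      
          // Because construction upheld the invariant, this can never panic:
          // index 0 always exists.
          pub fn first(&self) -> &T {
              &self.0[0]
          }
      }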

      Having constructed a value of a certain type means that an invariant has been true at some point in the past. For data structures where the entire invariant in question is held inside the value, you can continue to uphold the invariant by exposing only a safe interface to interact with that value. In a language like Rust, you can bound the lifetime of the value to ensure that an invariant is true for the entire lifetime of that value, not just at one point in the past, which is pretty nice in practice. For example, holding a MutexGuard means that the corresponding Mutex has been locked and you have exclusive access to it (or, more mundanely, holding a reference to a value means that the value is valid).
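
      Concretely, with the standard library Mutex (ordinary std::sync API, nothing invented):

      use std::sync::Mutex;
      
      fn increment(counter: &Mutex<u64>) {
          // Holding `guard` is the proof that the mutex is locked; the
          // "exclusive access" invariant lasts exactly as long as the guard lives.
          let mut guard = counter.lock().unwrap();
          *guard += 1;
          // The lock is released here, when `guard` is dropped.
      }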


      If you want to get very into the weeds, the idea that all types are defined by introduction and elimination forms assumes you’re approaching type theory from an operational semantics viewpoint.

      The links might be a little misleading: the linked articles on operational and denotational semantics mostly cover runtime semantics rather than the semantics of the terms in the type system. You can probably speak of the denotational semantics of a type (for instance, taking the derivative of a type), but I don’t think those articles will help you get there.