1. 6

This is a description of a puzzle that I think could use a better solution. But I’m missing the pieces.

I think there’s a clear mathematical side to the problem as a whole, but the rest of it is more like producing a typesetting system, or something else that requires creativity. How do you extend the set of rules such that type promotion still forms a lattice, and such that the extensions people can introduce are meaningful for solving their problems?

It would be surprising if someone figured this out right away. It may require that I play with lattices a bit more and actually experiment with them a whole lot. But maybe there are people smarter than me who see a solution to this straight away. You won’t know if you don’t try showing it to them.


  2. 4

    This is a pretty good summary of the problem and the solution space known so far. However, despite mentioning Julia, you didn’t mention the fact that Julia has a user-extensible promotion system!

    It’s a little ad hoc, since it’s built on top of arbitrary parameter combinations plus a fallback case that does the promotion, so users can add rules even where the types don’t match. However, you could look at that community a bit to see what the fallout has been from offering this functionality.
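    To illustrate the shape of that mechanism (this is not Julia’s actual API, just a value-level Haskell sketch; the `Ty` universe and the rules are made up): users state each rule in one argument order, and a fallback tries both orders.

```haskell
import Control.Applicative ((<|>))

-- A made-up universe of types to promote between.
data Ty = IntT | RatT | ComplexT deriving (Eq, Show)

-- Users supply one-directional rules; only one argument order is needed.
promoteRule :: Ty -> Ty -> Maybe Ty
promoteRule IntT RatT     = Just RatT
promoteRule RatT ComplexT = Just ComplexT
promoteRule IntT ComplexT = Just ComplexT
promoteRule _    _        = Nothing

-- The fallback tries both orders, so mismatched pairs still promote.
promoteType :: Ty -> Ty -> Maybe Ty
promoteType a b
  | a == b    = Just a
  | otherwise = promoteRule a b <|> promoteRule b a
```

    So `promoteType RatT IntT` still succeeds, because the fallback finds the rule that was written for the other argument order.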

    As for your unsolved problem about enforcing the lattice: that is indeed a hard problem! I saw these “verified” classes in Idris. The idea is that proofs are members of the class alongside the operations, so you can require proofs of the lattice properties from anyone adding a type. As a dynamic-typing fan myself, I find that approach a little heavy-handed for most users. However, the same idea could be applied more gently. For example, you could provide a library of properties that are readily usable with a “generative testing” framework.
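    For instance, a candidate promotion rule set can be checked against the semilattice laws directly. The tiny `Ty` universe and its `join` below are made up, and the laws are enumerated exhaustively where a generative tester like QuickCheck would sample inputs instead:

```haskell
-- A made-up three-element promotion chain: IntT <= RatT <= ComplexT.
data Ty = IntT | RatT | ComplexT
  deriving (Eq, Show, Enum, Bounded)

-- join is the promotion: the least type containing both arguments.
join :: Ty -> Ty -> Ty
join a b = toEnum (max (fromEnum a) (fromEnum b))

universe :: [Ty]
universe = [minBound .. maxBound]

-- The semilattice laws any extended promotion rule set should satisfy.
commutative, associative, idempotent :: Bool
commutative = and [ join a b == join b a | a <- universe, b <- universe ]
associative = and [ join a (join b c) == join (join a b) c
                  | a <- universe, b <- universe, c <- universe ]
idempotent  = and [ join a a == a | a <- universe ]
```

    A user extension that broke the lattice would fail one of these properties, which is a much friendlier error than a proof obligation.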

    1. 3

      Erm… Sorry to be that guy who keeps hanging on to their “Blub”, but is this something that could be implemented in Haskell? For instance, I think it might be made possible within a type-class by requiring newly created supersets of pre-existing instances to define casts from their immediate subsets. For example, when defining Gaussian integers within Num you’d have to define an instance of the type-casting function that casts from regular integers (and possibly a previously defined subset of imaginary integers), appending an imaginary part of 0 to construct a Gaussian pair. This would keep the implementation cost to a minimum while allowing any two instances to be mixed and matched using a polymorphic type-casting (instance-casting?) function special to the type-class.

      Just a thought, not sure whether it can actually be implemented, especially since in cases where one type is not a strict superset of the other, the type system would have to automagically figure out the smallest superset that contains two instances—the promote() function as found in the article’s pseudocode. The type hierarchy might have to be hard-coded in order to attain that capability.

      In short: I think this might be possible in Haskell, since there exist type-casts that might be made polymorphic in both domain and range. However, finding which type to promote to may or may not be possible within the Haskell type system.
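      For what it’s worth, here is a rough GHC sketch of that idea; `Cast`, `Join`, `AddSelf`, and the `Gaussian` type are all names I’m making up, and the hierarchy is indeed hard-coded, in a closed type family:

```haskell
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             FlexibleInstances, FlexibleContexts #-}

-- A made-up Gaussian integer type.
data Gaussian = Gaussian Integer Integer deriving (Eq, Show)

-- Defining a new superset obliges you to define the cast
-- from its immediate subsets.
class Cast a b where
  cast :: a -> b

instance Cast Integer  Integer  where cast = id
instance Cast Gaussian Gaussian where cast = id
instance Cast Integer  Gaussian where cast n = Gaussian n 0

-- The hard-coded hierarchy: the smallest common superset,
-- i.e. the promote() from the article's pseudocode.
type family Join a b where
  Join Integer  Integer  = Integer
  Join Integer  Gaussian = Gaussian
  Join Gaussian Integer  = Gaussian
  Join Gaussian Gaussian = Gaussian

class AddSelf c where addSelf :: c -> c -> c
instance AddSelf Integer  where addSelf = (+)
instance AddSelf Gaussian where
  addSelf (Gaussian a b) (Gaussian c d) = Gaussian (a + c) (b + d)

-- Mixed-mode addition: promote both sides to the join, then add.
plus :: (Cast a (Join a b), Cast b (Join a b), AddSelf (Join a b))
     => a -> b -> Join a b
plus x y = addSelf (cast x) (cast y)
```

      With this, `plus (3 :: Integer) (Gaussian 1 2)` promotes the integer and yields a `Gaussian`; but every pair of types needs a `Join` entry, which is exactly the hard-coding you predicted.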

      1. 2

        It looks to me like your promotion is subtyping - the relation between fractional and complex is that fractional is a subtype of complex, in the Liskov sense. So what you’re looking for is the ability to add new supertypes to existing types.

        The Haskell answer is that you write everything in terms of typeclasses rather than directly, so your “fraction” was never really a fraction, rather it’s an a :: FromFractionalLiteral a or the like, and your bidirectional type inference eventually concludes that it always needed to be a Complex and was actually a Complex the whole way through.
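        Concretely, with base’s `Data.Complex` (the binding name `z` is just for illustration): the literal never commits to being a fraction, and unification fixes it at `Complex Double` once it meets an imaginary value.

```haskell
import Data.Complex (Complex(..), realPart, imagPart)

-- 1.5 has type Fractional a => a; unifying with (0 :+ 2) :: Complex Double
-- means it "was a Complex Double the whole way through".
z :: Complex Double
z = 1.5 + (0 :+ 2)
```

        No runtime promotion happens here; the inference already made the literal the right type.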

        1. 2

          I’ve never been really sold on operator overloading, even for things where it is really convenient, like vector math in C++.

          Certain math operators, for example, aren’t even closed over certain datatypes, let alone forming a field (consider: a point subtracted from a point yields a vector, which is not a point). Having an overload that takes a Customer and adds a Line Item and a Shipping Item to produce an Order seems a bit silly.
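          That point/vector distinction is easy to keep honest with explicitly named functions instead of an overloaded operator (the types here are made up):

```haskell
data Point  = Point  Double Double deriving (Eq, Show)
data Vector = Vector Double Double deriving (Eq, Show)

-- Subtracting points yields a Vector, not a Point, so the
-- operation gets its own name rather than an overloaded (-).
pointDiff :: Point -> Point -> Vector
pointDiff (Point x1 y1) (Point x2 y2) = Vector (x1 - x2) (y1 - y2)

-- Points can be translated by vectors, but never added to each other.
translate :: Point -> Vector -> Point
translate (Point x y) (Vector dx dy) = Point (x + dx) (y + dy)
```

          The type signatures document exactly which dataspace maps to which, with no operator-resolution surprises.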

          We almost always want to explicitly enumerate our operators and functions, since they are all just mappings from one dataspace to another; seen in that light, I just don’t think worrying about the “+” implementation is worthwhile.

          That said, I do really like your writeup @HenriTuhola. :)

          1. 2

            Operator overloading needs to be paired with a notion of which laws we expect that operator to obey - e.g. + is expected to be associative, * is expected to distribute over +. There’s nothing wrong with being able to add, say, Customers, as long as everyone agrees on how we expect “adding” to behave.
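            For example (the `Customer` type and its merge semantics here are invented), the agreed behaviour can be written down as a law that a property tester then checks:

```haskell
-- Invented example: "adding" customers merges their order lists.
newtype Customer = Customer [String] deriving (Eq, Show)

instance Semigroup Customer where
  Customer a <> Customer b = Customer (a ++ b)

-- The law we expect of any (+)-like operator: associativity.
-- A framework like QuickCheck would generate the inputs.
assocLaw :: Customer -> Customer -> Customer -> Bool
assocLaw x y z = (x <> y) <> z == x <> (y <> z)
```

            If everyone agrees on the law, “adding” Customers is no stranger than adding matrices.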