1. 19

  2. 6

    Author here, I’m happy to receive feedback, comments and corrections (content, grammar, typos, …). Thanks!

    1. 2

      I think there is a typo in the “The Bad: Type Inference” section. As written, it is:

      val nums4: List[Double] = List(1, 2, 3) // compiles
      
      val nums5a = List(1, 2, 3)
      val nums5b: List[Double] = nums4         // fails to compile
      

      I believe it probably should have been

      val nums4: List[Double] = List(1, 2, 3) // compiles
      
      val nums5a = List(1, 2, 3)
      val nums5b: List[Double] = nums5a         // fails to compile
      

      or else I don’t know what nums5a is supposed to be there for.

      1. 3

        True, thanks! Fixed!

      2. 1

        Looks like a typo here: “In response, it was tried to put band-aid around it.”

        1. 2

          You are right, the grammar of that sentence sounds weird … do you have a suggestion on how to improve it?

          1. 2

            How about, “The response was a band-aid.” or “In response, a band-aid was applied.”

            1. 3

              Thanks, fixed!

      3. 2

        Allowing integral literals to represent floating values is perfectly fine. Haskell does this well with various number type classes (Num, Fractional, Floating, etc.). Converting between number types, which is different, is definitely bad.
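
        Roughly, in Haskell terms (a quick sketch of my own, not from the article):

        x :: Double
        x = 1 + 2.5                -- fine: the literal 1 simply has type Double here

        n :: Int
        n = 3

        y :: Double
        y = fromIntegral n + 2.5   -- a conversion has to be spelled out
        -- y = n + 2.5             -- type error: no implicit Int -> Double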

        1. 3

          I think it is an orthogonal concern. The core idea of the article is to get away from an approach that has proven to be a failure.

          Whether Haskell’s approach is really necessary or whether people can live without it – that’s an interesting question though. (Even Haskell’s creators are not totally convinced of “pure Num by default” and added a few ad-hoc rules on top of it.)

          Experience from more recent languages suggests that both the languages and their users do fine without it.

          Doing something like Haskell does would also mean introducing some kind of special type T[T: Num] for these literals, which is currently not denotable in the type system. This was brought up when the topic was discussed back in 2013, and not much has changed in that regard.

          I’d rather remove broken things first and then reassess in a few years whether this feature is really needed, instead of replacing one ad-hoc design with another ad-hoc design.

          At the moment I’m unconvinced of the concept. It might be a saner approach than implicit conversions on paper, but in the end it retains all the issues of conversions, just slightly restricted to literals.

          From there on, it’s a real rabbit hole:

          • Should conversions of integer literals to floating point values be allowed, even if the value cannot be represented precisely?
          • Should the conversion work up to the point where floating points start to show their gaps?
          • Should each literal be checked individually for whether it can be represented precisely, potentially allowing integer x but rejecting integer x+1?
          • What about extensibility? How can users define their own numeric types and hook into these checks? (A sketch of Haskell’s answer to this one follows below.)
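
          (For comparison, Haskell’s hook for the extensibility question is fromInteger in the Num typeclass. A minimal sketch, with a made-up Cents type:)

          newtype Cents = Cents Integer deriving (Eq, Show)

          instance Num Cents where
            fromInteger = Cents                  -- integer literals call this
            Cents a + Cents b = Cents (a + b)
            Cents a * Cents b = Cents (a * b)
            negate (Cents a)  = Cents (negate a)
            abs (Cents a)     = Cents (abs a)
            signum (Cents a)  = Cents (signum a)

          price :: Cents
          price = 199                            -- desugars to fromInteger 199
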
          1. 4

            Type defaulting rules are orthogonal to parametric literal types. You can completely get rid of type defaulting rules (which are ad hoc) and keep Haskell’s parametric literal system (which is not ad hoc).

            For anyone not familiar, the literal “1” in Haskell has type Num a => a. It can be an int, float, whatever, depending on context. The literal “1.0” has type Fractional a => a. It can’t be an int, but it can be a float, double, fixed-width, whatever. This is very nice. It doesn’t rely on implicit conversions. Every number has an explicit type. You can’t add “1.0” to an Int, or a double to a float.
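
            A small illustration (my own sketch):

            one :: Num a => a
            one = 1                  -- one literal, many possible types

            asInt :: Int
            asInt = one              -- fine

            asDouble :: Double
            asDouble = one           -- also fine; no conversion anywhere

            half :: Fractional a => a
            half = 0.5

            -- bad :: Int
            -- bad = half            -- rejected: Int has no Fractional instance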

            The type defaulting rules (which I’m not a huge fan of) are that if the type is ambiguous (because you didn’t put a type signature anywhere), the compiler will pick one. This, however, is not necessary to have the above literal typing mechanism. The main use case of the defaulting rules is that inside the REPL (ghci), you can type “1 + 2” and it won’t complain about the ambiguity of that statement. It will just assume they are integers.
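
            You can even switch defaulting off per module and keep the literal typing; a sketch:

            default ()                          -- disable the defaulting rules

            main :: IO ()
            main = print (1 + 2 :: Integer)     -- fine: the type is pinned down
            -- main = print (1 + 2)             -- now rejected as ambiguous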

            1. 3

              Typeclasses were invented in the late 1980s. Maybe type defaulting is not core to the idea of parametric literal types, but the fact that the creators couldn’t come up with something less ad-hoc strongly indicates that it is an inherent limitation of the design, not a mere implementation choice.

              Maybe it’s absolutely worth the price of admission in Haskell, I’ll refrain from a judgment on that.

              My general concern though is that Haskell advocates tend to advertise the virtue of idealized concepts, but fail to mention the fact that these concepts didn’t make the cut into an actual implementation unscathed.

              See typeclass coherence for the exact same issue. I think it makes people wary of Haskell’s ideas if only the convenient side is presented.

              Anyway, it’s off-topic and I’ll stop now.

              1. 3

                Perhaps I wasn’t clear.

                > Maybe type defaulting is not core to the idea of parametric literal types,

                Beyond this, they are completely unrelated. Their only similarity is that they both have something to do with typeclasses.

                > but the fact that the creators couldn’t come up with something less ad-hoc strongly indicates that it is an inherent limitation of the design

                A limitation of what design? Again, the ad-hoc thing you’re referring to has zero to do with the topic at hand, which is how you safely and soundly represent various number types. It applies just as much to strings and lists and stuff.

                > but fail to mention the fact that these concepts didn’t make the cut into an actual implementation unscathed.

                I’m really not sure what your thought process is here; parametric literals weren’t “scathed” by defaulting rules in any respect. I’m not really sure how to refute this, because it’s (as they say) not even wrong. I’m really struggling to see why you think these concepts are related; perhaps I’m just not getting your argument.

                I’d say this is very on-topic; it’s one of the first things in the article.

          2. 2

            Not always. 16777216 can be exactly represented as a float, but 16777217 cannot; you lose precision (a float only has 24 bits of significand). So integer constants between -2^24 and 2^24 can be safely turned into a float; outside that range only some values can, since the exactly representable integers grow sparser.

            I could see a compiler generating a warning (“Precision loss in integer to float conversion”) when an integer constant can’t be represented exactly (65345 is fine as a floating-point constant).
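
            The check itself is cheap to write; a Haskell sketch (exactAsFloat is a made-up name):

            -- n is exactly representable as a Float iff a round trip
            -- through Float loses nothing
            exactAsFloat :: Integer -> Bool
            exactAsFloat n = toRational (fromInteger n :: Float) == toRational n

            -- exactAsFloat 16777216 == True
            -- exactAsFloat 16777217 == False   (2^24 + 1 needs 25 bits)
            -- exactAsFloat 16777218 == True    (even, so it fits again)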