1. 10
  1.  

  2. 3

    More troubling than the examples shown are Scala’s implicit numeric widening conversions. (The famous 123456789.round == 123456792 issue.)

    Scala took something that Java designers deeply regretted, and made it even more dangerous thanks to type inference and extension methods.

    The confusion this feature causes in combination with type inference has probably produced more puzzlers than all the problems the author mentioned, and the inconsistencies will only grow when future versions of Java add larger number types via Project Valhalla/Panama.

    Thanks to this feature, all extension methods added to numbers in Scala, both in the standard library and user-defined ones, are broken, and this cannot be fixed.

    Oh, and you can’t opt out of it.
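    For reference, a minimal sketch of where that puzzler comes from (the behavior of 123456789.round itself varies by Scala version, but the silent, lossy Int-to-Float widening behind it is easy to reproduce):

    ```scala
    object WideningPuzzler extends App {
      val i: Int = 123456789
      // Implicit numeric widening: Int => Float compiles silently, even
      // though Float's 24-bit mantissa cannot represent this value exactly.
      val f: Float = i
      println(f.round)             // 123456792, not 123456789
      assert(f.round == 123456792L)
    }
    ```

    The original puzzler arises because `123456789.round` resolves by widening the Int to Float first, then calling Float's round, so the precision loss happens before the "no-op" rounding.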

    1. 3

      Run-time subtyping is a seriously complicated thing to have in a language. The complexity it adds is underappreciated.

      1. 3

        Especially if you have a global common supertype (e.g. Object, Any, void *). In that case, without a lot of care, type inference gone amok can decide everything is of that supertype and turn your statically-typechecked language into a unityped, dynamically-typechecked one (as happened, repeatedly, in the article).

        Type inference is a powerful, dangerous tool. It’s awesome to see it in action and it can save significant programmer effort, but I find most of the time I’d rather explicitly annotate anywhere the inference isn’t trivial to make sure the typechecker and I don’t have divergent views of what type something is.
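        A small illustration of that failure mode, using standard Scala behavior (the inferred supertype is Any in Scala 2, Matchable in Scala 3):

        ```scala
        object LubDemo extends App {
          // The least upper bound of Int and String is a global supertype,
          // so inference quietly produces List[Any] (or List[Matchable])
          // rather than rejecting the mix:
          val xs = List(1, "two")
          // The element types are statically gone; getting them back
          // requires a run-time check, as in a dynamically-typed language:
          val ints = xs.collect { case n: Int => n }
          println(ints.sum)        // 1
          assert(ints.sum == 1)
        }
        ```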

        1. 2

          It seems that one of the problems here is that the type inference isn’t powerful enough.

          In the first example (match + if), a more reasonable thing to infer than “Object” or “Any” would be a structural type for an object with the relevant unapply method signature.

          In the second example, a heterogeneous list is a perfectly reasonable thing to want. The problem seems less about subtyping and more about bad language syntax design.

          A core reason for the usefulness of type systems is that a little bit of mutual exclusion and a lot of information redundancy enable good error detection and recovery. I mean that in the information-theoretic sense. The overloading of the ' character is a problem because no redundant information (i.e. an explicit type hint) prevented the jump up the hierarchy. More importantly, forgetting or mismatching a ' is an obvious mistake that the designers should have anticipated, so they should have selected a mutually exclusive syntax for symbols!

          The same problem applies to the class/method body issue. The desire for syntactic concision led to omitting “unnecessary” delimiters, creating a situation where small edit distances produce syntactically valid code. Whoops.
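          A sketch of the ' overloading (Scala 2.x only; symbol literals were later deprecated and removed):

          ```scala
          object QuoteTypo extends App {
            val c = 'x'   // Char literal
            val s = 'x    // Symbol literal: one missing quote, still compiles
            // Mixing them doesn't error either; inference jumps up the
            // hierarchy to a common supertype instead of flagging the typo:
            val mixed = List('a', 'b, 'c')   // List[Any], not a compile error
            println(mixed.map(_.getClass.getSimpleName))
          }
          ```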

          1. 2

            It seems that one of the problems here is that the type inference isn’t powerful enough.

            The problem is that, as far as I’m aware, powerful enough type inference is neither understandable for humans nor tractable for computers.

            1. 2

              Union types are easy to understand and would help a lot here.

              (Although it seems as if future Scala won’t leverage them in some very crucial cases.)
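              A Scala 3 sketch of both halves of that point: an explicit union keeps the element types precise, but plain inference widens the union to its join (a non-union supertype), which seems to be the “crucial cases” caveat:

              ```scala
              // Scala 3
              object UnionDemo extends App {
                // Explicit union annotation: the mix stays precise and
                // can be handled exhaustively by type:
                val xs: List[Int | String] = List(1, "two")
                val lengths = xs.map {
                  case i: Int    => i.toString.length
                  case s: String => s.length
                }
                println(lengths)          // List(1, 3)
                assert(lengths == List(1, 3))
                // Left to inference alone, the union is widened away:
                val ys = List(1, "two")   // not inferred as List[Int | String]
              }
              ```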

              1. 2

                I’m aware of that perspective, but remain unconvinced.

                The tradition of type systems prefers constructive data, which can grow into unwieldy expressions for flexible/precise type systems. However, if you instead view a type as an id with an open set of known facts about the data, static information becomes a database. Querying the contradictions in that dataset becomes a UX problem beyond the traditional syntax-design problem. At the same time, this reframes the problem as one of general constraint satisfaction rather than one built on a basis of unification, and many new analyses become tractable for computers.

        2. 2

          Example 2 will not compile with the options -Xlint -Xfatal-warnings, which I recommend everyone use. We have a quite large codebase, and it hasn’t been arduous to keep these settings on. Failed exhaustiveness checks are sadly only a warning by default, which is exactly why -Xfatal-warnings is worth keeping on.
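          In sbt that looks roughly like this (a sketch; in recent Scala 2.13 releases -Werror is an equivalent spelling of -Xfatal-warnings):

          ```scala
          // build.sbt (sketch): promote lint warnings, including failed
          // exhaustiveness checks, to compile errors
          scalacOptions ++= Seq("-Xlint", "-Xfatal-warnings")
          ```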

          Both wartremover [https://github.com/wartremover/wartremover] and scapegoat [https://github.com/sksamuel/scapegoat] can prevent example 1, by preventing Serializable (or Product or AnyRef) from being inferred. I understand that “use a third-party linter” isn’t the answer a lot of people want, but it’s quite easy to set up and integrate into an SBT workflow.
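          For wartremover, the relevant configuration is roughly this (a sketch; wart names are from wartremover’s built-in set):

          ```scala
          // build.sbt (sketch; requires the sbt-wartremover plugin in
          // project/plugins.sbt): fail compilation whenever one of these
          // overly-general types is inferred
          wartremoverErrors ++= Seq(Wart.Any, Wart.Serializable, Wart.Product)
          ```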

          As far as the other problems, they are all syntax-related, and I grant Scala has some annoying ambiguities there.