1. 12

  2. 12

    This post is kind of all over the place. Statically typed languages are, IMO, much more diverse than dynamically typed languages along the axes this post discusses. For example, OCaml has a very powerful type system and very fast compilation relative to C++, which has a less powerful type system and much slower compilation. Go has very fast compilation, but when it comes to generic code it’s not much better than a dynamic type system, since one has to drop to interface{}.

    It also leaves out the work being done to bridge the two. Remember that a dynamic type system can be represented in a static type system. C# has had a dynamic type for years. GHC has gotten better support for type-safe dynamic types in the latest releases. One can express dynamic types with GADTs in OCaml, less prettily than in GHC. Languages like Idris and GHC also have typed holes, which let one incrementally develop code even if it doesn’t type-check yet. These advances are really where the “bending” is, IMO.
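    As a small illustration of representing dynamic values inside a static discipline (a hypothetical sketch in Python with invented names, chosen because typed Python is checkable with mypy): a “dynamic” value becomes a tagged union the checker can reason about, which is the same idea the GADT encodings scale up.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical sketch: a "dynamic" value modeled as a tagged union,
# the same idea GADT encodings use in OCaml, scaled down to Python.
@dataclass
class DInt:
    value: int

@dataclass
class DStr:
    value: str

Dyn = Union[DInt, DStr]  # the closed universe of "dynamic" values

def show(d: Dyn) -> str:
    # A static checker (e.g. mypy) can verify this branch is exhaustive.
    if isinstance(d, DInt):
        return f"int: {d.value}"
    return f"str: {d.value}"

print(show(DInt(42)))  # int: 42
```

    GHC’s Data.Dynamic and the OCaml GADT encoding are richer versions of this tagged-union idea.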

    1. 4

      This article is very Silicon Valley, by which I mean the confidence in its assertions outpaces the depth of knowledge.

      I’m surprised that nothing was said about runtime performance. That was, historically, a reason for using statically-typed languages and I think that it applies today. Dynamically typed languages, when used idiomatically, are slow. Yes, Clojure and Python can be fast, but typically this involves writing core libraries in other languages (Java or C) and writing code that isn’t idiomatic. That’s not a deal-breaker because (a) for many applications, reasonable performance is good enough and dynamic languages are fine, and (b) high-performance code (while not that hard to write) isn’t idiomatic in Haskell either.

      Next, most modern languages are strongly typed (static vs. dynamic is not the same thing as weak vs. strong). This includes Python and Clojure: they are dynamically typed languages with strong typing, enforced at runtime (hence “dynamic”). Weak typing is when you can cast a double to an int64 or a void * to an int * and it’s legal. C is weakly typed by design and has to be that way (malloc shouldn’t care what the block of memory starting at the returned void *’s value will be used for). So you can get correctness checking in dynamic languages. The difference is that it doesn’t happen automatically: you either need runtime assertions (at a cost to performance) or a comprehensive test harness. Given that most corporate code has behind it the bare minimum of effort necessary to get it to work, those tools are rarely used to even a fraction of their capacity. And the people who would be most insistent about using them tend to prefer static typing when possible, because we recognize that we’re imperfect.
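      A minimal Python sketch of both points, strong typing enforced at runtime and hand-rolled correctness checks (the area function is a made-up example):

```python
# Strong but dynamic: mismatched types raise at runtime
# instead of being silently coerced, C-style.
try:
    "1" + 1
    coerced = True
except TypeError:
    coerced = False  # Python refuses the implicit conversion

# Correctness checking is available, but only by hand:
# runtime assertions, paid for in performance.
def area(width, height):
    assert isinstance(width, (int, float)), "width must be numeric"
    assert isinstance(height, (int, float)), "height must be numeric"
    return width * height

print(coerced)     # False
print(area(3, 4))  # 12
```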

      “Dynamic vs. static” is about whether type errors are caught at compilation or during runtime. And this would be a settled question (it’s always better to find errors earlier) except for the fact that static typing requires a compromise (in my opinion, a slight compromise that is often negligible in comparison to the amount of debugging time that it saves) to the programmer in terms of how easy it is to write programs.

      The issue around static typing is that there are a variety of type systems, each with different strengths. OCaml has fast compiles but its type system isn’t as rich as Haskell’s and it doesn’t have (as of my knowledge, which may be a few years out of date) the same convenience that you get from type classes. You have to use functors to get that functionality. Haskell has a great type system overall but compiles can be slow and laziness isn’t universally beloved. Scala is a mess but it interoperates with the JVM. Languages like Idris allow dependent types and bring more power into the type system, but the best way to do this is still an area of active research (for example, it’s possible, with dependent types, to make type checking undecidable; however, this is also possible with Scala, so it’s not necessarily a deal-breaker).
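      For a rough feel of the type-class convenience in question, here is a Python analogue (chosen only because it is easy to run, and only an approximation): functools.singledispatch gives one generic operation with per-type instances registered separately, which is roughly what OCaml needs functors or first-class modules to express.

```python
from functools import singledispatch

# Sketch of type-class-style ad-hoc polymorphism: one generic
# operation, with per-type "instances" registered separately.
@singledispatch
def pretty(value) -> str:
    return str(value)          # fallback instance

@pretty.register
def _(value: int) -> str:
    return f"int({value})"

@pretty.register
def _(value: list) -> str:
    return "[" + ", ".join(pretty(v) for v in value) + "]"

print(pretty([1, 2]))  # [int(1), int(2)]
```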

      We’re now at a point where the productivity cost of static typing is generally quite low. There are usage patterns that require a bit of wizardry (try reinventing printf in idiomatic OCaml) but they’re uncommon and usually indicative of a design flaw. Best possible performance is better in statically-typed languages than dynamic ones, but still hasn’t beaten optimized C. What level of compile-time type discipline is “the best” is subjective and so is how to achieve that (e.g. Haskell’s approach vs. OCaml’s) and still a matter of debate and research.

      I do think that dynamic languages have a place in the world. For one thing, there’s a conceptual beauty to Lisps (computation over language at a user level, rather than wizardry done by the compiler) that is worthy of study. Secondly, highly-interactive work like exploratory data science often benefits from being done in a dynamic world (data-dependent manipulations on R-style data frames are a pain to statically type).

      1. 3

        Setting aside Python, Ruby, Go, C++, Java, etc., and focusing on Clojure (as the dynamic lang) and Haskell (as the static lang):


        Clojure:

        • Iteration Speed: YES
        • Correctness Checking: NO
        • Concise syntax: YES
        • Editing support: YES (spacemacs / cider)
        • Debugging support: YES


        Haskell:

        • Iteration Speed: YES (faster and better than Clojure)
        • Correctness Checking: YES
        • Concise syntax: YES
        • Editing support: YES (spacemacs / intero or ghc-mod)
        • Debugging support: YES

        Nice comparison points forgotten in the blog post:

        • Profiling: Clojure YES, Haskell YES
        • Fun : Clojure YES, Haskell YES
        • Fast startup time: Clojure NO, Haskell YES
        • Frontend Programming: Clojure YES, Haskell YES
        1. 4

          A few quick counters to this article, whose author hasn’t researched much about programming languages.

          “Correctness checking. This is where dynamically typed languages fall flat on their faces. ”

          The implication is no types whatsoever. Dynamically-typed languages can be strongly typed underneath, where they basically do typing on values rather than variables. That makes them similar to basic static types.
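          In Python terms, a short sketch of “typing on values rather than variables”:

```python
# The type belongs to the value; the variable is an untyped name
# that can be rebound, but never holds a silently coerced value.
x = 1
assert type(x) is int
x = "hello"      # rebinding, not a cast or conversion
assert type(x) is str
```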

          “To use opposite extremes here, the difference here is pretty clear.”

          The difference is clear that opposite “extremes” are extremely opposite. There’s certainly less to type if we remove type information and build in no checks for such failures. However, typed languages can be very concise, with type annotations being the only extra thing, and few of those are needed if types are inferred from values, as the author discovers in a later section. The author’s example is just clumsy syntax.

          “I needed to make some changes to WebKit. The compile time on my Macbook was ~15 minutes for every change.”

          That means C++, the compiler, and what WebKit uses are collectively slow. It says nothing about static types in general. The typed BASICs and Wirth’s Oberons could compile thousands, even tens of thousands, of lines a second on older hardware, straight to assembly. You use compilers like that for fast iteration. The author realizes this later, though, in the Go section. You do the ultra-optimized build later, or in the background on a per-component basis with caching.

          “but for the most part have been displeased by my debugging experiences in statically typed languages. ”

          Statically typed languages should have the same debugging experience as a dynamic counterpart targeting the same execution environment. There’s just more data for the debugger to work with; that’s an advantage here. The author is actually comparing a specific stack targeting native code against their experience using dynamic languages that target virtual machines, and without any 3rd-party tooling that makes the native situation easier to deal with, then extrapolating that bad situation to all debugging with static types.

          “to debug is complex compile errors”

          Compiler/interpreter errors can happen in any language, so now we’re talking about the quality of the compiler/interpreter, yet the author still extrapolates that to static typing in general. If compiler quality were the measure, I’d hit them with CompCert, drop the bat, and casually run through all the bases.

          “So for an era of programming, it felt like you were kind of stuck between two worlds, each of which had pretty crappy tradeoffs.”

          I never felt stuck. I just used different tools when the popular ones sucked. The neat thing is that many had a C FFI and were callable from C. That ranged from static languages that were fast-to-compile, fast-to-run, and easy-to-debug to dynamic languages with optional typing or the ability to embed a typed DSL (e.g. sklogic’s embedding of ML in LISP).

          “Linting is a form of static analysis ”

          The fields of static and dynamic analysis are way, way past Lint right now. The JavaScript and C++ the author uses have code analyzers these days.

          Closing Thoughts: author has identified some truths but is too inexperienced to write an accurate post on the topic. Author was also severely limited in thinking by equating C++ experience with static typing. Article was a waste of online bandwidth.

          1. 2

            I think interpreter-vs-compiler is also compatible with the «best of both worlds» approach. There is JIT, and then there are interpreters implemented by compiling statements one by one as they are entered in the REPL (that’s definitely what SBCL does for Common Lisp; I think Nemerle used the same approach; there are probably more languages which do this).
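            CPython offers a small-scale version of the same idea (to bytecode rather than SBCL’s native code): each REPL statement is compiled before it runs, which compile()/exec() make explicit:

```python
# Each REPL statement is compiled to a code object, then executed --
# the "interpreter" is really a compile-one-statement-then-run loop.
ns = {}
code = compile("x = 2 + 2", "<repl>", "exec")  # compile one statement
exec(code, ns)                                 # execute its bytecode
print(ns["x"])  # 4
```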

            1. 1

              Good overview, and the worst-of-both-worlds section at the end is good. I have seen both: slow static type systems that fail to catch trivial errors, and dynamic languages with a convoluted workflow in the name of “safety” that negates their main advantage.

              I’d be interested in hearing people’s experiences with optionally typed languages like TypeScript, Dart, Hack, Julia, Perl 6, Python 3, etc. People seem to like TypeScript, but I’ve heard the third party typings can be a pain.

              1. 6

                The Python type annotations are largely useless because they cannot cope with anything dynamic. For example, if you’re attempting to use annotations with a Pyramid project, the request object is fundamentally uncheckable because it will have things dynamically attached to it that are request-scoped: database sessions, user objects, and so on. This problem is so pervasive that you’ll spend a lot of time adding # type: ignore comments everywhere or simply using Any, which is the equivalent of casting to Object in Java. What’s the point?
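                A hypothetical sketch of the problem (invented Request/handler names, modeled on the Pyramid-style pattern described above):

```python
from typing import Any

# Hypothetical Pyramid-style request: middleware attaches
# request-scoped attributes at runtime, so the checker only ever
# sees Any -- the equivalent of casting to Object in Java.
class Request:
    db_session: Any   # attached per-request by middleware
    user: Any         # likewise invisible to static analysis

def handler(request: Request) -> str:
    # request.user is Any, so any misuse here passes the type
    # checker and can only fail at runtime.
    return "hello " + request.user

req = Request()
req.user = "alice"    # dynamically attached, as middleware would
print(handler(req))   # hello alice
```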

                Furthermore, in what I can only describe as a baffling decision, user-defined types have run-time cost:

                from typing import TypeVar, Union

                T = TypeVar('T')   # a real object, allocated at import time
                U = TypeVar('U')   # another one
                TU = Union[T, U]   # Union[...] is evaluated eagerly, too

                These are real, allocated objects. With no runtime benefit, naturally, just cost.

                1. 1

                  Yup, this was my experience with Python type annotations too… It seems like it’s for big companies who have a huge amount of Python code that looks like Java? But even the most basic Python code uses things you can’t express in Java. Especially libraries and frameworks – just like libraries and frameworks are the main users of C++ metaprogramming, they’re the main users of Python runtime reflection/metaprogramming.

                  It is a little odd. It is interesting that the language gets neutered pretty fast.

                2. 4

                  In SBCL you ignore typing until either you really want to improve the quality of the generated code somewhere (so you add type declarations), or the compiler becomes confident that your code will cause a runtime type error and shows you the place it doesn’t like (it does compile-time type inference to generate good code even without type annotations). That’s nice.

                  Julia is quite good at optional typing, but it is a bit overoptimistic with typing arrays sometimes, which can lead to funny outcomes in some corner cases when you use map with a non-type-stable function.

                  1. 4

                    I worked a bit with the Checker Framework in Java, and also with TypeScript, and found them ultimately not very useful. Optional types are necessarily unchecked at runtime, so type errors propagate a long way before you get an actual error (similar to null), and the types can be misleading when debugging because the error you see is “impossible” (a problem that occasionally happens with JVM generics too). Type checking is fundamentally garbage-in, garbage-out, and when the types of the libraries you’re using are unchecked (and often third-party) and the language doesn’t have a culture of type safety, it’s far too easy for garbage to sneak in.
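                    A tiny Python illustration of that garbage-in problem (a made-up function): the optional types are erased at runtime, so a mistyped call from unchecked code sails straight through and the bad value propagates:

```python
def repeat(item: str, times: int) -> str:
    return item * times

# The annotations are not enforced at runtime, so swapped
# arguments from untyped code "work" silently instead of
# failing at the boundary where the mistake was made.
result = repeat(3, "abc")   # args reversed; no check fires
print(result)               # abcabcabc
```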