What compromises does the language I’m using now make?
This is the big one. Or perhaps I’d rephrase it as, what mistakes do I find myself repeatedly making? It’s probably not worth switching languages or processes to fix a few one-off errors.
I think when you rephrase it to center on mistakes, you lose sight of the cost of the process and ecosystem. For example, scripting languages have the benefit of no long compile step before you start testing your code, and I don’t have to recompile the same piece of software for multiple architectures with varying dependencies. Choosing a compiled language may add to your testing and development time if compilation takes a long while, as I’ve sometimes experienced on Android. Dependency management in C is pretty awful compared to languages like Java and Python. These things aren’t necessarily tied to the language specification or theory, but they greatly affect my productivity in some situations. That leads to the question of how much I’m affected by these pros and cons, which I can’t really quantify.
Oh, yes. “Mistakes” is too limiting a word. Perhaps “hurdles” is better. My intention was to refocus the question from abstract compromises to pain points felt on a regular basis. Compilation time is a good example of that. It bothers some people; others find ways to cope. It may take six hours to build everything, but that’s not a pain point unless the process regularly requires such builds.
It sounds like you’re describing Dart. (https://www.dartlang.org/)
I’m not sure if it’s in vogue (dipped out of front-end development a while ago), but I remember it being announced a few years ago.
Why so many upvotes for such a terrible article?
Elm’s type system acts as a coach to help the programmer complete her work, but doesn’t really affect performance.
Java’s type system is a verbose impediment to code readers and writers, but improves performance by, for example, supporting many different primitive numeric types.
An implementation detail, not a defining characteristic of the type system.
If both languages are called “statically typed” and yet the two languages’ type systems do such different things
They do the same thing - offer compile/transpile time guarantees, which is exactly what static typing does. The author is way off the mark here.
So in this case the distinction between statically and dynamically typed has more to do with the tooling I might be using than the language itself. That’s interesting!
It’s called “static analysis” and it’s inferior to static typing because the inferred types might not be what the programmer intended, and not all types can be automatically specialized, so you get false positives and false negatives that you wouldn’t get with static typing.
Whatever “Static” and “Dynamic” Are, Production Languages Often Have Both
Not often at all. It’s extremely rare to work with gradual typing in production.
One can consider a “dynamic language” as a language which has fewer statically checked types (namely, one) than a “static language.”
Also, water is wet at normal temperature and pressure. Yes, dynamic typing is actually using only one type. No, that does not make it the equivalent of static typing for any practical purpose.
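To make that “one type” point concrete, here’s a sketch in Haskell: a dynamically typed language’s values can be modeled as a single tagged union, and the “dynamic” part reappears as runtime tag checks. (The `Value` type and `dynAdd` function are invented for this comment, not taken from the article.)

```haskell
-- A "dynamic language" value: statically one type, many runtime tags.
data Value = VInt Int | VStr String
  deriving (Show, Eq)

-- Addition must inspect the tags at runtime and can fail at runtime --
-- exactly the check a static Int -> Int -> Int never has to perform.
dynAdd :: Value -> Value -> Either String Value
dynAdd (VInt a) (VInt b) = Right (VInt (a + b))
dynAdd a b = Left ("runtime type error: " ++ show a ++ " + " ++ show b)

main :: IO ()
main = do
  print (dynAdd (VInt 1) (VInt 2))      -- Right (VInt 3)
  print (dynAdd (VInt 1) (VStr "two"))  -- Left "runtime type error: ..."
```

So yes, the two views are equivalent in theory, but in practice the difference is where the checks live: in the compiler, or in every `dynAdd` call.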
The “benefits and drawbacks” analysis is hilariously wrong. I don’t know if “static vs. dynamic” is the wrong question, but I’m sure this article is the wrong answer. One should at least try to understand the problem domain before offering critique or trying to simplify things for the “me too” audience (or as the author calls them, “excited newbies”).
Only in the same sense that performance is not a characteristic of a language. Technically true, but in practice, the implementation matters almost as much as the language.
What the author is pointing out is that they use these guarantees for different purposes; correctness isn’t the only feature of type systems.
It’s not inferior, just different.
True, but gradual typing is not the only way to mix static and dynamic. For example, Haskell’s Data.Dynamic.
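For anyone who hasn’t seen it: `Data.Dynamic` (in base) lets a statically typed Haskell program opt into dynamic typing locally. `toDyn` erases the static type, and `fromDynamic` recovers it via a runtime tag check. A minimal sketch:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list: statically every element is just a Dynamic,
-- with the original type carried along as a runtime tag.
mixed :: [Dynamic]
mixed = [toDyn (42 :: Int), toDyn "hello"]

main :: IO ()
main = do
  print (fromDynamic (head mixed) :: Maybe Int)     -- Just 42
  print (fromDynamic (head mixed) :: Maybe String)  -- Nothing: tag mismatch
```

The failure mode is a `Nothing` rather than a crash, so the dynamic check is still visible in the types.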
The “benefits and drawbacks” analysis is hilariously wrong.
How so? It seemed pretty on point to me.
About the only thing in the article I strongly disagree with is
Given some programming language, you can think of “static typing” as a feature (or, more properly, a family of features) the language designer could add to an otherwise untyped or dynamic language
and that’s because my research involves designing a programming language from scratch to try to get expressiveness typical of dynamic languages without sacrificing the type system’s ability to reason about the code.
One purported benefit of type systems (for use in actual languages, not in an abstract, theoretic sense) has been performance.
No, not at all. Haskell struggles to get close to C/C++ performance most of the time and it has a much stronger type system. Maybe you’re confusing this with the advantage of AOT compiled (to assembly) languages vs. interpreted/JIT-ed languages?
my research involves designing a programming language from scratch to try to get expressiveness typical of dynamic languages without sacrificing the type system’s ability to reason about the code
That kind of generalized type inference overlaps with static analysis, doesn’t it?
The “benefits and drawbacks” analysis is hilariously wrong.
How so? It seemed pretty on point to me.
You really think that “Automatic and machine verified documentation” is a characteristic of the type system? The kind of info you get from GHCi’s “:type” is by no means “documentation”.
“Improved runtime performance” is not a guarantee. Static typing is not the magic bullet that gives you performance. Take a look at how contract-checking overhead can bring Typed Racket to a crawl: https://github.com/michaelballantyne/typed-racket-performance
“Better tooling support” - that’s just moving the benefits of the language to the tools through which you access them. It’s not the compiler’s merit that some errors are now caught by the type system, is it?
“Rejection of otherwise correct programs” - that’s a bug, not a drawback of type systems. If you think your program is correct, but it fails a type check, then either your program is faulty or the type checking is.
“Slower programmer iteration (possibly lengthy compile/run cycles)” - what does compiling/interpreting have to do with type systems? The slowing down comes from having to actually design the software in the type creation phase, and it pays off in the long term by reducing the maintenance/blind-debugging work.
“A need for the developer to learn “static typing” language feature (though she still must understand types to some degree regardless)” - Really? The drawback of something is that you might not already know it? But you should learn it anyway, so is it still a drawback? At which point do you admit that it’s simply a silly article that offers no insight and confuses instead of clarifying?
Haskell struggles to get close to C/C++ performance most of the time and it has a much stronger type system.
True, though its performance is hampered by its disconnect from bare metal (i.e. being functional and lazy). The claim as I have heard it is not that more powerful type systems have better performance but that having any types obviates some dynamic checks. You could compile a dynamically typed language AoT, but you’d still have to pay for the dynamic checks.
No, but type signatures do provide machine-checked documentation, especially in more powerful systems (e.g. dependent types, refinement types)
Not a guarantee, no, but eliminating dynamic checks improves performance. Typed Racket contracts tend to introduce dynamic checks.
Type-driven auto-complete is a tooling improvement which is only possible in a typed language; improved tooling support relies on knowing types at various points in the program whether or not it helps you catch errors.
if True then 1 else () () and if True then "" else 'a' are two programs which are correct (in the sense that they do not go wrong at runtime), but would be rejected by most type systems.
Even when working in GHCI, type checking can take a significant amount of time.
“A need for the developer to learn “static typing” language feature (though she still must understand types to some degree regardless)” - Really? The drawback of something is that you might not already know it? But you should learn it anyway, so is it still a drawback?
Understanding Haskell’s type system to the extent you need to be able to debug type errors is much harder than gaining an informal understanding of types sufficient for good design.
At which point do you admit that it’s simply a silly article that offers no insight and confuses instead of clarifying?
I don’t know, but it is not now.
I disagree with the relativization of static typing’s value. In my opinion, explicitly encoding our assumptions about the code is a good thing. And unlike dynamic typing, static typing can validate the whole program before running it and thus prove our assumptions correct.
Multiple researchers are currently working on static contract verification, which might end this debate once and for all. But until it has been deployed, I would suggest giving statically typed languages a chance.
And it is not (only) about speed.
[Comment removed by author]
If you’re into reading research papers, this is probably a good starting point.
tl;dr: contracts are a tool that can be used to enforce pre- and post-conditions on data or functions at runtime; (this particular approach to) static contract verification uses abstract interpretation (a form of static analysis) and an SMT solver to try to verify or refute the contract at compile time.
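As a rough sketch of the runtime half (the `contract` helper below is invented for illustration, not taken from the paper): a contract is just a predicate that either lets a value pass through or “blames” a party with an error.

```haskell
-- A contract attaches a runtime check to a value; on violation it
-- blames with a message. Static contract verification tries to
-- discharge these checks at compile time so they can be erased.
contract :: Show a => (a -> Bool) -> String -> a -> a
contract p blame x
  | p x       = x
  | otherwise = error ("contract violation (" ++ blame ++ "): " ++ show x)

-- Precondition on the argument, postcondition on the result.
safeSqrt :: Double -> Double
safeSqrt = contract (>= 0) "result" . sqrt . contract (>= 0) "argument"

main :: IO ()
main = print (safeSqrt 9.0)  -- 3.0
```

The interesting part of the paper is proving, statically, that checks like these can never fire, which is where the abstract interpretation and SMT solving come in.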
Took a quick look at the abstract, but it seems like dependently typed languages would provide a more “integrated” way of doing this? I’m thinking mostly of Idris here, which would allow the compiler to know much more and not relegate as much to runtime.
Well, yes. But then you need to buy into dependent types wholesale. This is closer to refinement types and gradual types.
Ah sure, so basically like liquidhaskell then?
Yes. Liquid Haskell is an example of refinement types. One major difference is that static contracts degrade to dynamic checks while LH fails to compile.
Cool thanks! I need to look into liquidhaskell more and I’m guessing this paper would help out for the general idea a bit more as well for foundations.
I’m not aware of multiple researchers (unless you mean one research group, which happens to have multiple members); do you have references?
I first encountered it in Racket SCV but a cursory search returned at least one paper from Microsoft Research with different authors, so I assumed the “multiple”. Specifically PLT and SPJ’s students.
Aside: How many research clusters are there anyway? Haskell, Racket, OCaml, .NET… anyone else?
Lots. I definitely don’t know anywhere near all of them, and most of them don’t fall into any of the above (either working on a more esoteric language, e.g. Coq, Agda, Idris, Kitten, or not focusing on any particular language). Also, within the language groupings, there are typically many clusters.
Author here. This post was in a draft form when posted here, and I’m still tweaking things as I’m not entirely satisfied with it. I appreciate all of the feedback here! It’s helpful to me to know where I’ve expressed myself clearly and where I still need to rewrite. Thanks!