1. 25

  2. 17

    Types let you represent how much information you have. If you want to deal with “arbitrary JSON which might be missing information at various points”, there’s a type for that, called Json, and it supports automatic merges. If you want to inspect that type and learn small bits of information about it, you have functions for that as well, for instance:

    isTopLevelObject :: Json -> Maybe (Map Text Json)
    isTopLevelArray :: Json -> Maybe [Json]
    isString :: Json -> Maybe String
    

    This “attempt to move up the information hierarchy to something more strict but possibly fail” is a very general pattern that usually goes by the name of “parsing”. In a program that uses types well you are constantly “parsing” your way to more information and typically should do so incrementally. If you “parse” all the way to a “concretion” then you’re asserting that your program is strict/brittle. That might be right… but if it’s not then you’ve just made a design mistake.
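
    For a concrete feel, here is a minimal sketch of that incremental style in TypeScript syntax (the Json encoding and accessor names are hypothetical analogues of the signatures above, with undefined playing the role of Maybe):

    type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

    const isTopLevelObject = (j: Json): { [key: string]: Json } | undefined =>
      typeof j === 'object' && j !== null && !Array.isArray(j) ? j : undefined;

    const isString = (j: Json): string | undefined =>
      typeof j === 'string' ? j : undefined;

    // "Parse" toward more information step by step, failing softly at each step.
    const nameOf = (j: Json): string | undefined => {
      const obj = isTopLevelObject(j);
      return obj === undefined ? undefined : isString(obj['name'] ?? null);
    };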

    ADTs are too heavily used. Row types are clearly a win for many sorts of systems, letting types flow more freely when the time is right.

    That said, typed languages contain within them pretty much all gradations of untyped languages as well if you want to use those. This is the pragmatic heart of Harper’s much maligned “dynamic types are monotypes” argument.

    If you find (static) type information is never useful then you shouldn’t use a statically typed language. If you find it’s sometimes useful then you should use one and build types which represent your uncertainty. All of the “dynamic language tricks” will be available to you to use atop those “uncertain” types.

    1. 3

      tel: First, let me say that I mostly agree with your comments on this page, even though I’m a fan of dynamic languages over static ones.

      This notion of the transition from looser to stricter types as “parsing” deeply matches my intuition. However, my experience has been that most type systems are quite bad for this. TypeScript has been the first type system that I’ve used that isn’t totally unbearable for this task, because of support for associative data, optional keys, union and intersection types, subtyping, etc. This stuff is important when the results of parsing contain substantial partiality and inherent domain complexity.

      I’d also like to note that defaulting to any subset of static, nominal, closed, positional, etc. creates a path of least resistance that produces brittleness. You can say “you’ve just made a design mistake”, but even knowing better, I make that design mistake every time, and it’s physically arduous to change my program when I need to. Furthermore, the power of a type declaration to constrain my program’s behavior is great enough that it often leaves only little holes. It becomes a puzzle to start filling in those holes. That temptation is so strong that even I can’t resist it sometimes, losing hours of work, only to give up and add a cast with a dynamic guard. At least with increasing experience, I can predict that outcome earlier and earlier each time.

      1. 2

        I would like to second that TypeScript has one of the best type systems for the sort of messiness that arises in the outer layers of programs.

        It has extremely powerful tools that let you merge types and easily construct union types without having to add a boilerplate ADT layer, and it has a very smart resolver for letting you then drill down to the right types without a bunch of superfluous casts. And it has a la carte exhaustiveness checks! Sometimes you have to trick the compiler, since it’s structurally typed for the most part, but you can find safe ways of doing it.
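
        For instance, the exhaustiveness checks are “a la carte” in the sense that you opt in with a never-typed default branch. A minimal sketch (the Shape type is invented for illustration):

        type Shape =
          | { kind: 'circle'; radius: number }
          | { kind: 'square'; side: number };

        const area = (s: Shape): number => {
          switch (s.kind) {
            case 'circle': return Math.PI * s.radius ** 2;
            case 'square': return s.side ** 2;
            default: {
              // Compile error here if a Shape variant ever goes unhandled.
              const missing: never = s;
              return missing;
            }
          }
        };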

        We’re still not to the dream of a type system that lets you easily jump between value and type and handle partial information, but TypeScript is getting us there.

        1. 1

          dream of a type system that lets you easily jump between value and type and handle partial information

          What is this dream?

        2. 2

          I think I’d love to understand your experience with TypeScript more. I used it a touch a while back and bounced off (not using types to attack nulls seemed a bridge too far to my intuition, but that may have changed or improved since I last looked).

          Generally, I find that a set of lenses atop a general Json type is an extremely nice place for dealing with things like missing data in a sensible and convenient way. That said, I really like what row types and open sums/products can do, and these open the door to very nice structural subtyping, similar to what I think you’re discussing w.r.t. TypeScript.

          So, yeah, can I encourage you to write about this topic through the lens of TypeScript? I’d love to better see your perspective here :)

          1. 1

            not using types to attack nulls seemed a bridge too far to my intuition

            You can union null (or undefined) into any type like this: T | null. However, it’s often preferable to model optionality at the level of the aggregate: {foo?: T}, which essentially means {} | {foo: T}. “Occurrence typing” is how you dynamically cast something like a T | null | undefined to a T: a ? a.b : 'whoops'. This also works with statements, such as an if-block. You can also use Partial<T> to explicitly decorate every key in T with ?, then massage that Partial with a union, intersection, etc.
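
            A small sketch of all three together (the User type is invented):

            interface User { name: string; nickname?: string }  // nickname?: T means roughly {} | {nickname: T}

            const greet = (u: User | null): string => {
              if (u === null) return 'whoops';   // occurrence typing: u is User below this line
              return u.nickname !== undefined
                ? u.nickname                     // narrowed from string | undefined to string
                : u.name;
            };

            // Partial<User> decorates every key with ?: { name?: string; nickname?: string }
            const draft: Partial<User> = { nickname: 'vj' };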

            a set of lenses atop a general Json type

            Lenses are a whole ‘nother topic. I think they’re a great idea, but current realizations have some usability problems that I won’t get into here. Nor will I get into how I think lenses could be made more accessible.

            can I encourage you to write about this topic through the lens of TypeScript?

            Do you mean use of open records and subtyping specifically?

            1. 1

              Yeah, I believe that explicit sort of null handling is newer? Maybe I’m mistaken, though.

              I’d like to understand more completely how and why TypeScript handles input/boundary data well. I can intuit it from the feature set, and believe it from the fact that it’s deployed in JavaScript and frontends are full of these problems, but I’d like to understand it better from a practical POV, since I think a lot of this comes down to UX.

              Maybe it’s: “how does the constellation of type system decisions made in TypeScript combine to make it really good at handling messy input data?”

              1. 1

                I think the answer has to do with two things: 1) JSON is actually quite a good data representation, and 2) the TypeScript team refuses to change the semantics of JavaScript.

                Now, JSON has a lot of problems: it lacks extensibility (see tagged literals in Clojure/EDN); it isn’t a great language for config files and such (no comments); it has no head/body syntactic-term type (like lists in EDN, or normal ADT constructors); and lots more. But overall, the presence of open associative structures makes it quite good for representing things and surviving growth: just add some more keys! This is critically important for future-proofing systems. See Rich Hickey’s “Speculation” talk for more about “Growth”: https://www.youtube.com/watch?v=oyLBGkS5ICk

                Second, by insisting on not changing the semantics of JavaScript, the TypeScript team was forced to build type system features that support open and extensible data. For example: with associative data, there is a neat trick where unknown keys are disallowed in literals but perfectly allowable in dynamic values. This means that you can survive extra data. Coupled with reflective capabilities such as Object.keys, you can write code that is safely future-proof, not only to new data from clients, but also to new requirements internally.
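
                A minimal sketch of that trick (Point is an invented example type):

                interface Point { x: number; y: number }

                // Unknown keys are rejected in fresh object literals:
                // const bad: Point = { x: 1, y: 2, z: 3 };  // compile error: 'z' is not in Point

                // ...but allowed in dynamic values, so extra data survives:
                const wire = { x: 1, y: 2, z: 3 };
                const p: Point = wire;

                // And reflection still sees everything at runtime:
                console.log(Object.keys(wire));  // ["x", "y", "z"]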

                Another thing is the inclusion of type assertions, which let you hint to the type system without being forced to reconstruct a type completely. This means that if you get JSON on the wire, you can write a predicate that asserts new static information about it:

                let isStrings = (x: any): x is string[] => {
                  return Array.isArray(x) && x.every(y => typeof y === 'string');
                };
                

                Now you can do something like:

                if (!isStrings(x)) {
                  throw Error('expected strings!');
                }
                x.forEach(s => console.log(s)); // this is statically-safe now, thanks to occurrence typing.
                

                To accomplish the same thing in a stricter language, I’d have to make a copy of the underlying array. If I have a deeply nested associative structure, I’d have to write a lot of code to do O(N) work to reconstruct everything. Note that I can also do potentially unsound things like this:

                let isStrings = (x: any): x is string[] => {
                  return Array.isArray(x) && (x.length === 0 || typeof x[0] === 'string');
                };
                

                This does O(1) work and defers errors to later if somebody passes ['foo', 5]. When you trust the client, this is super useful for doing the equivalent of pattern matching. And in fact, you can do switch-on-strings, since literal string types are supported:

                type Foo = A | B;
                type Bar = B | C;

                interface A { tag: 'A'; blah: boolean; }
                interface B { tag: 'B'; }
                interface C { tag: 'C'; }

                let f = (x: Foo) => {
                  switch (x.tag) {
                    case 'A':
                      // here it is safe to use x.blah
                      // I can also get exhaustiveness checks here.
                      // note that valid cases are 'A' and 'B', but not 'C', even though I reuse the B type in two unions.
                      break;
                  }
                };
                

                The syntax isn’t quite as nice as pattern matching, but combined with destructuring, it’s not so bad:

                if (x.tag === 'A') {
                  let {blah} = x;
                  ...
                }
                

                Ideally, there would be more tools to do more of this in a sound way, rather than relying on you to write correct type assertion functions, but again: they can’t change the semantics of JavaScript. That would require a fair amount of runtime library support for pattern matching and the like. Hard-coding some cases like literal string types gets you pretty far.

                All this stuff (and plenty more) enables me to write data representations that make sense to a human: as simple JSON-encoded data, with human-friendly grammars as types, reusing data definitions in arbitrary positions, future-proofed to new data including additional keys, etc.

                Furthermore, if you believe in the types-as-parsing intuition, then you’d want your type system to support the same kinds of things that CFGs support: alternation, concatenation, etc. TypeScript isn’t quite there yet with cat, but at least union types let you emulate alt. Cat will probably eventually come in the form of session types in some languages.

      2. 13

        Except that, even though you knew the type, you still knew nothing about the structure of that JSON

        That’s nonsense: use Generics or Template Haskell and make a config that fits the pattern you want, and you get automatic JSON parsing from your types, in addition to getting to know the structure of the JSON based on the structure of the data type. This isn’t a property every Haskeller actually wants, but the option is there if you do.

        What we wound up doing was nesting pattern matching to get at the value we wanted. The deeper the JSON nesting, the deeper our pattern matching. We wrote functions to wrap this up and make it easier. But in general, it was a slog. The tools Haskell gives you are great at some things, but dealing with arbitrarily nested ADTs is not one of them.

        This is utterly unnecessary for both the JSON handling itself: https://github.com/bitemyapp/bloodhound/blob/master/src/Database/V5/Bloodhound/Types.hs#L1951-L1965

        And for working with nested types in general: you can use lenses, but even with a very nested API like Elasticsearch’s, just being able to compose accessors is generally enough.

        I’m familiar with this guy’s schtick; he did some Haskell back in like 2010 with a Happstack-based application, and nothing was particularly nice. If I was stuck with Happstack, I’m not sure I would want to use Haskell either.

        Rich had some good points about positional semantics. When defining an ADT, positional semantics make it hard to read the code. If you’ve got seven arguments to your ADT’s constructor, you’ve probably got them in the wrong order and you’ve forgotten one.

        First, I avoid passing a bunch of the same type in consecutive order to a data constructor.

        I avoid Text -> Text -> Int -> Int -> MyType and replace it with HostName -> Alias -> PortNumber -> WorkerCount -> MyType or some-such. Further, if there are more than a few arguments to a data constructor, I’ll use record syntax to construct the value:

        MyType
          { hostname = myHostname
          , alias = myAlias
          , portnumber = myPortnumber
          , workercount = myWorkercount
          }
        

        Then ordering doesn’t matter. I usually set record fields strict, especially if I plan to use the record syntax, so that I can’t forget anything.

        The fact is, in real systems, these ADTs do accrete new arguments.

        I can’t say I recall the last time Value got a new constructor or one of its data constructors got a new argument, but I’d sure be grateful that my type system would be there to tell me everywhere in my project I need to fix the code if it did.

        I don’t have the patience for the rest of this.

        — Went from Clojure -> Haskell

        1. 8

          Don’t waste your breath, dude; there is a much simpler rebuttal: static and dynamic types are a false dichotomy, so any attempt to discount one in the context of the other is an invalid argument.

          Instead, weigh the specific features of the type systems against each other. In the case of Haskell vs Clojure, GHC gives you much more feedback than the entire Clojure ecosystem of spec/schema/etc., so if the criterion for choosing a language is type safety, then Haskell wins.

          1. 12

            static and dynamic types are a false dichotomy

            Absolutely. People have written some useful stuff about this:

            https://existentialtype.wordpress.com/2011/03/19/dynamic-languages-are-static-languages/

            1. 2

              I actually don’t totally agree with that article. I’m channeling Foucault here: our understanding of type systems is dependent on the context of type theory. How do we know our type theory is the most true? Sure, it allows us to prove lots of things, but that is a pragmatic measure. If type theory has pragmatic value, then our type systems should be measured pragmatically as well. Clojure actually wins in many respects, because you get lots of monadic interfaces (the collection abstraction) that allow for reusable code without the difficulty of the IO monad, etc. This is not true in many other dynamic languages, e.g. Python.

              So the “false dichotomy” is really apparent when you compare the monadic-ness of Python, C, Haskell, and Clojure. Clearly Clojure and Haskell fall into the “monadic” category and Python and C into the “non-monadic”. This “monadic-ness” distinction tells us more about how composable functions and data structures are than the “static” or “dynamic” distinction, which means that “static” and “dynamic” categories aren’t the most useful categories to put languages into.

              1. 3

                How do we know our type theory is the most true?

                Is this really the question at stake? If a language tells me a variable is a certain type, I tend to believe it. Truth doesn’t seem to be an appropriate axis for evaluating type systems.

                There’s definitely lots of variety regarding what a type system can tell you. The static/dynamic distinction is about when you can know the type information. Static languages allow you to know the types at compile time, whereas dynamic languages force you to wait until runtime.

                1. 1

                  Is this really the question at stake? If a language tells me a variable is a certain type, I tend to believe it. Truth doesn’t seem to be an appropriate axis for evaluating type systems.

                  Within the context of your language, a variable has a type; this is the truthiness of that type. Within the context of type theory, the type system of that language has a certain truthiness to it. Within the context of the real world, type theory has a truthiness to it. The implied question in most static vs dynamic debates is, how true is my type system? Which begs the question, how true is type theory?

                  The answer is that it doesn’t matter. What matters: how useful is type theory generally? How useful is the specific type system I’m using? The static vs dynamic distinction doesn’t do much to help us answer this usefulness question. Understanding when types become relevant in a language definitely does help you, so I agree with you on that, but there are other more interesting aspects of type systems too.

                  1. 3

                    Maybe it’s just me but I really can’t follow.

                    Within the context of your language, a variable has a type;

                    Sure, no qualms yet.

                    this is the truthiness of that type.

                    Completely lost. What is the truthiness? I don’t know what your pronoun “this” is referring to.

                    I think I will continue to be lost until you can explain what it means for a type system to be false. “True” does not signify anything to me here.

              2. 1

                That is the most absurd thing I’ve read. Static typing is necessarily more restrictive, because you have to express yourself in a way that can be verified by the type checker at compile time. Thanks to Gödel, we know that the set of provable statements is a subset of all valid statements. Since dynamic languages do not attempt to prove correctness, they allow you to make many interesting statements that are impossible to make in a static language. Here’s a trivial example:

                (eval '(defn add-one [x] (inc x)))

                (add-one 10)

                You should also read this in-depth rebuttal to the link you posted: http://tratt.net/laurie/blog/entries/another_non_argument_in_type_systems.html

                1. 14

                  That whole rebuttal rests on a conflation of dynamic types and static types. This is the standard stance for folks arguing from the dynamic camp (specifically, they’re two of the same sort of thing) and the opposite is so basic to the mental space of the static camp that it almost always goes unspoken.

                  In summary, if Bob’s article had explicitly stated that he was solely considering static expressivity… I would have agreed with it.

                  So, yes, Bob did not state that explicitly but it is taken without need for explicitness within static typing literature.


                  My take is that there are two “kinds of power” any formal language may be judged upon and that they are in tension. They are: the power to create and the power to analyze (deconstruct).

                  The more power to create a formal system affords, the greater the richness of expression and the more complex the “values” (whatever that might mean) that can be constructed.

                  The more power to analyze a formal system affords, the greater the richness of semantics its values have and the more information that can be extracted from each one.

                  Types are an alternative semantics to runtime evaluation. Thus, asking for types is asking for greater power to analyze, and it comes at a cost in power to construct.

                  From here, an argument must move on to tradeoffs between power to analyze and power to construct. Why wouldn’t you want to just side entirely with power to construct?

                  Most arguments of the static typing apologist end up being arguments for the value of power to analyze. Essentially, having more parallel semantic interpretations makes the system you’re working with more stable (fewer bugs, easier refactoring, partially verified documentation) and also opens up doors for other systems to exploit those other interpretations (typeclass resolution, servant, dependent types at large).

                  So which is better?

                  This is just a value judgement now. Nobody can answer for anyone else.

                  But I think that power to construct/power to analyze tradeoffs happen constantly so it’s a good concept to have an eye on.

                  1. 1

                    I pretty much agree with what you’re saying, and that’s ultimately the tradeoff between static and dynamic typing. Depending on your situation one or the other might be preferable.

                    1. 4

                      I think an extension of this is that the technology already exists to make those tradeoffs locally, within a single program. Haskell excels at this:

                      • the ability to embed DSLs and effect contexts lets you pick and choose this power tradeoff
                      • the existence of the IO monad (as much as it is sometimes maligned) means that there’s an “escape hatch” to compare all of your more analyzable structures against
                      • the existence of the Dynamic type as well as other “semi-dynamic” types like Json gives you opportunity for greater power at the cost of analyzability

                      I think what’s most interesting is that the third point above is rarely explored. The real place where this exploration seems to happen is in row typing/structural typing a la Purescript or OCaml, but we could be building more and more sophisticated ways of working with the Json type without constraining it.

                      I think the lack of this development is why Haskell is really bad for exploratory data analysis, but it’s mostly the lack of exploration/development rather than a total failure, I believe.

                      On the other hand, gradual typing systems are also an attack on the middle ground, but I’d place a smaller bet on them. Ultimately, I think it’s easier to “forget” analytical power and regain constructive power than it is to go the other way. I’d be willing to put some money on this being a very fundamental logical tradeoff.

                      So: where are the research projects for EDA in Haskell? I’d love to see them.

                      1. 2

                        While this is all true, the actual workflow is quite different than it is in a language like Clojure. A lot of the time I don’t know the solution up front, and I don’t know what the types are going to be in the end. It’s actually valuable to me to be able to work with an incomplete solution. With Clojure, I can use the REPL to explore the shape of the solution. I can also do a workflow that’s very similar to type driven development with Spec as seen here.

                        Personally, I find that I generally want a specification at the API level, as opposed to having to structure all my code around types. For me, the Clojure default is simply more ergonomic. Meanwhile, Spec provides me both with sufficient guarantees regarding what the code is doing and with a meaningful semantic specification.

                  2. 3

                    Any dynamic procedure can be implemented in any turing-complete static language; you just have to express your uncertainty about its runtime behavior in the type system. Even in a total language, you can build your entire program out of partial functions just by modeling that partiality with Maybe return values. Even in a pure language, you can build your entire program out of impure functions just by modeling that impurity with an IO monad or something similar. Even in a language with strong data types, you can write all the dynamically-typed code you want just by creating a large sum type of possible cases and using those overly-broad types everywhere (which will almost certainly require modeling some partiality as well). In general, you can write arbitrarily dynamic code if you’re willing to get very few useful guarantees from your type system. That may sound bad, but it’s no worse than writing in a dynamic language, where you get no static guarantees at all from your language (except maybe some very broad invariants about, for example, memory safety).
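
                    For instance, a minimal sketch of that “large sum type plus modeled partiality” style, in TypeScript with invented names:

                    // One overly-broad sum type stands in for "any value":
                    type Dyn =
                      | { tag: 'num'; n: number }
                      | { tag: 'str'; s: string }
                      | { tag: 'list'; items: Dyn[] };

                    // Every operation is partial; "wrong kind at runtime" becomes undefined.
                    const asNum = (d: Dyn): number | undefined =>
                      d.tag === 'num' ? d.n : undefined;

                    const addOne = (d: Dyn): Dyn | undefined => {
                      const n = asNum(d);
                      return n === undefined ? undefined : { tag: 'num', n: n + 1 };
                    };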

                    1. 1

                      And everything you can do in a language like Haskell, can be done using assembly. Yet, I think we’ll agree that ergonomics of writing assembly are not the same. It’s simply not interesting to discuss what’s possible in principle, it’s the actual workflow that the language facilitates that matters. The way you express yourself in a dynamic language is very different from the way you express yourself in a static one.

              3. 7

                I watched Rich’s talk a couple of times, and I think that positional issues can’t be dismissed. Even if you have record syntax for data, to my knowledge you don’t have this for types. Every time I pull up one of the more complex Haskell libraries, I first have to spend a bunch of time deciphering what I need to put in each position of the types. You end up having to rely on convention and hope people don’t reorder.

                Named arguments are somewhat self-documenting and help with the debugging scenario as well.

                I like that ADTs end up breaking all over the place in code when I change things, but I also like how my Python code and things like dictionaries mean that I can work on objects by concentrating only on the parts that “matter”. It’s a tension, but I don’t think it’s a dichotomy, and we can have solutions.

                ADTs that represent objects (Money or Complex) are definitely of the kind where you want to reconfirm everything, but what about ADTs for configuration objects? Does adding another configuration parameter really require auditing everything? The canonical solution might be to decompose that a bit further to avoid this, but then you run into the accretion problem: I’m writing wrappers upon wrappers for things.

                It’s a shame that Rich decided to be so rant-y, because there were some valid points in there. The common ADT patterns force us down a very specific kind of static verification that ends up shoehorning us into verbose patterns. I disagree about overall maintenance, but it does lead us to boilerplate.

                1. 7

                  The issue with losing positional arguments in types is that it’s desirable to be able to talk about partial application without creating new type names (as equality in types gets more complex you run into hairy, very difficult problems quickly, so: if you solve partial application with aliases, how do aliased types equate?).

                  For instance, Scala has the worst of both worlds: positional types and very, very painful partial application.

                  With respect to your earlier statements though I agree a lot. ADTs are overly strict in that they represent a demand of both existence and completeness. Most Python idioms rely on an informal “structural subtyping” formalism (what people call duck typing sometimes) which is verrrry nice.

                  This works very well in the statically typed world: see OCaml or Purescript’s record types (and open variants).

                  1. 1

                    Most Python idioms rely on an informal “structural subtyping” formalism (what people call duck typing sometimes) which is verrrry nice.

                    I guess you’re referring to stuff like __iter__ and __str__ etc? It’s been a long time since I used Python, but I did like those “duck typed interfaces”.

                    But did you know there’s something similar in Clojure, with “protocols”? :)

                    1. 2

                      I meant to point more at the heavy use of dictionaries in Python, though the more I think about it the less it seems like a clear example.

                      Clojure really is one of the clearest examples. Idiomatic Clojure just involves passing dicts around and assuming you’ve got the right keys, hoping that there’s a solution (is there an intersection between the keys produced and the keys needed across all points in dataflow?). This is quintessentially “duck typing” and I think it’s a practice that’s well explored in the world of row types.

                  2. 2

                    It’s a shame that Rich decided to be so rant-y

                    How exactly was he ranty?

                2. 3

                  At first I found it really hard to track what the author is trying to explain here. But…

                  It’s easy to hear critiques of static typing and interpret them as “static typing is bad”. Rich Hickey was certainly not doing that. He had a particular problem to solve and was noting how static typing was not particularly well-suited to that problem.

                  What I gather so far is that somebody, or some people, became agitated because they absolutely love static typing, and love toward dynamic typing is inexcusable to them. This post attempts to keep up relations and rebuild the burnt bridges.

                  I find the whole discussion idiotic. And I’ve participated in it many times, defending my favorite and demoting the opponent. I’ve been pretty dumb in this sense, and I will probably do something equally dumb again.

                  Static vs. dynamic typing debates are harmful because they form an irrational bias depending on which group you feel you belong to. They lead to early dismissal of ideas that could otherwise fly.

                  Static vs. dynamic typing debates are almost always centered on the languages each participant happens to use. If you look at the selection available, you can find an awful specimen of a dynamically typed language just as easily as you can find one among the statically typed languages. “That’s not really a statically/dynamically typed language.” Congratulations on yet another great sample of the “no true Scotsman” argument, your very own little inbred brainchild that nobody shares with you!

                  “But it’s not type safe.” – No type checking before runtime, which means no type safety. Of course not! That kind of type safety is a consequence of static typing. By the way, if you argue this you usually hold an axiom that safety is paramount and cannot be compromised. Ironically, you compromise it yourself, because type safety is not the same as safety, and you deny yourself the discussion around that subject.

                  “This is too restrictive!” – Well… um. What? Do you know how restrictive I find the lack of operator overloading in JavaScript? Restrictiveness is not strictly a property of static typing. It results from a dull mind, and from an opponent’s belief you took up as your weapon: the belief that the most annoying restrictions in his type system are a protection against misuse.

                  Every other babble and argument you usually throw up in static vs. dynamic is emotional in nature. It is rare to see someone admire and observe the differences in order to learn something. It’s more like you’re trying to protect your tribe from some imaginary conclusion, or at worst from an imaginary opinion!

                  Take “The dark side of Java” (1997), replace “Java” with your language of choice, and the story holds true. This is all about that, nothing else.

                  1. 1

                    “But it’s not type safe.”

                    I agree that this is not an argument, but for a different reason. What do people mean by type safety? The original definition is about progress and preservation. Under this definition all dynamically typed languages are type safe because they do not get stuck.

                    Sometimes people actually mean memory safety, but dynamically typed languages are memory safe as well.

                    The safety the static camp desires is actually about the contract between caller and callee: which parameters are allowed and which are not. I do not know a term for that, though.

                  2. 4

                    The premise of this article is invalidated by the observation that “static vs dynamic” types are a false dichotomy

                    1. 1

                      Why is that a false dichotomy?

                      1. 6

                        There are many ways you could put a language’s type semantics into a category. You could say it is based on Hindley-Milner inference like Haskell, or sequent calculus like Shen. You could say it allows arbitrary side effects like OCaml or it supports managed side effects like Haskell. Even the differences between Purescript and Elm are enough to be two different categories of type systems. You could say it supports custom types like Clojure (deftype), or it supports classes like Python. You could say it supports objects as types like Ruby or as prototypes like Io.

                        Given the multiplicity of forms that a type system can take, why do we put all type systems into either a “static” or “dynamic” category? Are these specific categories useful? I really don’t think so. I love writing both Clojure and Haskell because I think both languages are well designed for very different reasons. Both languages have monadic abstractions as part of the standard library (which Python/Ruby/JS do not have), and when you look at it from that perspective, Clojure and Haskell seem a lot more similar than they do different.

                        1. 2

                          I definitely buy that there are other dimensions for evaluating languages, but the existence of a rich static type system is a giant one. That said, I basically don’t see substantial similarity between static types and dynamic types. In my experience and understanding, types solve very different problems than classes or “tags” (like deftype).

                          Maybe not “false” dichotomy, more “one of many”.

                          1. 2

                            Whatever you want to call it I still don’t think “static” vs “dynamic” is that useful, it just doesn’t tell me much about the language. Also consider my second paragraph here: https://lobste.rs/s/l9foze/clojure_vs_static_typing_world#c_htqfpw

                            1. 1

                              We’ll have to agree to disagree. Fwiw, I don’t think Clojure captures a monadic interface at all. You can do it a little bit (see funcool/cats) but it’s difficult and doesn’t really capture the abstraction well. I’ve never seen a dynamic language talk about monads effectively.

                          2. 2

                            Certainly there are many different kinds of type systems. But that doesn’t mean static/dynamic is a false dichotomy. What it’s really getting at is “Can I know the type of this variable at compile-time? Or do I need to run the program in order to find out?”

                            There is some crossover. From the dynamic camp, Python and Clojure are experimenting with gradual type systems. From the static camp, C# has added the dynamic keyword.

                            1. 4

                              As long as you can construct a type hole, or use a similar trick, Dynamic can almost always be built in a “statically typed” language even without a keyword.

                              1. 2

                                Okay but, unless I’m reading you incorrectly, your implied definition of “type system” derives from assuming that static types are the “true” types, and dynamic types are just “static types that we call eval on” or something like that.

                                This is why I mentioned Foucault in the other comment: the trueness of your static type system is only relevant in the context of your type theory. If you have a different type theory, your type system won’t have as much truthiness to it. This is why Shen is so interesting: its type system subsumes Hindley-Milner. Most static-type fanatics only know HM type systems, and they perceive HM as the one true type system. If Shen’s sequent calculus type system can subsume HM, then what does that say about the truthiness of HM?

                                The problem is the truthy claim of any type system over another, hence the false dichotomy. So forget about static or dynamic. Your question, “Can I know the type of this variable at compile- or run-time?” is a better question, though I would replace it with “When do types become relevant in this language? Is it only compile-time (Haskell), only run-time (Python), or both (Shen)?” In addition, I would encourage considering other aspects of type systems, such as whether or not they allow arbitrary side-effects, whether or not they are built around monads. Don’t just jump to “is it static or dynamic?”

                                tl;dr focus on the functional qualities of type systems in languages, not whether they conform to the “true type theory”.

                                1. 2

                                  I think there’s a huge distinction because “interpretation” or “reduction” or “runtime” has wildly different properties than static analysis. Even dependent types, which “blur the boundary”, don’t really blur it; they bring small amounts of computational power to static analysis.

                                  What makes runtime so much more complex? The easy answer is “side effects”, but we can go further and consider the question even for entirely pure languages: what makes runtime so different?

                                  I like to view it in terms of abstract interpretation. Outside of runtime there are many sorts of approximations you could run and a wide variety of interpretations and expenses that are possible. Runtime collapses all of the “abstract” interpretation into a single concrete one. It feels topological: we’ve moved from discussions of opens to a single point.

                                  From one perspective this is nice or perhaps even not such a big deal, but those two worlds are different. The topology of abstract interpretations is rich with structure and can be related to other structures (model checkers, logical statements, etc). Runtime kills off all of this structure in favor of concreteness: this is the answer. Of course, as programmers we’re interested in the answer (but not everyone is: see proof checkers).

                                  It’s on the basis of this that I see a huge distinction between static and dynamic. Within the world of “static” there is wide variation in capability (the entire topology of abstract interpretations), but it is all separated from the world of “runtime”. Within “runtime” there’s also a whole world of power, but each thing behaves totally differently from the elements of the static world.

                                  In some sense, I visualize this space as a giant general cone: http://philschatz.com/precalculus-book/resources/CNX_Precalc_Figure_10_04_001.jpg

                                  The spaces of static interpretation and dynamic interpretation are totally distinct, touching only at the point of compilation. Each is rich, plausibly unboundedly so, and you can relate the two structures (hyperbola), but they also have their own unique perspectives (circles, ellipses).

                                  1. 1

                                    I don’t know what you mean by truth in this context. Are there any false type systems? If JavaScript says typeof(2) is “number”, then that’s true in the JavaScript context.

                                    Most static-type-fanatics only know HM type systems, and they perceive HM as the one true type system. If Shen’s sequent calculus type system can subsume HM, when what does that say about the truthiness of HM?

                                    In this case I would describe Shen’s type system as “more powerful” or perhaps “more descriptive”. It doesn’t make HM false by any means.

                                    I would encourage considering other aspects of type systems, such as whether or not they allow arbitrary side-effects, whether or not they are built around monads. Don’t just jump to “is it static or dynamic?”

                                    The trouble with not knowing your types until runtime is they are sometimes path-dependent. A dynamically-typed function might return a number now, and a string later. It’s much harder to analyze the program for problems. When types are known at compile-time, you have the opportunity to check for correctness, and it will apply to all possible runtime configurations.

                                    1. 2

                                      In this case I would describe Shen’s type system as “more powerful” or perhaps “more descriptive”. It doesn’t make HM false by any means.

                                      Yes, this is exactly my point. True and false aren’t a useful distinction for type systems. The problem is that the way most static vs dynamic arguments are framed is by reference to type theory as “the one truth”. Then, a “good” type system is one that adheres to the One True Type Theory. Thinking about type systems in this way severely limits your perspective.

                                      The trouble with not knowing your types until runtime is they are sometimes path-dependent. A dynamically-typed function might return a number now, and a string later. It’s much harder to analyze the program for problems.

                                      Right, this is a problem that certain type systems have. Notice that, in the next sentence, the only solution you can think of comes directly from the static vs dynamic dichotomy that you already set up in your head.

                                      When types are known at compile-time, you have the opportunity to check for correctness, and it will apply to all possible runtime configurations.

                                      What if there was another way to check for correctness that doesn’t depend on static analysis? Clojure uses spec and test.check to do automated property testing; is this not correctness-checking? What if there was a different way to guarantee safety without analysis? I don’t know the answer, I’m just saying that your framing of the problem is limiting your possible solutions.

                                      1. 2

                                        Clojure uses spec and test.check to do automated property testing; is this not correctness-checking?

                                        This is testing. In contrast, a static type checker proves correctness. Testing is a proof only if it is exhaustive, and that is not feasible for most programs.

                                        1. 1

                                          Static type checkers don’t prove correctness of the code, only the correctness of the code’s types. It’s telling that one of the big innovations in testing, generative/property-based testing, came from Haskell.

                              2. 3

                                Because it’s a continuum. Forth has a total of one type; C pretends to have more than one type but isn’t particularly convincing about it; Ruby has dynamic types and nils are everywhere; Erlang has dynamic types with no nils; Racket can have dynamic or static types without any nils to be seen; Java has static types, but every single type is unioned with null, so you don’t get very good guarantees about runtime behavior, etc.

                                1. 2

                                  The static v dynamic types themselves are still dichotomous: information exists either at compile time or runtime. Dependently typed langs screw with that but it mostly holds.

                                  A whole language can take part in both sides and use either side to try to give various kinds of guarantees of various strengths, but behind it all there’s still a strong distinction between dynamic and static.

                                  1. 2

                                    Yeah, that’s basically what I was saying. It’s a false dichotomy when you use it to categorize languages into one of two buckets, but it’s a valid criterion for describing a given language, as long as the description has room for more than a single bit.

                            2. 1

                              Interesting perspective on Clojure vs Haskell, and to some extent dynamic vs static typing.