1. 17

    Nothing was explained. The author seems to think that they’re demystifying the topic, but they’re just showing how to implement monads in Rust. If I were to try to explain monads by posting Monte code implementing monads, then nothing would be revealed or communicated, other than that monads are possible in Monte. But monads are possible in most environments, and are usually only impossible because of language expressivity problems; for example, neither the author’s Rust presentation nor my Monte presentation enforce the monad laws.

    I think the author understands this criticism. When they say:

    monads exist in functional programming to encapsulate state in a way that doesn’t explode functional programming

    I think that they know exactly how wrong this is. Monads are natural structures in both senses of the word; they arise out of endofunctors, which themselves are extremely natural and common structures in programming languages because we often want to repeat actions (which leads to the list monad and non-determinism), pass arguments to subfunctions (which leads to the state monad), or add to a mutable accumulator (which leads to the writer monad). Even more advanced patterns, like inversion of control, lead to continuation-passing monads!
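    To make this concrete, here is a sketch (TypeScript, my own illustration rather than anything from the article; the names `listBind` and `writerBind` are made up) of two of those monads arising naturally:

```typescript
// List monad: bind is just flatMap, which models non-determinism
// (each step may return several candidate results).
const listBind = <A, B>(xs: A[], f: (a: A) => B[]): B[] => xs.flatMap(f);

// Writer monad: a value paired with an accumulated log.
type Writer<A> = [A, string[]];
const writerBind = <A, B>(w: Writer<A>, f: (a: A) => Writer<B>): Writer<B> => {
  const [a, log] = w;
  const [b, more] = f(a);
  return [b, [...log, ...more]];
};

// Non-determinism: pair every x with every y.
const pairs = listBind([1, 2], (x) => listBind([10, 20], (y) => [[x, y]]));

// Accumulation: carry a log alongside the computation.
const doubled = writerBind([3, ["start"]], (n) => [n * 2, ["doubled"]]);
```

    Neither helper "explodes" anything; the monad structure just falls out of repeating actions and threading an accumulator.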

    On the other side of the argument, languages like OCaml have both a modicum of functional programming and also true mutability. Monads are not only not needed to manage state, but actively frowned upon as often overcomplicating otherwise-simple code; the community seems to only use monadic styles when doing asynchronous or networking work, as in Lwt.

    Honestly, this last point could be expanded upon, because I could give an entirely different justification for monads which ignores the entire “functional programming” context: Monads are the simplest abstraction which precisely describes asynchronous promise pipelining behaviors while preventing plan interference. OCaml’s Lwt, Python’s Twisted’s Deferred, Ruby’s EventMachine, and the entire design of Node.js and Deno all revolve around behaviors which turn out to look an awful lot like some sort of asynchronous error-handling monad.
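    The resemblance is easy to see in plain Promise code (a sketch of my own, not taken from any of the libraries named above): `.then` plays the role of bind, and a rejection short-circuits the rest of the pipeline the way an error monad would.

```typescript
// Promise pipelining: each .then feeds the previous result forward, and a
// rejection skips the remaining .then steps, like monadic error handling.
// (Promises famously bend the monad laws by auto-flattening nested
// promises, but the pipelining shape is the same.)
const fetchUser = (id: number): Promise<string> =>
  id > 0 ? Promise.resolve(`user${id}`) : Promise.reject(new Error("bad id"));

const pipeline = fetchUser(7)
  .then((name) => name.toUpperCase()) // runs only on success
  .then((name) => `<${name}>`)
  .catch(() => "<anonymous>"); // one handler covers the whole chain
```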

    I found it useful to contrast this article with this one and that one, which are also both focused on implementing monads in Rust. However, neither of them is trying to explain what monads are. I think that this article would have been sharper if it had focused either on how monads could be implemented in Rust, or instead on the mathematical foundations of monads.

    1. 3

      I do like how you’ve separated out two things that are another regular cause of Monad Tutorialitis: what they are and when they’re the right choice.

      Monads are a way of modeling call-and-response effects into an environment without needing RTS support. So they let you port different kinds of RTS behavior in wherever you want. This leads to lots of fun attributes like failure, nondet, streaming, and, of course, asynchrony.

      If your RTS supports all the effects you need then monads will seem like outrageous overkill. They’re also not the only way to achieve this modeling (you can twist exceptions very far, e.g.) but it turns out that few people are familiar with data flow languages or want JS to be one, so the monad pattern shows up again and again as an implementation of first-order asynchrony semantics.

    1. 6

      I feel like this is one (of many) critical realizations you pick up when you start to learn mathematics more seriously. Mathematicians are very serious about taking care to distinguish representation and object, often because much of their mechanism arises out of proving that a representation is (sufficiently) faithful to the genuine object and thus useful. Alternatively, because so much generality arises out of finding new ways to dispense with burdensome representations.

      An example people likely have experience with is linear algebra. Honestly, LA is about linear mappings between vector spaces, but we regularly spend our time working with matrices. Matrices are just representations of those mappings, available for a certain—highly useful!—subset of possible linear mappings. Matrices enable great algorithms and give insight into properties of linear maps. Remembering that matrices are just representations lets us port knowledge to, say, infinite-dimensional vector spaces where the representations no longer quite make sense.

      1. 4

        So…. is this different from structural typing / static duck typing?

        1. 12

          Good question. I guess it adds compile-time quack-checking?

          1. 9

            structural typing

            You could have structurally typed records without row polymorphism. You might say that { x: int, y: float } is the same type as { y: float, x: int }, but it’s not the same type as { x: int, y: float, z: string }.

            That’s how SML works. You can have two different record types with some fields in common:

            val a = { x = 1, y = 2.0 };
            val b = { x = 3, y = 4.0, z = "hello" };
            

            and then you can use the same field selector on both:

            val ax = #x a;
            val bx = #x b;
            

            But you can’t take #x and package it up into a function, because the type system has no concept of “any record type with an x field”:

            fun getX r = #x r;  (* Error: unresolved flex record (can't tell what fields there are besides #x) *)
            

            static duck typing

            Are you thinking of Go interfaces, or C++ templates? (Or something else? :D) I think either one could plausibly be called compile-time duck typing.

            In Go you can use interfaces to have a function accept any struct that meets some requirements. But I don’t think you could express a function like this:

            // addColor has type { ...fields } -> { color: string, ...fields }
            let addColor (c : string) r = { color = c, ...r };
            let bluePoint2d = addColor "blue" { x = 1, y = 2 };
            let redPoint3d = addColor "red" { x = 3, y = 4, z = 5 };
            

            addColor can add a “color” field to any record type, without knowing which concrete type it was given. But the caller knows which record type it’s using, so it doesn’t need to cast the result.
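            Interestingly, modern TypeScript can get fairly close to this particular example with generics plus object spread (a sketch of mine, reusing the hypothetical addColor from above):

```typescript
// A generic addColor: accepts any object type T and returns T plus a color
// field. The caller's concrete type flows through, so no casts are needed.
function addColor<T extends object>(c: string, r: T): T & { color: string } {
  return { ...r, color: c };
}

const bluePoint2d = addColor("blue", { x: 1, y: 2 });
const redPoint3d = addColor("red", { x: 3, y: 4, z: 5 });
// bluePoint2d.color and redPoint3d.z both typecheck.
```

            Unlike true row polymorphism, though, nothing here stops `r` from already containing a color field; the intersection type just silently absorbs it.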

            In C++ you can write getX as a templated function:

            template<class T>
            auto getX(T r) { return r.x; }
            

            But now the compiler doesn’t typecheck the definition of getX: it just waits for you to use it, and typechecks each use. That’s like duck typing (just pass it in, and see if it works), but at compile time. Just like with run-time duck typing, the problem is you can’t look at the signature of getX and understand what its requirements are.

            1. 1

              Are you thinking of Go interfaces, or C++ templates?

              Mostly TypeScript actually :) I’m not sure where the line is because TS calls what they do “structural typing” but I’m having a hard time seeing where the differences lie.

              1. 1

                Type inference for SML code using field selectors is hilariously bad, precisely because the natural and obvious principal types (having row variables in them) are inexpressible in SML.

                Out of curiosity, how would you expect your addColor function to work on records that already contain a color field? The purist’s answer would be to use something like Ur/Web’s disjointness witnesses, but, gawd, the syntax is super awful.

                1. 2

                  Here’s an example in purescript that will only add color to records that don’t already have a color key:

                  module Rows where
                  
                  import Prelude
                  import Effect (Effect)
                  import Effect.Class.Console as Console
                  import Prim.Row (class Lacks)
                  import Record as Record
                  
                  addColor ::
                    forall r.
                    Lacks "color" r =>
                    String ->
                    { | r } ->
                    { color :: String | r }
                  addColor color record = Record.union { color } record
                  
                  main :: Effect Unit
                  main = do
                    Console.logShow $ addColor "red" { foo: 1 }
                   {- this won't compile
                      Console.logShow $ addColor "red" { color: "blue" }
                   -}
                  
                  1. 1

                    Fascinating. How is this Lacks type class implemented? Compiler magic?

                    1. 2

                      It’s part of the Prim.Row builtin.

              2. 4

                It’s a kind of structural typing, so, I suppose it’s a kind of static duck typing.

                In general, with all of these, the more flexible the system the harder it is to work with and the less capable the inference is. Row typing is one way to weaken general structural subtyping by limiting the subtyping relationship to that which can be generated by the record/row type. So Row types are a bit easier to infer.
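                A concrete way to see the difference in a structurally subtyped language (TypeScript here; my own sketch): width subtyping accepts extra fields but then forgets them, whereas a row variable would carry them through.

```typescript
// Structural (width) subtyping: a value with extra fields is assignable
// to a narrower object type...
type Point2 = { x: number; y: number };
const p3 = { x: 1, y: 2, z: 3 };
const p: Point2 = p3; // fine

// ...but the extra field z is forgotten by the type Point2. A row-
// polymorphic getX would instead remember the full row of its argument.
function getX(r: { x: number }): number {
  return r.x;
}
const gotX = getX(p3);
```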

                1. 3

                  I wrote a post a long time ago about differences between row polymorphism and (structural) subtyping: https://brianmckenna.org/blog/row_polymorphism_isnt_subtyping

                  1. 1

                    Oh I just realised my post was mentioned at the bottom of the posted one. Awesome!

                1. 27

                  So AFAICT this is the tradeoff the author consciously rejects, and the one that Svelte consciously chooses:

                  • Choose writing in a JS framework that is compiled ahead of time, over writing in a JS framework that is interpreted at runtime.

                  The disadvantages of this tradeoff that weigh on the author’s mind:

                  • If your language is compiled, debugging is harder because the compiled code that is run does not resemble the source code you have to fix.
                    • They also make the point that it’s confusing that Svelte code is Javascript, but it needs to be compiled to run it, which may change its behaviour. (To me that’s not that different from frontend framework code that is valid JS/HTML, but needs the framework runtime to run it, which may change its behaviour.)
                  • If in the future more front-end-systems compile to Javascript instead of writing in it, it becomes harder to glue them together.

                  I think it’s interesting to look how Elm solved these, because like Svelte, it is compiled ahead of time to small and fast JavaScript that doesn’t resemble the source code.

                  Elm’s solution to ‘you have to choose between debugging the runtime JS or the source code’ is to go all-in on making it easy to debug the source code. In Elm’s case, it is an ML-family language with a type system that guarantees zero runtime errors (but won’t save you from domain mistakes, obv.), and with compilation error messages that are so helpful that they have inspired many other languages.

                  Svelte, presumably, wants to remain Javascript, so a lot of error prevention becomes harder. They mentioned they want to add Typescript support. Or they could add source maps that relate compiled JS to the original Svelte code? Also, debugging compiled projects is a very old craft, it only really gets onerous if the problem is low-level or compilation is slow. I also note that Svelte compilation has a ‘dev’ flag that produces named functions, and also extra code that performs runtime checks and provides debugging info.

                  Elm’s solution to the interoperation problem: an Elm module can expose ports (blog post, docs) that external JS can send messages into, or that JS can subscribe to. So the ports form an Elm module’s public API.

                  That still leaves the interop problem of styling the created components. If it’s you writing the Svelte, you can let Svelte do the styling (if Svelte is the whole system), or specify the right class names on the created DOM (if you’re writing the Svelte component as a subsystem). But if you’re reusing somebody else’s Svelte component, I’m not sure how easy it is to pass in the class names you’d like the component to use. Perhaps ‘support for caller-specified class names’ is even an open problem / blind spot in frontend frameworks in general?

                  1. 8

                    Good summary.

                    In one sense, all of the hard-fought web knowledge that people may subconsciously pride themselves on knowing now becomes a stumbling block. Technologies like Svelte treat the underlying web runtime as something to be papered over and ignored. Much like compiled C, being able to debug the generated code is a needed skill, but the 1-to-1 correspondence between source code and generated code is not a guarantee, and it can be disconcerting to let go of that.

                    I’m all for it. We largely ignore x86/x64 by using higher level languages and our code is better for it, even if slightly inefficient.

                    Web devs love to talk of developer experience and progress in tooling. Something something…Cambrian explosion? ;)

                    1. 10

                      I think the author’s problem isn’t so much with it being compiled, but the fact that the source code looks like JS, but your assumptions don’t hold because there’s a lot happening to that JS so the end result isn’t anything like what you typed.

                      1. 4

                        Yes, I agree. Elm is a language that has its own semantics which are adhered to by its compiler. But Svelte takes the semantics of an existing language (JS) and changes them.

                        I have that concern about Svelte too, though it’s not strong enough to change the fact that I’m still a fan and excited to see how Svelte evolves.

                        1. 3

                          Reminds me very much of the Duck Test https://en.m.wikipedia.org/wiki/Duck_test. Svelte walks like JS and talks like JS but isn’t JS. This is typically seen as a positive for those who judge their tools, at least partly, based on familiarity.

                          1. 1

                            That makes the article make more sense. That would be difficult to reckon with.

                        2. 5

                          Or they could add source maps that relate compiled JS to the original Svelte code?

                          Svelte creates JS and CSS source maps on compilation - https://svelte.dev/docs#svelte_compile

                          There’s also the @debug helper for templates - https://svelte.dev/docs#debug

                          In practice I’ve found debugging Svelte to be mostly trivial and sometimes difficult. Dev tools will help close the gap but they’re not mature.

                          For styling, style encapsulation is what I’ve seen the devs recommend, but nothing stops you from passing classes as props to components that accept them. (I do that a lot because I like utility CSS libraries like Tailwind) The biggest open RFC right now is about passing CSS custom properties (CSS vars) to components. https://github.com/sveltejs/rfcs/pull/13

                          1. 1

                            I think the tradeoff here isn’t about source mapping and the like, but instead that if you take it as given that you’re going to compile your language, then you might as well throw more language safety features in (a la Elm).

                            That might be true, but the other sacrifice is familiarity. Svelte can be learned by a frontend dev very quickly and without much “relearning” fear. Instead, you get the cognitive dissonance problem of it being almost what you expect but, then, not quite.

                            1. 2

                              if you take it as given that you’re going to compile your language, then you might as well throw more language safety features in (a la Elm).

                              There’s a big leap from Svelte to Elm beyond just compiling the language. Elm has tremendous benefits, definitely, but it gives up seamless interop with the DOM, mutable web APIs, JS libraries, and future web standards. Elm has JS interop but only asynchronously through its ports. Purity and soundness are great but at what cost? (that’s a rhetorical holy war question, not worth discussing here IMO!)

                              I think TypeScript has been gaining so much adoption largely because it makes pragmatic compromises everywhere, not just because people are resistant to learning Elm/Reason/PureScript/Haskell/etc, and when support lands in Svelte I’ll be less shy about recommending it to people.

                              1. 2

                                Yeah, I think for most people in most cases, familiarity and ease are the bigger win. I’m not arguing one should use Elm, just laying out that continuum.

                                1. 2

                                  Thanks for emphasizing that point. I think I underestimate the impact of familiarity and ease for many people.

                                2. 1

                                  By seamless interop do you mean synchronous? I started trying out Svelte yesterday and found the dom interop to not be seamless as I was confused by the difference between <input value={val} /> and <input on:value={val} />

                                  I think from memory that’s how you get an interactive input.

                                  1. 1

                                    I meant seamless but that’s overstating it. (except for mutable web APIs and JS libraries - interop there is generally seamless because of how Svelte extends JS) Anything that isn’t plain HTML/CSS/JS is going to have seams. Svelte minimizes them to a degree that some other frameworks don’t, like React and especially Elm. Vue is on similar footing as Svelte.

                                    The nice thing about Svelte’s seams is they often reduce the complexity and verbosity of interacting with the DOM. In your example, it sounds like you want:

                                    <input bind:value={val} />

                                    (or simply <input bind:value /> if that’s the name)

                                     At the same time, Svelte gives you the flexibility to optionally use a “controlled input” like React:

                                    <input value={val} on:input={updateValue} />

                                    The equivalent in plain HTML/JS is not as pleasant. Elm abstracts away the DOM element and events.

                          1. 4

                            A shift in perspective that I enjoy is that it’s not that the world is as it is and equality is broken, but instead that picking an equality tells us what the world is. Your choice of equality nails down the finest distinction you’re allowed to make.

                            So, practically speaking, tossing away access to reference equality is a necessary part of abstracting away our notion of PLs operating on a computer atop a pool of memory. If we retain it, we can make distinctions which violate that model of the world.

                            Often the trouble is that IEEE floats are designed such that they cannot be abstracted from their memory representation. On the one hand, NaN != NaN. On the other hand, Floats are supposed to represent Real numbers when you throw away the memory model, but Real numbers don’t support computable equality at all!

                            So IEEE floating point is pretty incompatible with a language model that tries to hide memory representations. But it’s critical for fast math. Now, the correctness of your language has to do not just with equality but how it allows these two incompatible worlds to live together as harmoniously as possible.
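                            The mismatch shows up in two lines of any JS console (nothing language-specific here, just my own illustration):

```typescript
// IEEE 754 equality: NaN is unequal to everything, including itself.
const reflexive = Number.NaN === Number.NaN; // false

// Object.is exposes the "same representation" notion instead, which
// also distinguishes +0 from -0.
const sameRep = Object.is(Number.NaN, Number.NaN); // true
const zeroes = Object.is(0, -0); // false
```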

                            1. 2

                              Your choice of equality nails down the finest distinction you’re allowed to make.

                              I love this way of putting it!

                            1. 10

                              Types help reduce the burden of unit tests. I believe this without a doubt.

                              The reasoning isn’t so much that a type replaces a test but instead that the discipline of types helps minimize the number of possible states of the system and helps define a limited set of inputs and outputs. In other words, types do something a little orthogonal to testing: they minimize surface area.

                              The upshot is that you over time discover that a lot of your unit testing effort ended up being burned on handling edge states and semi-appropriate inputs. Those tests just vanish—there’s no need.

                              That said, many other behaviors you’d want to validate with a unit test will remain. Types can help clear up interfaces to make property-based/fuzz testing more appropriate, though. YMMV.

                              The approach of trying to encode more invariants into the types themselves is super interesting. It’s clear that you can use this method to outlaw even more obviously bad behaviors, but in most languages there’s a hefty cost for introducing these techniques. They’re much more complex and can cause mental model load in working out a program that satisfies the types. On the other hand, there’s a lot of satisfaction in playing “type tetris” where the moment you assemble a program that checks, you have some confidence that it’s sane (if not totally correct).
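                              A cheap version of the technique, sketched in TypeScript (the names are mine): push the runtime check to the boundary once, and the type makes the checked property free everywhere else.

```typescript
// Encode "non-empty" in the type: a NonEmpty<A> stores its first element
// separately, so head is total and needs neither a runtime check nor a
// unit test for the empty case.
type NonEmpty<A> = { head: A; tail: A[] };

const fromArray = <A>(xs: A[]): NonEmpty<A> | null =>
  xs.length === 0 ? null : { head: xs[0], tail: xs.slice(1) };

const head = <A>(ne: NonEmpty<A>): A => ne.head; // cannot fail

// The empty case is handled once, at the boundary, not in every caller.
const ne = fromArray([1, 2, 3]);
const first = ne === null ? -1 : head(ne);
```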

                              Another thing you may notice as you make the transition is that type-driven development is probably 2-3x faster than test driven development while obtaining similar results. You can often fluently describe a whole interface, library, or even program just considering the types while obtaining a very decent skeleton of the program. You can iterate through designs extremely quickly since (a) the type checker is very fast and (b) all you’re writing are specifications, not tests and implementations.

                              Finally, most critically, we often rely on tests as a safety net to enable refactoring. Types don’t fully replace this use case either, but they make refactoring so much easier and joyful that it boggles my mind to consider refactoring without types nowadays. With a good type system, you just make the breaking change, hit compile, and then work through the list of 5, 10, 60 places that things have broken. It’s nearly mindless, as it should be.

                              1. 7

                                Probably one of the best use cases for functional package managers right now is for creating reproducible development environments on any machine. I am looking to set up Nix on both my Mac and Linux servers so my toolchain remains consistent between environments.

                                1. 7

                                  I looked into it last night and it has a serious issue on Catalina. One hopes they resolve it soon.

                                  1. 4

                                     There are relatively straightforward fixes to create a synthetic volume at /nix. It’s not official yet, but it works just fine.

                                    https://github.com/NixOS/nix/issues/2925#issuecomment-539570232

                                    1. 3

                                      This is what I use and what I’m going to include in my tutorial post that I’ll be writing in the next week.

                                1. 6

                                  I didn’t know about niv, and am a bit curious what it actually does.

                                  E.g., $ niv add adisbladis/vgo2nix is given a github project name, and apparently provides the vgo2nix tool. But how is this built? Using the included default.nix? The niv README shows niv add stedolan/jq as an example, a project which doesn’t appear to bundle any .nix files.

                                  1. 7

                                    It depends on what you do with the source. The source itself is just a fixed-output derivation, which can represent e.g. a file or directory. In the post @cadey uses import. import parses and returns the Nix expression at the given path. So e.g. import ./foo.nix will parse and return the expression in foo.nix. If the argument is a directory, e.g. bar in import ./bar, import will parse the default.nix file in the given directory. So, summarized,

                                    import sources.vgo2nix {}
                                    

                                    from the blog post means: parse and return the path sources.vgo2nix evaluates to. Since this results in a directory, it will parse default.nix in that directory. This (presumably) evaluates to a function / closure, so then this fragment evaluates to:

                                    <some closure> {}
                                    

                                    So {} is then the argument to the closure. {} is an empty attribute set (the closest equivalent in other languages is a dict/hash table).

                                    But since a niv source can be any fixed-output derivation, it does not actually have to be a Nix expression (stored as a file). You can do pretty much anything with it that you want. E.g. I also use niv to fetch (machine learning) models and set environment variables to their paths in the Nix store:

                                    https://github.com/stickeritis/sticker-transformers/blob/master/nix/models.nix

                                    1. 3

                                      Dang, that’s exactly it. I was hunting through the niv documentation because it seemed to be magically doing something more, but it just relies on that default.nix. Solid, thanks!

                                    2. 2

                                      I don’t get it either. The github page says “Easy dependency management for Nix projects” but isn’t that what Nix was for?

                                      1. 5

                                        niv makes it easier to combine stuff from different repositories. Say that you want to pin nixpkgs in a project, you could use fetchFromGithub or fetchTarball to fetch a specific revision. If you ever wanted to update to a newer version of nixpkgs, you would have to bump the revision and update the sha256 hash by hand. niv automates these things. E.g.:

                                        # Add nixpkgs, latest revision from the master branch.
                                        $ niv add NixOS/nixpkgs
                                        # Later: update `nixpkgs` to a later revision.
                                        $ niv update nixpkgs
                                        

                                        niv overlaps somewhat with the proposed flakes support in Nix:

                                        https://github.com/tweag/rfcs/blob/flakes/rfcs/0049-flakes.md

                                      2. 1

                                        vgo2nix is built using its default.nix, yes. I don’t completely understand the point of adding things like jq to projects yet, but I assume that the understanding will come to me at some point. I can be patient.

                                        1. 4

                                          It’s convenient to put that stuff in shell.nix and use it in conjunction with lorri.

                                          For instance, if you are developing in python you might want to use mypy for testing but not require it as a build dependency. It makes sense to throw it in shell.nix. Another use case I’ve found is writing proofs in Coq. This is especially true because opam modules for Coq don’t work properly when globally installed.

                                          Maybe if you are using jq all the time in some project, it makes more sense to throw it in shell.nix than do a global install…

                                        1. 4

                                          You can encode pretty much whatever you want into functions. Here’s the natural numbers, for instance

                                           function fromNat(n) {
                                             if (n === 0) { return (s, z) => z; }
                                             else { return (s, z) => s(fromNat(n - 1)(s, z)); }
                                           }
                                           
                                           function toNat(n) {
                                             return n((x) => x + 1, 0);
                                           }
                                          

                                          There’s even an automatic way to do this encoding to any algebraic data type.
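                                           The scheme is: a value of an algebraic data type becomes a function taking one continuation per constructor. Church booleans (TypeScript, my own sketch) are the smallest example:

```typescript
// A Church boolean takes one continuation per constructor (true/false)
// and returns whichever one matches.
type CBool = <A>(ifTrue: A, ifFalse: A) => A;

const ctrue: CBool = (t, _f) => t;
const cfalse: CBool = (_t, f) => f;

// Pattern matching is just function application.
const toBool = (b: CBool): boolean => b(true, false);
const not = (b: CBool): CBool => (t, f) => b(f, t);
```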

                                          1. 2

                                            There’s something off here. Processes sending one another bytes is quite simple, but it’s built up a lot of cruft. We also expect various kinds of delimitations of those bytes, for complex terminal behavior, quotation, etc.

                                            The complexity here comes to exist because the simple interface didn’t meet the user’s needs, but the promises of the simple interface are still being met.

                                            User needs are complex. Achieving simple solutions to them is incredibly hard. Simple technologies are great and lead to simple compositions, but… there are still a lot of simple legos that must be stacked to reach complex user-desired behaviors.

                                            1. 2

                                              As I pointed out in reddit, this seems like an appropriate use case for the dependent-map library.

                                              1. 1

                                                There are simpler ways, though that one isn’t bad if you hide it behind a nicer API.

                                              1. 3

                                                Some things are a bit unclear to me with constructions like in that “Glimpse of Dependent Haskell”:

                                                We’ve got three integer types

                                                • data N: inductively defined natural numbers
                                                • data Fin n: inductively defined bounded natural numbers
                                                • Int: regular built-in integers

                                                I don’t really see how we can bridge between these at run-time. E.g., lookupS requires a Fin n argument, and the call only typechecks if it’s a valid index. But type-checking happens at compile time, so if we read a list from somewhere and an index from somewhere else at runtime, I expect it would be impossible to even make use of lookupS. Is that correct?

                                                In other words, (how) is it possible to get runtime values to the type level? I’m quite confused…

                                                1. 6

                                                  It’s always a weird trick. You never really get runtime values at compile time, you instead get code which magics up the properties you need. This code is composed of two pieces: a piece which includes a runtime verification of a property and results in a fancy type, and a piece which relates those fancy types together into a verified algorithm.

                                                  In the case of indexing a list of known length, you may have a type Vec n Int for some n. You can write code which reflects the compile-time n down to the runtime and evaluates a check inBounds :: Int -> Maybe (Fin n). This can be applied to integers received at runtime and results in either nothing, or an integer with a property. The codepath following the successful results can be shown to be safe.
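                                                  A rough sketch of those two pieces in Haskell (N, Fin, and lookupS are the article's names; SNat and inBounds are my guesses at what the glue code looks like, not taken from the article):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Inductive naturals, usable at the type level via DataKinds
data N = Z | S N

-- Fin n: naturals strictly less than n
data Fin (n :: N) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Length-indexed vectors
data Vec (n :: N) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total lookup: a Fin n can never index out of bounds of a Vec n
lookupS :: Fin n -> Vec n a -> a
lookupS FZ     (VCons x _)  = x
lookupS (FS i) (VCons _ xs) = lookupS i xs

-- Piece 1: reflect the compile-time n down to a runtime witness
data SNat (n :: N) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Piece 2: a runtime bounds check whose success produces a fancy type
inBounds :: SNat n -> Int -> Maybe (Fin n)
inBounds SZ     _ = Nothing
inBounds (SS _) 0 = Just FZ
inBounds (SS n) k
  | k < 0     = Nothing
  | otherwise = FS <$> inBounds n (k - 1)
```

                                                  An Int read at runtime goes through inBounds first; only the Just branch ever reaches lookupS, so the verified codepath contains no unchecked indexing.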

                                                1. 3

                                                  Intuitively, a program written this way allows us to observe specific traces and swap modules in and out to implement a change without having to deeply understand the structure of the program: the changes don’t depend on the structure but on the combined behavior of the modules.

                                                  This feels awful. Combined states are multiplicative. Having to know the exhibited behavior of the whole system to make any change means you’re managing a huge number of possible states.

                                                  While there is no longer flow control in these modules, what’s worse is that there is implicit flow control in the timing and flow of globally namespaced events. Modules have subtle pre- and post-conditions in these events. How do you know you need to block “bad”? Because (a) other upstream modules may be sending it and (b) other downstream modules may react to it differently than “good”. Now multiply that against every potential state in the system.

                                                  (a) and (b) above are integration points just like any other, but they’re totally implicit.

                                                  You could solve this by hiding parts of the event space from certain modules. That feels like a necessity. But I don’t see how you do this without again integrating these modules.

                                                  1. 11

                                                    Huzzah, more spooky action at a distance, just what programs need. The points of contact between modules become your messages, which are essentially global in scope. And the rules may contradict each other or otherwise clash, and understanding what’s going on requires you to go through each module one by one, understand them fully, and then understand the interactions between them. This isn’t necessarily a deal breaker, but it also isn’t any simpler than any other method.

                                                    Interesting idea, but I’m deeply unconvinced. It seems like making an actual complex system work with this style would lead to exactly the same place as any other paradigm: a collection of modules communicating through well-defined interfaces. Because this is a method of building complex machines that our brains are good at understanding.

                                                    1. 7

                                                      IMO this comes from the fact that writing/extending software easily and, N years later, reading and understanding that software are two entirely different activities, and they push your development style in different directions.

                                                      The ability to write software that integrates easily pushes folks to APIs that favor extension, inversion of control, etc. This is the “industrial java complex” or something like it - and it appears in all languages I’ve ever worked on. I’ve never seen documentation overcome “spooky action at a distance”.

                                                      The ability to read software and understand it pushes you to “if this and this, then this” programming, but can create long methods, lots of direct coupling of APIs etc. I’ve never seen folks resist the urge to clean up the “spaghetti code” that actually made for delicious reading.

                                                      It’s my opinion that this is where we should build more abstractions and tools for human software development, similar to literate programming, layered programming, or model-oriented programming. One set of tools is for writing software quickly and correctly, and another set is for reading and understanding, e.g. macroexpand-1 or gcc -E style views of code for learning & debugging, and a very abstract, easy-to-manipulate view of code that allows for minimal changes for maximal behavioral extension.

                                                      ¿por qué no los dos?

                                                      1. 2

                                                        The points of contact between modules become your messages, which are essentially global in scope.

                                                        This was exactly my thought, too. It reminds me of a trade-off in OOP where I think you had to decide whether you want to be able to either add new types (classes) easily or add new methods easily. One approach allowed the one, the other approach the other. But you could not have both at the same time. Just can’t wrap my head around what exactly was the situation… (it might have been related to the visitor pattern, not sure anymore)

                                                        In this case, the author seems to get easy addition/deletion of functions at the cost of a hard time changing the “communication logic” / blocking semantics (which operation blocks which other operation, defined by block and waitFor). In the standard way, the “communication logic” is easy to change, because you just have to replace && by || or whatever you need, but the addition of new functions is harder.

                                                        1. 3

                                                          That’s sometimes known as the “expression problem”.

                                                          https://eli.thegreenplace.net/2016/the-expression-problem-and-its-solutions/
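                                                          For the curious, here is a minimal Haskell sketch of the two sides of that trade-off (the names are illustrative, not from the linked article):

```haskell
-- Data-type style: adding a new *operation* over Expr is easy (just
-- write another function); adding a new *variant* means touching
-- every existing function.
data Expr = Lit Int | Add Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- Typeclass ("final") style: adding a new *variant* is easy (a new
-- class plus instances); adding a new *operation* means a new
-- interpretation type with instances for every class.
class ExprSym r where
  lit :: Int -> r
  add :: r -> r -> r

-- One interpretation: evaluate to Int
instance ExprSym Int where
  lit = id
  add = (+)
```

                                                          Both styles describe the same little language; they just differ in which axis of extension is cheap.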

                                                      1. 22

                                                        The “O” part is consistently omitted by all comparisons. The most unusual thing about it is that the object system is structural rather than nominal, and there’s type inference for it. Classes are just a handy way to make many objects of the same type.

                                                        For example, the type of a function let f x = x#foo () is < foo : unit -> 'a; .. > -> 'a, which means “any object that provides method foo with type unit -> anything”:

                                                        # let f x = x#foo () ;;
                                                        val f : < foo : unit -> 'a; .. > -> 'a = <fun>
                                                        
                                                        # let o = object 
                                                          method foo () = print_endline "foo" 
                                                        end ;;
                                                        val o : < foo : unit -> unit > = <obj>
                                                        
                                                        # f o ;;
                                                        foo
                                                        - : unit = ()
                                                        
                                                        # let o' = object
                                                          method foo n = Printf.printf "%d\n" n 
                                                        end ;;
                                                        val o' : < foo : int -> unit > = <obj>
                                                        
                                                        # f o' ;;
                                                        Line 1, characters 2-4:
                                                        Error: This expression has type < foo : int -> unit >
                                                               but an expression was expected of type < foo : unit -> 'a; .. >
                                                               Types for method foo are incompatible
                                                        
                                                        1. 17

                                                          I regularly recommend the OCaml object system to people. Not only is it decently nice (but underused because other parts of the language are more popular and cover most of what you might want) but it also challenges your ideas of what “OO” is in very credible ways.

                                                          Classes are clearly separated from object types and interfaces. Inheritance clearly relates to classes as opposed to objects. Structural typing, which exists throughout the OCaml universe, makes clear what subtyping is offering (e.g., a more restricted sort of subsumption than structural classes will give you).

                                                          It’s just an alternate-universe object system which is entirely cogent, replicates most of the features you might expect, but does so with a sufficiently different spin that you can clearly see how the different pieces interact. Highly recommended!

                                                          1. 6

                                                            OCaml’s object system reflects what a type theorist sees in an object-oriented language. You know, the kind of people who write things like TaPL’s three chapters on objects. What these people see is static structure, which, from their point of view, is fundamentally flawed, e.g., “it is wrong to make either Circle or Ellipse a subtype of the other one” and “inheritance does not entail subtyping in the presence of binary methods”.

                                                            However, what a type theorist sees does not always match what a programmer sees. A conversation between a type theorist and a programmer could go this way:

                                                            • Type theorist: Here, look! I have finally devised a way to adequately describe object-oriented languages!
                                                            • Programmer: By “adequately describe”, do you mean “give types to”?
                                                            • Type theorist: Of course. Sound types describe the logical structure of programs, in particular, your object system.
                                                            • Programmer: I do not see how or why you can make such a claim. The object systems that I use are dynamic and programmable, so that I can adapt the system to my needs, instead of getting stuck at a single point in the language design space. Are you saying that your type system changes the type of every object every time I reconfigure the system?
                                                            • Type theorist: No. I just describe the parts that will not change, no matter how you reconfigure the system.
                                                            • Programmer: Then you are not describing much that is useful, I am afraid.
                                                            • Type theorist: By the way, I have concluded that your system has the grand total of exactly one type.
                                                            • Programmer: sigh

                                                            And another conversation could go this way:

                                                            • Programmer: (to a newbie) The internal state of an object is hidden from the rest of the program. Objects communicate by sending messages to each other.
                                                            • Newbie: What is the point to this?
                                                            • Programmer: This helps you compartmentalize the complexity of large software. Each object manages a relatively small part of the program’s state. Small enough to understand.
                                                            • Newbie: Ah, okay. I can see why you would want to work this way.
                                                            • Type theorist: Objection! You claim the state of an object is encapsulated, but your development tools are fundamentally based on the ability to inspect the internal state of every object, even when it is running.
                                                            • Programmer: Right. In an object-oriented system, encapsulation is not enforced by the language, but is rather a property of how the system is designed.
                                                            • Type theorist: So, in other words, encapsulation is a property of how you program the system, rather than of the system itself.
                                                            • Programmer: That is not quite precise. Encapsulation is a property of the system. However, the system is more than just the language it is programmed in.
                                                            1. 8

                                                              So are you saying that the only real-world usable object systems are dynamically-typed ones? One of the most respected OOP teachers and language designers (Bertrand Meyer, of Eiffel and Design by Contract fame), disagrees with you on that one: https://en.wikipedia.org/wiki/Eiffel_(programming_language)

                                                              1. 3

                                                                In retrospect, my comment was much harsher than I wanted it to be. What I really want to see is a CLOS-like object system, embedded in a static language, preferably ML.

                                                                In practice, the most popular object systems are not fully static. Java and .NET rely on runtime class metadata for lots of things. And, even if they didn’t, I doubt their designers would trust their own designs to be safe enough to remove all the runtime type checking.

                                                                Eiffel would be a much better argument for your position if it were actually type safe. Pony is, to the best of my knowledge, the most widely used fully type safe object-oriented language.

                                                                1. 2

                                                                  Hmm, Ada certainly takes a lot of inspiration from Eiffel. And I’d say Ada is probably more popular than Pony. Actually I’m not even sure if Pony is ‘OOP’ in the sense that the other mainstream languages are. AFAIK it is actor-based.

                                                                  1. 2

                                                                    Oops, you are right there.

                                                                    1. 2

                                                                      At least according to Wikipedia, Ada precedes Eiffel in time.

                                                                      Certainly one of the few things I know about Ada is that it supports programming by contract.

                                                                      1. 3

                                                                        But I think Ada’s contract system is inspired by Eiffel’s though?

                                                                        1. 5

                                                                          Correct, Eiffel is where design-by-contract originated. Ada added support in the 2012 spec.

                                                                          See the comparison here, where it refers to “preconditions and postconditions” (since “design-by-contract” is now a trademark owned by Eiffel Software).

                                                                          1. 1

                                                                            Thanks for clearing this up for me!

                                                                2. 4

                                                                  I feel like you’re arguing a lot for dynamic types as opposed to static types. That’s totally fine. I’m speaking pretty much entirely to the static components and trying to talk about something pretty different.

                                                                  There’s a different discussion, not really related at all to OO, about how the things you’re discussing here can be interpreted via types. I’m personally of the belief that most programmatic reasoning could be modeled statically, though perhaps not in any system popularly available. To that end, speaking using types can still be important.

                                                                  Which, to the point here, we can model a lot of these dynamic interactions using types at least temporarily. And there remains a big difference between the way that “factories for creating objects” can inherit from one another and the way that “one object serves as a suitable stand-in for another” is a different, if sometimes related, concept.

                                                                  1. 1

                                                                    I feel like you’re arguing a lot for dynamic types as opposed to static types.

                                                                    More precisely, I’m arguing for the use of dynamism in object systems, even (or perhaps especially!) if they are embedded in languages with static types. Multiple static types are already useful, but they would be even more useful if the dynamic language’s unitype were one of them, in a practical way.

                                                                    Not everything needs to be object-oriented or dynamically typed, though. For example, basic data structures and algorithms are best done in the modular style that ML encourages.

                                                                    1. 2

                                                                      That’s a reasonable thing to ask for. I think many statically typed languages do offer dynamic types as well (in Haskell, for instance, there’s Dynamic, which is exactly the unitype you’re looking for). Unfortunately, wherever these systems exist they tend to be relegated to “tricking the type system into doing what you want” more than playing a lead role.

                                                                      Personally, this makes sense to me, since when 80% of my work is statically typed, I tend to want to extend the benefits I receive there more than introduce new benefits from dynamism.

                                                                      I’d be interested to see if there were some good examples of how to encode an interesting dynamically typed idiom into something like Haskell’s Dynamic.

                                                                      1. 1

                                                                        Haskell’s Dynamic would not work very well in ML. Usually, the runtime system can only determine the concrete type of a given value. But the same concrete type could be the underlying representation of several different abstract types! Haskell gets away with having Dynamic, because it has newtype instead of real abstract data types.

                                                                        The unitype embedded in a multityped language should only be inhabited by dynamic objects (whose methods are queried at runtime), not by ordinary values that we expect to be mathematically well-behaved, such as integers, lists and strings.

                                                                        1. 2

                                                                          I may not fully understand, but that feels doable in both ML and Haskell.

                                                                          In ML, when we work dynamically we’ll be forgetting about all of the abstract types. Those have to be recovered behaviorally through inquiring the underlying object/module/what-have-you.

                                                                          In Haskell, we can have something like newtype Object = Object (Map String Dynamic) possibly with a little more decoration if we’d like to statically distinguish fields and methods. You could consider something like newtype Object = Object { fields :: Map String Dynamic, methods :: Map String (Object -> Dynamic) } for a slightly more demanding object representation.
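                                                                          A compilable version of the simpler variant, for concreteness (Object, getField, and mkPoint are hypothetical names for this sketch, not a standard library API):

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.Typeable (Typeable)
import Data.Map (Map)
import qualified Data.Map as Map

-- An "object" is just a bag of named fields holding values of
-- arbitrary Typeable types.
newtype Object = Object (Map String Dynamic)

-- Recover a field at a requested static type; Nothing on a missing
-- field or a runtime type mismatch.
getField :: Typeable a => String -> Object -> Maybe a
getField k (Object m) = Map.lookup k m >>= fromDynamic

-- A sample constructor playing the role of a class.
mkPoint :: Int -> Int -> Object
mkPoint x y = Object (Map.fromList [("x", toDyn x), ("y", toDyn y)])
```

                                                                          The static guarantees stop at the Object boundary: getField "x" p :: Maybe Int succeeds while getField "x" p :: Maybe String is Nothing, and both failure modes are purely runtime phenomena.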

                                                                          1. 1

                                                                            In ML, when we work dynamically we’ll be forgetting about all of the abstract types. Those have to be recovered behaviorally through inquiring the underlying object/module/what-have-you.

                                                                            This is precisely what I don’t want. Why do I need to give up my existing abstract data types to work dynamically? I have worked very hard to prove that certain bad things cannot happen when you use my abstract data types. These theorems suddenly no longer hold if the universe of dynamic values encompasses the internal representations of these abstract data types.

                                                                            Instead, what I’m saying is: Have a separate universe of dynamic objects, and only then embed it into the typed language as a single type. If you don’t mind the syntactic inconvenience (I do!), this can already be done to some extent, in an ugly way, in Standard ML (but not Haskell or OCaml), using the fact that exception declarations are generative. But this is a hack, and exceptions were never meant to be used this way.

                                                                      2. 1

                                                                        C# 4.0 introduced the dynamic type, and it pretty much works that way.

                                                                        1. 2

                                                                          The whole C# language is dynamic. It has downcasts, the as operator, runtime reflection for arbitrary types, etc.

                                                                          What I want is a language that has both a static part, with ML-like “guaranteed unbreakable” abstractions (so no reflection for them!), and a dynamic part, which is as flexible and reconfigurable as CLOS.

                                                                          1. 1

                                                                            Having a function or operator that consumes an “obj” doesn’t make an entire language “dynamic”; by this definition literally every language is “dynamic”. I don’t love C#, but it’s important to keep the discussion honest.

                                                                            If you’re looking for one that does dynamics “as good as CLOS” while also doing static types you won’t ever be happy. It’s like saying I want something that is completely rigid, but also extremely flexible. If you have access to both extremes in one environment, your types will become a dead weight and your flexibility will be useless. If you permit a compromise then you can get what you want, but rightly you don’t want a compromise.

                                                                            If you start with the solution instead of describing your actual needs you’re not going to find what you desire. What I found is that I value being able to express a wide variety of constructs statically, and with a degree of flexibility in my consumption of types. We can get close to how that might feel in F# through sum types, bidirectional type inference, operator overloading, and statically resolved type parameters. This may or may not fit your needs, but it made me happy.

                                                                            1. 1

                                                                              Having a function or operator that consumes an “obj” doesn’t make an entire language “dynamic”; by this definition literally every language is “dynamic”.

                                                                              I can guarantee you, Standard ML is not.

                                                                              If you’re looking for one that does dynamics “as good as CLOS” while also doing static types you won’t ever be happy. It’s like saying I want something that is completely rigid, but also extremely flexible.

                                                                              This is a mischaracterization of what I said. Re-read the technical specifics: the dynamic universe should start completely separate from the static one, and only then should we inject the former into the latter as a single type. In other words, the dynamic universe does not need to include literally every value in the static universe. It only needs to contain object references. A value whose static type is dynamic could be a file, a window, a game entity, but not an integer, a string or a list - not even a list of dynamic objects!

                                                                              If you start with the solution instead of describing your actual needs you’re not going to find what you desire.

                                                                              I think I stated my needs in a pretty clear way. I want (0) static stuff to be static, (1) dynamic stuff to be dynamic, (2) static stuff not to compromise the flexibility of dynamic stuff, (3) dynamic stuff not to compromise the proven guarantees of static stuff. The way C# injects literally everything into the dynamic universe compromises the safety of static abstractions. Sadly, this is not possible to work around, except by hiding the entire .NET object system, which defeats the point of targeting .NET as a platform. Java suffers from similar issues.

                                                                    2. 2

                                                                      I am just a programmer, and know (very) little of type theory. I’m sorry, but I didn’t really understand your point(s?). Could you explain it in a way that doesn’t assume I understand the perspective of both parties (or just ELI5 / ELI-only-know-JS)?

                                                                      1. 7

                                                                        In the first conversation, the type theorist begins by being proud that he came up with a sound static type system for an object-oriented language. (Basically, OCaml, whose object system is remarkable in that it doesn’t need any runtime safety checks.) Presumably, before this, all other object-oriented languages were statically unsafe, requiring either dynamic checks to restore safety, or simply leaving open the possibility of memory corruption.

                                                                        The programmer’s reply is the same point that Kiczales et al make in the introduction of The Art of the Metaobject Protocol. A completely fixed object system (e.g., those of OCaml, Eiffel and C++) cannot possibly satisfy the needs of every problem domain. So he uses object systems that allow some reconfiguration by the programmer (e.g., CLOS), to adapt the object system to the problem domain, rather than the other way around. Hence, by design, these object systems offer few static guarantees beyond memory safety.

                                                                        In the second conversation, the programmer talks about the benefits of encapsulating state in an object-oriented setting. The type theorist retorts that many object-oriented languages don’t really do any sort of encapsulation, because you can just up and inspect the internal state of any object anytime. (By contrast, in a modular language, like ML, you really can’t inspect the internal representation of an abstract data type. The type checker stops you if you try.) The programmer acknowledges that this is indeed how object-oriented languages work (at least the ones he uses), but then says that encapsulation is still a property of the systems he designs, even if it is not a property of the languages he uses to implement these systems.

                                                                        The point of the second conversation (along with other points) is better made in this essay by Richard Gabriel about incommensurability in scientific and engineering research.

                                                                        1. 1

                                                                          Got it, thanks!

                                                                  2. 3

                                                                    How does OCaml’s structural record system compare with row types e.g. in PureScript or Elm or Ur? (Note: Structural (subtyping) is distinct from row types.) Does OCaml avoid the soundness issues of subtyping, e.g. writing an object of two fields into a mutable cell and then reading back an object of one field (subtype), thus accidentally losing a field (but which may or may not be printed if we print the object)? This is something that row types avoid.

                                                                    1. 3

                                                                      What you describe doesn’t seem like a soundness issue. It seems like normal upcasting and virtual dispatch. Btw OCaml doesn’t have structural records, it has structural typing of objects (i.e. OOP) using a row type variable to indicate structural subtypes of an object type. E.g. here’s a general description of entity subtypes:

                                                                      type 'a entity_subtype = < id : string; .. > as 'a
                                                                      

                                                                      And here’s a specific entity subtype:

                                                                      type person = < id : string; name : string > entity_subtype
                                                                      

                                                                      The .. above is a row variable: https://v1.realworldocaml.org/v1/en/html/objects.html#idm181614624240

                                                                      1. 3

                                                                        Thanks. My best effort to parse your post yields the following observations:

                                                                        type 'a entity_subtype = < id : string; .. > as 'a
                                                                        

                                                                        is OCaml’s way to write the equivalent of

                                                                        type EntitySubtype r = { id :: String | r }
                                                                        

                                                                        in PureScript. The r is a type variable referring to an unspecified record. Then

                                                                        type Person = EntitySubtype { name :: String }
                                                                        

                                                                        which, by replacing the r with { name :: String }, yields { id :: String, name :: String }.

                                                                        So, based on the documentation, OCaml has normal row types for its record system. It’s sometimes hard to parse OCaml discussion due to the “OO” syncretism.

                                                                        1. 2

                                                                          Right! PureScript record row types, and merging them together, has a really nice syntax. OCaml’s is comparatively more primitive and explicit. It causes a little frustration when modelling hierarchies of JavaScript types in the BuckleScript/ReasonML community. We usually recommend using modules and module types as much as possible though. They’re also super powerful, and target JavaScript classes quite well by using some BuckleScript binding constructs.

                                                                  1. 3

                                                                    Presentation and expectations about this aside, it teaches a really important technique that took me way too long to pick up in mathematics: guess at what the solution looks like and work backwards.

                                                                    It’s based on knowing the fundamental theorem of algebra to come up with the (x - A) (x - B) formulation, but you can suspect that will be the shape of solutions to problems like this long before you get around to proving the fundamental theorem.

                                                                    1. 2

                                                                      You don’t even need to assume the FTA – you go through the derivation with the assumption that it’s only valid if the quadratic does have two roots, but once you have the expressions for the roots, you can substitute them back into any quadratic to show that they’re indeed both roots.

                                                                      Alternatively, you can show that any (real or complex) quadratic has a complex root, which is much simpler than the full FTA. You have to be careful to avoid a circular argument, though, since the quadratic formula already implies this!
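
                                                                      Written out for the monic case (a = 1), the “guess the shape and work backwards” move is just matching coefficients:

```latex
% Guess the shape of the solution, then expand and match coefficients:
x^2 + bx + c = (x - A)(x - B) = x^2 - (A + B)x + AB
% so the roots must satisfy Vieta's relations
A + B = -b, \qquad AB = c
% and solving this pair of equations recovers the quadratic formula:
A, B = \frac{-b \pm \sqrt{b^2 - 4c}}{2}
```

                                                                      Substituting either root back into the quadratic confirms it really is a root, which is the non-circular check mentioned above.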

                                                                    1. 9

                                                                      This is sometimes called unfold as it “unfolds” a stream of values from a seed.

                                                                      Interestingly, you can unfold many kinds of structures, not just streams. A stream occurs when a seed value turns into the next seed. A list occurs when the seed turns into either 1 or 0 new seeds. A (potentially infinite) binary tree occurs when a seed value turns into two subsequent seeds. More than this, we can return data structures containing new seeds and this will result in a unique kind of “stream” composed of “layers” of these data structures.
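
                                                                      A sketch of the idea in Python (the name unfold and its signature are my own choices, not from the Ruby discussion): the step function either produces a value plus the next seed, or stops.

```python
from itertools import islice
from typing import Callable, Iterator, Optional, Tuple, TypeVar

S = TypeVar("S")
A = TypeVar("A")

def unfold(step: Callable[[S], Optional[Tuple[A, S]]], seed: S) -> Iterator[A]:
    """Unfold a stream of values from a seed.

    `step` returns (value, next_seed) to continue, or None to stop.
    Returning None gives the "0 or 1 new seeds" list case; never
    returning None gives an infinite stream.
    """
    while (result := step(seed)) is not None:
        value, seed = result
        yield value

# List case: count down from the seed, stopping at zero.
countdown = list(unfold(lambda n: (n, n - 1) if n > 0 else None, 5))
# countdown == [5, 4, 3, 2, 1]

# Stream case: an infinite stream of powers of two, truncated with islice.
powers = list(islice(unfold(lambda n: (n, n * 2), 1), 5))
# powers == [1, 2, 4, 8, 16]
```

                                                                      The tree cases follow the same pattern: have `step` return a structure containing several new seeds instead of exactly one.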

                                                                      1. 7

                                                                        Unfold is mentioned in the bug-tracker discussion, but they sought a different name as Ruby does not use the term “fold” either.

                                                                        It’s nice to see more functional concepts emerge in established languages, so I thought I’d share that here. :)

                                                                        1. 3

                                                                          Good call! I think it’s great for Ruby to adopt Rubyish names. I didn’t want to suggest that the implementers weren’t aware of the theory, just to share the connection.

                                                                      1. 2

                                                                        I don’t understand arguing against the notion that grouping functionality together according to abstractions that are understandable to human readers makes sense. I don’t particularly care about the syntax for doing this in a particular language but OO is the neatest way I’ve seen, in my limited experience.

                                                                        Could other people show me some examples of how the same type of abstraction and organising is done in non-OO languages? Is the argument that you can achieve abstraction and code organisation well without needing OO to be a rigid part of your language, and actually you gain flexibility without it?

                                                                        This notion was <mindblown.gif> for me:

                                                                        But, a method is nothing more than a function that takes the object as a (hidden) parameter. Everything is syntactic sugar.
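
                                                                        Python makes that sugar visible: a method call is literally the class’s function applied to the instance as its first argument. A toy illustration (class and names are mine):

```python
class Counter:
    def __init__(self, start: int) -> None:
        self.value = start

    def add(self, n: int) -> int:
        self.value += n
        return self.value

c = Counter(10)

# The method-call sugar...
a = c.add(5)            # a == 15

# ...desugars to a plain function call with the object
# passed explicitly as the first parameter.
b = Counter.add(c, 5)   # b == 20
```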

                                                                        1. 9

                                                                          Chapter 1 of “The Joy of Clojure” does a pretty good job of explaining why “you can achieve abstraction and code organisation well without needing OO to be a rigid part of your language, and actually you gain flexibility without it”.

                                                                          Can be read for free here, and doesn’t assume any prior knowledge of Clojure, nor does it really focus on teaching Clojure. It mainly lays the philosophical foundation for why a language organized around functions and immutable state can be just as powerful, if not more powerful, than a language organized around classes, methods, and object-bound state.

                                                                          https://livebook.manning.com/book/the-joy-of-clojure-second-edition/chapter-1/

                                                                          What’s also nice about Clojure is that it has tended to decompose every “unique” OO concept from Java into an equivalent (often bytecode-equivalent!), and more flexible, Clojure concept, and they can be adopted optionally and a-la-carte as needed. For example, interfaces become “protocols”, POJOs become “records”, method dispatch becomes “multimethods”.

                                                                          Even what you described as a “mind-blowing concept” is there: in Clojure, any Java object can be invoked using a functional syntax, for example obj.method(arg) literally becomes (.method obj arg), with no loss in generality, but a gain in composability!

                                                                          Less far afield, in Python, code is organized using modules, not classes, as the primary mechanism. As I wrote in my Python style guide:

                                                                          You should usually prefer functions to classes. Functions and modules are the basic units of code re-use in Python, and they are the most flexible form. Classes are an “upgrade path” for certain Python facilities, such as implementing containers, proxies, descriptors, type systems, and more. But usually, functions are a better option. Some might like the code organization benefits of grouping related functions together into classes. But this is a mistake. You should group related functions together into modules.

                                                                          1. 7

                                                                            I think ML modules are strictly better at this for three reasons: (1) they allow encapsulation over multiple types at once, (2) they allow definitions of values in unbounded ways as opposed to always being tied to one type/object, and (3) they have a specialized language for information sharing which allows for richer composition.

                                                                            To just give a brief idea of this, consider a key/value database interface (and it’s been a while, so I’m just going to pseudocode this ML)

                                                                            module type KV_DATABASE = sig
                                                                              type connection_info
                                                                              type db
                                                                              type key
                                                                              type value
                                                                              
                                                                              val connect : connection_info -> db
                                                                              val close : db -> unit
                                                                              val store : value * db -> key
                                                                              val fetch : key * db -> value option
                                                                            end
                                                                            

                                                                            This interface richly describes how 4 types interact. Particular instantiations can make certain parts public and certain parts private: for instance, you might want to publicize connection_info and value, but hide key and db. Using the information sharing language, you can build bigger systems which share private information such as the true nature of the db type and work together at once.

                                                                            Finally, Scala is a good playground here because it’s clear that Odersky was heavily inspired by ML modules (some of his earliest Scala examples clearly use Scala’s advanced type member features to attempt to achieve this sort of ML-style modularity while pretending to be compatible with Java-style OO). At the same time, you can clearly see how this style runs contrary to the Java-like OO style also available in that language. Ultimately, most people go with Java-style OO as it’s more familiar and works better with the majority of the ecosystem and thus ML-style modularity is quite rare in practice.

                                                                            The key distinguishing characteristic is: “does your object represent the type of interest itself, or does it represent the API?”. In other words, are you writing class Database or are you writing class KVDatabaseApi?

                                                                            1. 4

                                                                              Tel hit the nail on the head. The ‘notion [of] grouping functionality together according to abstractions that are understandable to human readers’ is really modularity. Object-orientation is just one way of achieving that, and from our best understanding today the fact that it’s the most popular way is mostly an accident of history (see Richard Feldman’s excellent talk on this, https://youtu.be/QyJZzq0v7Z4 ).

                                                                              Modularity predates OOP: see e.g. Parnas’ seminal paper ( https://blog.acolyer.org/2016/09/05/on-the-criteria-to-be-used-in-decomposing-systems-into-modules/ ). And of course one of the big breakthroughs in abstraction was the notion of ‘abstract data types’, discovered by Liskov & Zilles: https://blog.acolyer.org/2016/10/20/programming-with-abstract-data-types/

                                                                            1. 2

                                                                              This is a great example of the super interesting concept of infectiousness. We tend, as software designers, to seek modularity in designs so that changes to one part do not propagate at all in the best case, or propagate only to that part’s callers in the worst case.

                                                                              Some designs and systems allow for this sort of property, but as described here it’s possible for this property to be lost in complex and wildly pervasive ways!

                                                                              1. 18

                                                                                Further to this point. Strive to design your data structures so that ideally there is only one way to represent each value. That means for example NOT storing your datetimes as strings. This will imply that your parsing step also has a normalization step. In fact storing anything important as a string is a code smell.
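
                                                                                A minimal sketch of “parsing implies normalization” in Python: the two strings below are different representations of the same instant, but parsing them into datetime values leaves only one way to represent it downstream.

```python
from datetime import datetime, timezone

def parse_timestamp(raw: str) -> datetime:
    """Parse an ISO-8601 string and normalize it to UTC.

    "2021-01-01T12:00:00+01:00" and "2021-01-01T11:00:00+00:00" denote
    the same instant; after parsing and normalizing they become one
    canonical value instead of two unequal strings.
    """
    return datetime.fromisoformat(raw).astimezone(timezone.utc)

t1 = parse_timestamp("2021-01-01T12:00:00+01:00")
t2 = parse_timestamp("2021-01-01T11:00:00+00:00")
assert t1 == t2  # compared as strings, these would look different
```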

                                                                                1. 9

                                                                                  A person’s name should be stored as a string. City names. Stock symbols. Lots of things are best stored as strings.

                                                                                  1. 3

                                                                                    Names of things are best stored as strings, yes.

                                                                                    1. 1

                                                                                      What I recommend though is to encode them in their own named string types, to prevent using the strings in ways that they are not meant to be used. We often use that to encode ID references to things which should be mostly “opaque BLOBs”.

                                                                                    2. 3

                                                                                      Should stock symbols contain emojis or newlines, or be 100 characters long? Probably not. I assume there are standards for what a stock symbol can be. If you have a StockSymbol type constructed by a parser that disallows these things, you can catch errors earlier. Of course then the question is what do you do when the validation fails, but it does force a decision about what to do when you get garbage, and once you have a StockSymbol you can render it in the UI with confidence that it will fit.
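
                                                                                      A hedged sketch of that StockSymbol idea in Python — the “1 to 5 uppercase letters” rule is an illustrative assumption, not a real exchange standard:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class StockSymbol:
    """A validated ticker symbol; construct only via `parse`."""
    text: str

    @classmethod
    def parse(cls, raw: str) -> "StockSymbol":
        # Illustrative rule: 1-5 uppercase ASCII letters, nothing else --
        # no emojis, no newlines, no 100-character strings.
        if not re.fullmatch(r"[A-Z]{1,5}", raw):
            raise ValueError(f"not a valid stock symbol: {raw!r}")
        return cls(raw)

aapl = StockSymbol.parse("AAPL")   # ok

try:
    StockSymbol.parse("💥\nnope")  # garbage is rejected at the boundary
except ValueError:
    rejected = True
```

                                                                                      Once a StockSymbol exists, every later consumer can rely on the rule without re-checking it.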

                                                                                    3. 3

                                                                                      Interesting. If you don’t mind, I’d like to poke at that a bit.

                                                                                      Why should you care what anything is stored as? In fact, why should you expect the rest of the universe, including the persistence mechanism, to maintain anything at all about your particular type system for your application?

                                                                                      1. 8

                                                                                        They can maintain it in their own type system. The issue is information loss. A string can contain nearly anything and thus I know nearly nothing about it and must bear a heavy burden learning (parsing or validating). A datetime object can contain many fewer things and thus I know quite a lot about it, lowering the learning burden.

                                                                                        You can also build and maintain “tight” connections between systems where information is not lost. This requires owning and controlling both ends. But generally these tight connections are hard to maintain because you need some system which validates the logic of the connection and lives “above” each system being connected.

                                                                                        Some people use a typed language with code generation, for instance.

                                                                                        1. 3

                                                                                          @zxtx told the reader trying to leverage type systems to design their own program a certain way that derives more benefit from type systems. The rest of the universe can still do their own thing.

                                                                                          I’m not sure the string recommendation is correct. There are several languages that are heavily based on strings powering all kinds of things out there successfully. I’ve also seen formally-verified implementations of string functionality. zxtx’s advice does sound like a good default.

                                                                                          We probably should have, in addition to it, verified libraries for strings and common conversions. Then, contracts and/or types to ensure calling code uses them correctly. Then, developers can use either option safely.

                                                                                          1. 3

                                                                                            Sure some things inevitably have to be strings: personal names, addresses, song titles. But if you are doing part-of-speech tagging or word tokenization, an enumerative type is a way better choice than string. As a fairly active awk user I definitely sympathize with the power of string-y languages, but I think people new to typed languages overuse rather than underuse strings.

                                                                                            1. 2

                                                                                              Unfortunately, even folks who have used typed languages for years (or decades) still overuse strings. I’m guilty of this.

                                                                                          2. 3

                                                                                            I admit to going back and forth on this subject….

                                                                                            As soon as you store a person name as a PersonName object…… it’s no longer a POD and you’re constricted to a tiny tiny subset of operations on it…. (With the usual backdoor of providing a toString method)

                                                                                            On the other hand there’s Bjarne Stroustrup’s assertion that if you have a class invariant to enforce… that’s the job of an object / type.

                                                                                            Rich Hickey, the Clojure guy, has an interesting talk exactly on this subject with a different take….

                                                                                            Instead of hiding the data in a type with an utter poverty of operators, leave everything as a POD of complex structure which can be validated, spec-checked, and asserted on using a Clojure spec.

                                                                                            ie. If you want something with a specific shape, you have the spec to rely on, if you want to treat it as ye olde list or array of string….. go ahead.

                                                                                            1. 12

                                                                                              I stuck to simple examples of the technique in my blog post to be as accessible as possible and to communicate the ideas in the purest possible way, but there are many slightly more advanced techniques that allow you to do the kind of thing you’re describing, but with static (rather than dynamic) guarantees. For some examples, I’d highly recommend taking a look at the Ghosts of Departed Proofs paper cited in the conclusion, since it addresses many of your concerns.

                                                                                              1. 1

                                                                                                Ok. That took me awhile to digest…. but was worth it. Thanks.

                                                                                                For C++/D speakers it’s worth looking at this first to get the idea of phantom types…

                                                                                                https://blog.demofox.org/2015/02/05/getting-strongly-typed-typedefs-using-phantom-types/

                                                                                              2. 5

                                                                                                As someone who worked professionally with both Clojure (before spec, but with Prismatic Schema) and OCaml, I have to say I utterly prefer to encode invariants in a custom type with only a few operations instead of the Clojure way of having everything in a hashmap with some kind of structure (hopefully) and lots of operations which operate on them.

                                                                                                My main issue writing Clojure was that I did apply some of these (really useful and versatile) functions on my data, but the data didn’t really match what I had expected so the results were somewhat surprising in edge cases and I had to spend a lot of brain time to figure out what was wrong and how and where that wrong data came to be.

                                                                                                In OCaml I rarely have the problem and if I want to use common functions, I can base my data structures on existing data structures that provide the functions I want to over the types I need, so in practice not being able to use e.g. merge-with on any two pieces of data is not that painful. For some boilerplate, deriving provides an acceptable compromise between verbosity and safety.

                                                                                                I can in theory do a similar thing in Clojure as well, but then I would need to add validation basically everywhere which makes everything rather verbose.

                                                                                                1. 3

                                                                                                  I’ve used Clojure for 8 years or so, and have recently been very happy with Kotlin, which supports sealed types that you can case-match on, and with very little boilerplate—but also embraces immutability, like Clojure.

                                                                                                  With Clojure, I really miss static analysis, and it’s a tough tradeoff with the lovely parts (such as the extremely short development cycle time.)

                                                                                                2. 3

                                                                                                  The ability to “taint” existing types is the answer we need for this. Not a decorator / facade sort of thing, just a taint/blessing that exists only within the type system, with a specific gatekeeper being where the validation is done and the taint removed/blessing applied.
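
                                                                                                  One lightweight approximation of this in Python is typing.NewType: the “blessing” exists only in the type checker, the gatekeeper function is the one place validation happens, and at runtime the value is still a plain str with all its operations. (SafeHtml, escape, and render are hypothetical names for illustration.)

```python
from typing import NewType

# The blessing exists only in the type system: at runtime a SafeHtml
# is still an ordinary str, so there is zero wrapper overhead.
SafeHtml = NewType("SafeHtml", str)

def escape(raw: str) -> SafeHtml:
    """The single gatekeeper where the taint is removed."""
    cleaned = (raw.replace("&", "&amp;")
                  .replace("<", "&lt;")
                  .replace(">", "&gt;"))
    return SafeHtml(cleaned)

def render(fragment: SafeHtml) -> str:
    # A type checker (mypy, pyright) rejects render("<raw string>") here;
    # only values that went through `escape` are accepted.
    return f"<div>{fragment}</div>"

page = render(escape("<script>alert(1)</script>"))
```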

                                                                                                  1. 3

                                                                                                    In Go, wrapping a string in a new type is zero-overhead, and you can cast it back easily. So it’s mostly just a speedbump to make sure that if you do something unsafe, you’re doing it on purpose and it will be seen in code review. If the type doesn’t have very many operators, you might have more type casts that need to be checked for safety, but it’s usually pretty easy to add a method.

                                                                                                    On the other hand, the Go designers decided not to validate the string type, instead accepting arbitrary binary data with it only being convention that it’s usually UTF8. This bothers some people. But where it’s important, you could still do Unicode validation and create a new type if you want, and at that point there’s probably other validation you should be doing too.

                                                                                                    1. 1

                                                                                                      The last one is the best.

                                                                                                      Instead of scaling out code, we should be scaling out tests. We’re doing it backwards.

                                                                                                      I’ve been meaning to put together a conference proposal on this but haven’t gotten around to it. It’s the kind of thing that blows people’s minds.

                                                                                                      1. 1

                                                                                                        Can you expand a little on this? Sounds interesting.

                                                                                                        1. 4

                                                                                                          People don’t understand what tests do. If you ask them, they might say they help your code be less buggy, or they show your business customers that your program does what they’re paying for.

                                                                                                          That’s all true, but horribly incomplete. Tests resolve language.

                                                                                                          That is, whether it’s science, programming, running a business, or any of hundreds of other areas where human language intersects science, tests are the only tools for determining what’s true or not in unambiguous terms. Come up with some super cool new way of making a superconductor? Great! Let’s have somebody go out and make it on their own, perform a test. If the test passes, you’re on to something. Yay! If the test fails? Either you’re mistaken or the language and terms you’re using to describe your new process have holes the reproducer was unable to resolve. Either way, that’s important information. It’s also information you wouldn’t have gained without a test.

                                                                                                          In coding, as I mentioned above, we have two levels of tests. The unit level, which asks “Is this code working the way I expected it to?” and the acceptance level, which asks “Is the program overall performing as it should?” (I understand the testing pyramid, I am simplifying for purposes of making a terse point). But there are all sorts of other activities we do in which the tests are not visible. Once the app is deployed, does it make a profit? Is your team working the best way it can? Are you building this app the best way you should? Are you wasting time on non-critical activities? Will this work with other, unknown apps in the future? And so on.

                                                                                                          We’ve quantified some of this with things like integration testing (which only works with existing apps). Frankly, we’ve made up other stuff out of whole cloth, just so we can have a test, something to measure. In almost all cases, when we make stuff up we end up actually increasing friction and decreasing productivity, just the opposite of what we want.

                                                                                                          So how do we know if we’re doing the best job we can? Only through tests, whether hidden or visible. How are we doing at creating tests? I’d argue pretty sucky. How can we do tests better? More to the point, if we do tests correctly, doesn’t it make whatever language, platform, or technology we use a secondary effect as opposed to a primary one? We spend so much time and effort talking about tools in this biz when nobody can agree on whether we’re doing the work right. I submit that this happens because we’re focusing far, far too much on our reactions to the problems rather than the problems themselves. If we can create and deploy tests in a comprehensive and tech-independent manner, we can then truly begin discussing how to take this work to the next level. Either that or we’re going to spend the next 50 years talking about various versions of hammers instead of how to build safe, affordable, and desirable houses, which is what we should be doing.

                                                                                                          There’s a lot missing in my reply, but once we accept that our test game sucks? Then a larger and better conversation can happen.

                                                                                                          1. 1

                                                                                                            It will take me some time to digest this properly… it’s a completely different angle to which I usually approach the matter. (I’m not saying you’re wrong, I’m just saying you coming at it from such a different angle I’m going to have to step back and contemplate.)

                                                                                                            To understand where I’m coming from let me add…

                                                                                                            I regard tests as a lazy pragmatic “good enough” alternative to program proving.

                                                                                                            If we were excellent mathematicians, we would prove our programs were correct exactly the way mathematicians prove theorems.

                                                                                                            Except we have a massive shortage of that grade of mathematicians, so what can we do?

                                                                                                            Design by Contract and testing.

                                                                                                            DbC takes the raw concepts of program proving (preconditions, postconditions, and invariants) and then we use the tests to set up the preconditions.

                                                                                                            Writing complete accurate postconditions is hard, about as hard as writing the software, so we have a “useful subset” of postconditions for particular instance of the inputs.

                                                                                                            Crude, very crude, but fairly effective in practice.
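                                                                                                            The DbC shape described above can be sketched with plain assertions: the pre-condition narrows the inputs we accept, and the post-condition is a checkable (if partial) specification. A minimal Python sketch; the function and its contract are hypothetical, just for illustration:

```python
import math

def integer_sqrt(n: int) -> int:
    """Integer square root, with Design-by-Contract-style checks."""
    # Pre-condition: callers must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "pre-condition violated: n must be a non-negative int"

    r = math.isqrt(n)

    # Post-condition: r is the largest integer whose square fits under n.
    # This is the "useful subset" idea: checked per input, not proven in general.
    assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
    return r
```

                                                                                                            A test then just sets up inputs that satisfy the pre-condition and lets the post-condition do the checking, which is exactly the division of labor described above.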

                                                                                                            My other view of unit tests is closer to yours…

                                                                                                            They are our executable documentation (proven correct and current) of how to use our software and what it does. So a design principle for tests is that they should be good, readable, understandable documentation.

                                                                                                            Now I will shut up and contemplate for a day or two.

                                                                                                            1. 1

                                                                                                              We are saying the same thing. Heck we might even be agreeing. We’re just starting from completely opposite sides of the problem. Formally validating a program proves that it matches the specification. In this case the formal specification is the test.

                                                                                                              I think when I mention tests you may be thinking of testing as it was done in the IT industry, either manual or automated. But I mean the term in the generic sense. We all test, all the time.

                                                                                                              What I realized was that you can’t write a line of code without a test. The vast majority of the time that test is in your head. Works for me. You say to yourself “How am I going to do X?”, then you write some code. You look at the code. It appears to do X. Life is good.

                                                                                                              So you never get away from tests. The only real questions are what kinds of tests, where do they live, who creates them, and so forth. I’m not providing any answers to these questions. My point is that once you realize you don’t create structure without some kind of tests somewhere, even if only in your head, you start wondering exactly which tests are being used to create which things.

                                                                                                              My thesis is that if we were as good at creating tests as we are at creating code, the coding wouldn’t matter. Once again, just like I don’t care whether you’re an OCaml person or a JavaScript person, for purposes of this comment I don’t care if your tests are based on a conversation at a local bar or written in stone. That’s not the important part. The thing is that in various situations, all of these things we talk about doing with code, we should be doing with tests. If the tests are going to run anyway, and the tests have to pass for the project to be complete or the problem solved, then it’s far more important to talk about the meaning of a successful completion of the project or a solution to the problem than it is to talk about how to get there.

                                                                                                              Let’s picture two programmers. Both of them have to create the world’s first accounting program. Programmer A sits down with his tool of choice and begins slinging out code. Sure enough, in a short time, voila! People are happy. Programmer B spends the same amount of time creating tests that describe a successful solution to the problem. He has nothing to show for it.

                                                                                                              But now let’s move to the next day. Programmer A is just now beginning to learn about all of the things he missed when he was solving the problem. He missed them for a variety of reasons, many of which involve the fact that we don’t understand something until we attempt to codify it. He begins fixing stuff. Programmer B, on the other hand, does nothing. He can code or he can hire a thousand programmers. The tech details do not matter.

                                                                                                              Programmer B, of course, will learn too, but he will learn by changing his tests. Programmer A will learn inside his own head. From there he has a mental test. He writes code. It is fixed. Hopefully. Programmer A keeps adjusting his internal mental model, then making his code fit the model, until the tests pass, i.e., nobody complains. Programmer B keeps adjusting an external model, doing the same thing.

                                                                                                              Which of these scales when we hire more coders? Which of these are programs the programmer can walk away from? Formal verification shows that the model meets the spec. What I’m talking about is how the spec is created: the human process. That involves managing tests, in your head, on paper, in code, wherever. The point here is that if you do a better, quicker job of firming the language up into a spec, the tech stuff downstream from that becomes less of an issue. In fact, now we can start asking and answering questions about which coding technologies might or might not be good for various chores.

                                                                                                              I probably did a poor job of that. Sorry. There’s a reason various programming technologies are better or worse at various tasks. Without the clarification tests provide, discussions on their relative merits lack a common system of understanding.

                                                                                                          2. 1

                                                                                                            ADD: I’ll add that most all of the conversations we’re having around tech tools are actually conversations we should be having about tests: can they scale, can they run anywhere, can they be deployed in modules, can we easily create and consume stand-alone units, are they easy to use, do they do only what they’re supposed to do and nothing else, are they really needed, is it difficult to make mistakes, and so on. Testing strikes me as being in the same place today as coding was in the early-to-mid 80s when OO first started becoming popular. We’re just now beginning to think about the right questions, but nowhere near coming up with answers.

                                                                                                            1. 1

                                                                                                              Hmm… In some ways we hit “Peak Testing” a few years back, when we had a superb team of manual testers: well trained, excellent processes, excellent documentation.

                                                                                                              If you got a bug report it had all the details you needed to reproduce it: configs, what behaviour was expected, what behaviour was found, everything. You just sat down and started fixing.

                                                                                                              Then test automation became The Big Thing and we hit something of a Nadir in test evolution which we are slowly climbing out of…

                                                                                                              This is how it was in the darkest of days…

                                                                                                              “There’s a bug in your software.”

                                                                                                              Ok, fine, I’ll fix it. How do I reproduce it…..

                                                                                                              “It killed everything on the racks, you’ll have to visit each device and manually rollback.”

                                                                                                              (Shit) Ok, so what is the bug?

                                                                                                              “A test on Jenkins failed.”

                                                                                                              Ok, can I have a link please?

                                                                                                              “Follow from the dashboard”

                                                                                                              What is this test trying to test exactly?

                                                                                                              “Don’t know, somebody sometime ago thought it a good idea”.

                                                                                                              Umm, how do I reproduce this?

                                                                                                              “You need a rack room full of equipment, a couple of cloud servers and several gigabytes of python modules mostly unrelated to anything”.

                                                                                                              I see. Can I have a debug connector to the failing device.

                                                                                                              “No.”

                                                                                                              Oh dear. Anyway, I can’t seem to reproduce it… how often does it occur?

                                                                                                              “Oh we run a random button pusher all weekend and it fails once.”

                                                                                                              Umm, what was it doing when it failed?

                                                                                                              “Here is a several gigabyte log file.”

                                                                                                              Hmm. Wait a bit, if my close reading of these logs is correct, the previous test case killed it, and only the next test case noticed…. I’ve been looking at the wrong test case and logs for days.

                                                                                                      2. 1

                                                                                                        Because throughout your program you will need to do comparisons or equality checks, and if you aren’t normalizing, that normalization needs to happen at every point you do a comparison or equality check. Inevitably, you will forget to do this normalization somewhere, and hard-to-debug errors will get introduced into the codebase.
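                                                                                                        One way to make this concrete is to normalize once, at the boundary, inside the type itself, so no comparison site has to remember to do it. A small Python sketch; the `Email` wrapper is a hypothetical example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    """An email address normalized once, at construction time."""
    value: str

    def __post_init__(self):
        # Normalize here, so every later comparison sees canonical form.
        object.__setattr__(self, "value", self.value.strip().lower())

# Equality checks anywhere in the program now "just work":
a = Email("  Alice@Example.COM ")
b = Email("alice@example.com")
assert a == b
```

                                                                                                        The alternative, calling `.strip().lower()` at every comparison site, is exactly the pattern that fails the one time somebody forgets.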

                                                                                                        1. 1

                                                                                                          Ok. Thank you. I figured out what my hang-up was. You first say “Strive to design your data structures so that ideally there is only one way to represent each value,” which I was completely agreeing with. Then you said “In fact storing anything important as a string is a code smell,” which made me do a WTF. The assumption here is that you have one and only one persistent data structure for any type of data. In a pure functional environment, what I do with a customer list in one situation might be completely different from what I would do with it in another, and I associate any constraints I would put on the type much more with what I want to do with the data than with my internal model of how the data would be used everywhere. I really don’t have a model of how the universe operates with “customer”. I’ve seen too many different customer classes in the same problem domain written in all kinds of ways. What I want is a parsed, strongly-typed customer class right now to do this one thing.

                                                                                                          See JohnCarter’s comment above. It’s a thorny problem and there are many ways of looking at it.

                                                                                                          1. 1

                                                                                                            I think ideally you still do want a single source of truth. If you have multiple data structures storing customer data you have to keep them synced up somehow. But these single sources of data are cumbersome to work with. I think in practice the way this manifests in my code is that I will have multiple data structures for the same data, but total functions between them.
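                                                                                                            The “multiple data structures, total functions between them” idea might look like this in Python (the types and field names are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerRow:
    """Storage-shaped: flat and string-heavy, as it comes from the database."""
    name: str
    email: str

@dataclass(frozen=True)
class MailingEntry:
    """Task-shaped: only what the mailing code needs."""
    email: str

def to_mailing_entry(row: CustomerRow) -> MailingEntry:
    # Total: defined for every CustomerRow, with no partial cases; the two
    # shapes cannot silently drift apart because this function is the sync point.
    return MailingEntry(email=row.email)
```

                                                                                                            Because the conversion is total, “keeping them synced up” reduces to keeping one function correct instead of auditing every use site.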

                                                                                                            1. 2

                                                                                                              Worked with a guy once where we were going to make a domain model for an agency. “No problem!” he said, “They’ve made a master domain model for everything!”

                                                                                                              This was an unmitigated disaster. The reason was that it confused a people process (determining what was valid for various concepts in various contexts) with two technical processes (programming and data storage). All three of these evolved dramatically over time, and even if you could freeze the ideas, any three people probably wouldn’t agree on the answers.

                                                                                                              I’m not saying there shouldn’t be a single source of data. There should be. There should even be a single source of truth. My point is that this single point of truth is the code that evaluates the data to perform some certain action. This is because when you’re coding that action, you’ll have the right people there to answer the questions. Should some of that percolate up into relational models and database constraints? Sure, if you want them to. But then what do you do if you get bad incoming data? Suppose I only get a customer with first name, last name, and email? Most everybody in the org will tell you that it’s invalid. Except for the marketing people. To them all they need is email.

                                                                                                              Now you may say but that’s not really a customer, that’s a marketing lead, and you’d be correct. But once again, you’re making the assumption that you can somehow look over the entire problem space and know everything there is to know. Do the mail marketing guys think of that as a lead? No. How would you know that? It turns out that for anything but a suite of apps you entirely control and a business you own, you’re always wrong. There’s always an impedance mismatch.

                                                                                                              So it is fine however people want to model and code their stuff. Make a single place for data. But the only way to validate any bit of data is when you’re trying to use it for something, so the sole source of truth has to be in the type code that parses the data going into the function that you’re writing; that type, that parsing, and that function are forever joined (by the people and business that get value from that function).

                                                                                                              1. 2

                                                                                                                I suspect we might be talking a bit past each other. To use your example, I might ask what it means to be a customer. It might require purchasing something or having a payment method associated with them.

                                                                                                                I would in this case have a data type for Lead that is only an email address, a unique uuid, and optionally a name. Elsewhere there is code that turns a Lead into a Customer, the idea being not to keep re-running validation logic beyond the point where it is necessary. This might mean having a data type Status = Active | Inactive | Suspended which needs to be pulled from external data regularly. I can imagine hundreds of different data types used for all the different ways you might interact with a customer, with many instances of these data types created right before they are used.
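                                                                                                                A rough Python rendering of that Lead-to-Customer step, with the Status sum type spelled as an enum (all names and conditions here are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"
    SUSPENDED = "suspended"

@dataclass(frozen=True)
class Lead:
    uuid: str
    email: str
    name: Optional[str] = None  # a lead may not have a name yet

@dataclass(frozen=True)
class Customer:
    uuid: str
    email: str
    name: str  # a customer always has a name
    status: Status

def promote(lead: Lead, has_payment_method: bool) -> Optional[Customer]:
    """Turn a Lead into a Customer once the extra facts are known.

    Validation runs exactly here; code holding a Customer afterwards
    never needs to re-check these conditions.
    """
    if lead.name is None or not has_payment_method:
        return None
    return Customer(lead.uuid, lead.email, lead.name, Status.ACTIVE)
```

                                                                                                                The types encode the difference in meaning: code that receives a Customer can rely on the name and payment method existing, while a Lead makes no such promise.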

                                                                                                      3. 1

                                                                                                        Mostly agree, but I’d like to add that the ability to pass along information from one part of the system to another should not necessarily require understanding that information from the middle-man perspective. Often this takes the form of implicit ambient information such as a threadlocal or “context” system that’s implemented in library code, but as a language feature it could be made first-class.
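                                                                                                          The threadlocal/“context” pattern mentioned here is what Python ships in library form as `contextvars`; a tiny sketch, with the request-id example invented for illustration:

```python
import contextvars

# Ambient context: set near the top of the call stack, read near the
# bottom, invisible to every function in between.
request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id", default="-")

def handle_request(rid: str) -> str:
    request_id.set(rid)       # outer layer establishes the context
    return business_logic()   # middle layer passes nothing through

def business_logic() -> str:
    return log("doing work")  # no request-id parameter anywhere

def log(message: str) -> str:
    # Inner layer reads the ambient value without the middle-man's help.
    return f"[{request_id.get()}] {message}"
```

                                                                                                          The middle layer stays oblivious to the information flowing past it, which is the point: only the producer and consumer need to understand it.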