1. 48
    1. 31

      This is as opposed to languages where conformance must be explicitly declared somehow, either with a formal subclass relationship (the class-based portion of C++) or by explicitly declaring conformance to an interface (Java). Not needing to formally declare conformance is powerful because you can take some pre-existing object, declare a new interface that it conforms to, and then use it with your interface.

      There is a third option: things like Rust’s traits, Swift’s protocols or Haskell’s typeclasses, all of which are like a post-hoc version of Java interfaces. You’re effectively advocating for dynamic/structural typing because it addresses the expression problem. That’s not wrong, but there are ways to do it in more statically/nominally typed systems too.
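      For the Rust flavor, here’s a minimal sketch of that post-hoc style (the trait and its names are invented for illustration): you can take a pre-existing type like String and declare conformance to a brand-new trait, with no edit to String itself:

      ```rust
      // Hypothetical trait declared after the fact; `String` never heard of it.
      trait Quacks {
          fn quack(&self) -> String;
      }

      // Post-hoc conformance: no subclassing, no change to String's definition.
      impl Quacks for String {
          fn quack(&self) -> String {
              format!("{} says quack", self)
          }
      }

      fn main() {
          let duck = String::from("donald");
          println!("{}", duck.quack()); // donald says quack
      }
      ```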

      1. 4

        Even Go, which is not noted for its expressive type system, does this.

      2. 1

        I’m not familiar with Rust’s traits or Swift’s protocols. For Haskell’s type classes, if you want to extend a predefined type to conform to a new type class, you would need to newtype it with that type class, which is still inconvenient, as you need to call existing functions with that predefined type under a Monad that wraps the newtype.

        1. 14

          if you want to extend a predefined type to conform to a new type class, you would need to newtype it with that type class

          You do not need to do this at all.

          1. 3

            Seconding this, although if you didn’t define either the type or the typeclass you get into orphan instance territory.

          2. 2

            I stand corrected. Thanks. I didn’t have enough coffee. You need newtype only if you want to further customize the type.

        2. 4

          In Haskell, there are no such limitations, as others mentioned. You can define as many instances as you want, as long as they don’t clash when imported.

          In fact, the limitation you’re describing is that of OOP interfaces! It is them that require writing adapters all the time if the class itself does not implement an interface.

          Rust does have a limitation: instances must be written either alongside the type definition, or alongside the trait definition. Less flexible than Haskell, but still much better than OOP interfaces.
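          To make the Rust limitation concrete, a small sketch of the two allowed positions (the type and trait here are invented for the example):

          ```rust
          use std::fmt;

          // Local type, foreign trait: allowed, because Celsius is defined here.
          struct Celsius(f64);

          impl fmt::Display for Celsius {
              fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                  write!(f, "{}°C", self.0)
              }
          }

          // Local trait, foreign type: also allowed, because Doubled is defined here.
          trait Doubled {
              fn doubled(&self) -> i64;
          }

          impl Doubled for i64 {
              fn doubled(&self) -> i64 {
                  self * 2
              }
          }

          // Not allowed: `impl fmt::Display for Vec<i64>`, since both the trait
          // and the type are foreign. That is the orphan-instance case.

          fn main() {
              println!("{}", Celsius(21.5)); // 21.5°C
              println!("{}", 3_i64.doubled()); // 6
          }
          ```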

    2. 10

      Having spent a lot of time with Python and TypeScript, I agree with the safety argument – even in completely untyped Python-django-megamodel-spaghetti code I never saw a homonym method getting called accidentally with a compatible signature.

      That said, working mostly in Rust now, I think I prefer traits to duck typing for a different reason, which is code organization. I like being able to look at foo.rs and see “here’s the data, here’s the inherent impl with methods that will only be called on this /specific/ type, and here are the trait impls that can be called generically”. Sometimes in, say, Python, looking at a big class it’s unclear (unless it’s __dunder__ ofc) whether a method is part of an existing interface or a bespoke helper for this specific class. It’s not a huge deal, but it helps me get my bearings.

      I’m not familiar with Go so I don’t know if Go’s flavor of duck-typing suffers the same organizational troubles.
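      For what it’s worth, a condensed sketch of that foo.rs layout (Foo and its methods are made up):

      ```rust
      // here's the data
      #[derive(Debug)]
      struct Foo {
          value: u32,
      }

      // here's the inherent impl: methods only ever called on this specific type
      impl Foo {
          fn new(value: u32) -> Self {
              Foo { value }
          }

          fn bump(&mut self) {
              self.value += 1;
          }
      }

      // and here are the trait impls, callable generically
      impl std::fmt::Display for Foo {
          fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
              write!(f, "Foo({})", self.value)
          }
      }

      fn main() {
          let mut foo = Foo::new(41);
          foo.bump();
          println!("{}", foo); // Foo(42)
      }
      ```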

      1. 6

        Duck-typing also makes it more difficult to do automatic refactoring because you can’t tell who implements what, as you mentioned in the Python example. I still love it though.

        1. 5

          You’re mentioning automatic refactoring, but this is just as applicable to manual refactoring. Modifying any API anywhere you want and then fixing the errors that the compiler outputs is very relaxing, as opposed to staring at an integration test that found breakage that happened somewhere in a piece of code not covered with enough unit tests.

      2. 2

        Traits are duck typing. Especially if you use trait objects.

        1. 11

          I don’t think so. You have to impl SomeSpecificTrait for YourType, and to be compatible with a param: impl SomeSpecificTrait it must refer to the exact same trait.

          1. 4

            If it impls WalksLike<T> and QuacksLike<T> where T: Duck it must be a duck.

          2. 4

            I think it would be better to say that traits quack like duck typing. 😜 They mostly achieve the same things. Especially in Rust where it’s easy to use the derive macro to get certain trait methods, and yet more traits have blanket implementations.
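            As a sketch of the derive and blanket-impl point (the Quackable trait is invented here): one derive hands you a trait impl for free, and one blanket impl makes every Debug type conform at once, which does quack a lot like duck typing:

            ```rust
            use std::fmt::Debug;

            // The derive macro writes the Debug impl for us.
            #[derive(Debug)]
            struct Duck {
                name: String,
            }

            // A blanket implementation: a single impl that covers every type
            // implementing Debug, local or foreign.
            trait Quackable {
                fn quack(&self) -> String;
            }

            impl<T: Debug> Quackable for T {
                fn quack(&self) -> String {
                    format!("{:?} quacks", self)
                }
            }

            fn main() {
                let d = Duck { name: "Donald".into() };
                println!("{}", d.quack()); // Duck { name: "Donald" } quacks
                println!("{}", 7_i32.quack()); // 7 quacks
            }
            ```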

    3. 9

      I’ve used a lot of Python and a fair amount of Common Lisp, and I’ve also never seen this problem happen in real life, mostly because data tends to “stay in its lane”, I think. But I do note that the relative safety of duck typing is mostly enforced at runtime (always at runtime for Python, maybe at compile time in CL depending on details), and people who like static languages mostly like to have those constraints enforced at compile time…

      At $DAYJOB, I write a lot of C#, and its interfaces are mostly fine. The problem the author of the article points out is that, when dealing with other people’s code, they might not have declared an interface when they should have, and you have to use a subclass instead, even where an interface might be more appropriate. I run into this a lot. And back when I was writing Java, I often ran into the problem where not only did the library author not declare an interface where they should have, the class was also declared ‘final’ and couldn’t be subclassed. While this is possible in C# as well, I’ve never seen it in the wild.

      1. 4

        But I do note that the relative safety of duck typing is mostly enforced at runtime, and people who like static languages mostly like to have those constraints enforced at compile time…

        Is that really true? Go, Standard ML, OCaml, TypeScript, and probably a lot of other languages I can’t think of perform compile-time checking of structural constraints.

    4. 4

      I agree. Duck typing is indeed safe in the sense that, in practice, you are protected against code silently doing the wrong thing at runtime by accident. The real gains come from whether the type safety is enforced at compile time or at runtime.

      Compile time means the compiler is a really effective robot at telling you if you forgot something. Runtime means that your customer is a really effective person at telling you if you forgot something. I personally prefer the robot but both are safe in the sense used by the article.

      1. 5

        Compile time means the compiler is a really effective robot at telling you if you forgot something. Runtime means that your customer is a really effective person at telling you if you forgot something. I personally prefer the robot but both are safe in the sense used by the article.


        To whatever extent this is true*, it also explains why duck typing is perfect for hackers because we are our own customers ♥

        We don’t need to spend sixteen hours telling the robot in excruciating, painful, inflexible, mindtrapping detail that “so when I say I take an onion from the fridge, I mean a RootVegetable of the Allium variety, don’t worry, I’m not storing any parody newspapers in there” and “oh no I also need to store peanut butter in that same fridge and peanut butter is not a RootVegetable so now I’m down at the zoo with a razor and a template language trying to extend my fridge to also take peanut butter 💔”

        We can be like “take onion from fridge and put it in frying pan great ok thanks” while in an annotated language I’d need to create onion, fridge, and frying pan as types and implement versions of taking and putting specific to those types 😭

        *: The reason it might not be true is because type checking can only find a subset of bugs, not all bugs. So we still need testing and bug reporting and care. And at least some of the time, the thing that the type annotation tells us we messed up is… the type annotation itself.

        1. 2

          Okay, let’s play with this analogy for a bit.

          You say, “take onion from fridge and put in frying pan ok thanks.” That assumes that somewhere an onion is defined, at least in an inventory of produce as an opaque object.

          There’s also a lot elided in there: should the onion be chopped? If I chop it, do I use a chef’s knife, fork, or citrus squeezer? (They all have “break up” capabilities for food items, after all.) Should the pan be hot? Before or after I put the onion in? Is it safe to put a plastic spatula in the same pan?

          Likewise, peanut butter may not be a RootVegetable, but it is something that won’t be harmed by refrigeration. A dog named “Peanut” OTOH, isn’t.

          This seems silly because “we all know what’s safe/advisable in the kitchen” except that’s not really true, even for something this mundane. People cause fires, get sick from incorrectly-handled food, and destroy ingredients and utensils all the time because the implicit knowledge an experienced chef has isn’t well-encoded in most recipes.

          So yes, if hackers == “experienced chefs” you can leave out many of the details. Just don’t hand those instructions to any inexperienced assistant (robotic or otherwise) without a) making most of the above explicit, b) closely supervising their work, or c) both of the above.

          1. 2

            Rather than experienced chefs, I meant people programming primarily for themselves and their friends. Duck typing is great for that situation because there’s no customer.

            I meant a language like Lisp where you can just easily (fry (fridge 'onion)) and as long as the object the fridge returns for an 'onion request message works with whatever fry is doing, you’re golden.

            For example, this is a complete running brev program:

            (define fridge (ctq onion alliumlicious peanut-butter crunchy))
            (define (fry trait) (print "Mmm! Frying the " trait " ingredient!"))
            (fry (fridge 'onion))
            (fry (fridge 'peanut-butter))

            Mmm! Frying the alliumlicious ingredient!
            Mmm! Frying the crunchy ingredient!

            It’s not only the brevity that’s good. It’s the ease of extensibility, how I can fry lists and numbers and vectors and how I can do other things to stuff in the fridge:

            (fry #(very curious))
            (fridge 'dog 'friendly)
            (define (admire quality) (print "You look " quality " today!"))
            (admire (fridge 'dog))

            You look friendly today!

            1. 2

              I like the framing of, “people programming [for] their friends”. That matches the best experiences I’ve had with Ruby or Python projects: a small crew with tons of shared context who just want to sling a bunch of code to do simple stuff quickly, or experiment with wild new approaches without much ceremony. It’s great, and much like a casual dinner party, you can get something very satisfying put together in an ad-hoc way.

              Unfortunately, it doesn’t lend itself to making recipes (a.k.a. “libraries”, or even “interfaces”) that can be consistently reproduced in someone else’s kitchen, in larger or smaller quantities, or even in places that don’t understand the same idioms and habits.

              That doesn’t mean you can’t build “serious” software in a dynamic, duck-typed language. It does mean you’re inevitably going to end up with bugs, hand-rolled test cases, and more complex application code, because you’re doing a ton of validation and attribute/feature detection at runtime.

              So in the spirit of, “yes, and”: for certain types of projects and teams, lightweight typing – be it method interfaces, internal structure, or optional annotations – is a totally appropriate level of formalism. Combined with a good test suite, some fuzzing/property testing, and excellent docs/training, it can get you a long way.

              I personally like stronger type systems these days because I’m old(er) and (more) conservative in how I build things. My memory isn’t what it used to be, and my tolerance for accidental fires – which I might have thought hilarious in my wayward youth – is effectively nil. ;)

              1. 1

                Mmm. I’ve used a duck language in a larger project with a schema validation library and I was like “is this just type annotation with extra steps🤔” although in the end I never made up my mind if it were or not.

                I call brev a footgun language with tons of magic🤷🏻‍♀️

            2. 1

              (fry (fridge 'onion))

              why would you fry a fridge containing an onion?

              1. 1

                Functions work from the inside out, so what this line of code does is take the onion out of the fridge and then fry it.

                In a more verbose language, it’d be:

                itemToFry = fridge.getItem(onion);

                (Plus forty thousand unnecessary pairs of curly braces around every little thing but I’ll leave them to your imagination.)

        2. 2

          I sort of agree with you in theory. However, when using compile-time type checking of any sort, I’ve never really run into the issue you describe in practice. But a large part of my personal hacking starts with modeling data and its various states. If your personal hacking doesn’t typically revolve around that, then your experience will be very different from mine.

          1. 1

            That makes sense to me 👍🏻

    5. 3

      welp, i ran into more or less the problem cited early this year. two things that were relatively simple (the author’s first condition, the interface must be simple enough to accidentally implement) quacked the same way for quite a long time. they were used interchangeably via different code paths, causing the author’s second necessary condition (the objects have to get mixed up). when i changed the behavior of one of them the third condition was met (something bad happened).

      the bad thing wasn’t the end of the world, and i’m not suggesting the first two conditions were the result of stellar design or whatever. i’m just saying that in my small sample (me), it’s more common than suggested in the article. i did like the article by the way.

    6. 3

      Hm good way of breaking it down … I agree, and being substantially through a programming career and not having seen the bug, that is one more data point in support :)

    7. 2

      Good read! I remember wondering about this when Go came out. I didn’t think too hard about it, but it seemed like a potential problem. But now, thinking through the things that actually have to go wrong, I absolutely agree with the author.