1. 49
    1. 25

      I’m somewhat sympathetic to this tendency for two reasons:

      1. We don’t teach modules to either students or professionals, nor are they seen as the most important building blocks of languages (as Harper argues).

      2. Lacking the vocabulary of modules, students/professionals still intuit they need something like a module, which is why they reach for a class! I think we should gently applaud the correct intuition here and show them a much better approach. The lone exception is experienced users of C, who would likely opt for the module-like approach of “one exported function hiding N implementation functions.”

      They’re almost there, it’s up to us to show them something better. And make it first class in tooling. Rust does a good job here.
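
      To make that concrete, here’s a minimal sketch (module and function names are mine, purely illustrative) of the “one exported function hiding N implementation functions” shape, using a plain Python module as the unit of encapsulation:

      ```python
      # geometry.py -- a hypothetical module as the unit of encapsulation:
      # one exported function, helpers kept private by the underscore convention.

      def _validate(w, h):
          if w <= 0 or h <= 0:
              raise ValueError("dimensions must be positive")

      def _scale(x, factor):
          return x * factor

      def scaled_area(w, h, factor=1.0):
          """The module's public face; callers never need a class instance."""
          _validate(w, h)
          return _scale(w, factor) * _scale(h, factor)
      ```

      No object is constructed anywhere; the module itself is the encapsulation boundary.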

      1. 2

        I agree. And reading the original/early analyses of modularity in software development (David Parnas and friends back in the ’60s and ’70s) really highlighted this for me.

        (Also the Ada language is very good at separating the various aspects of modularity in its “object-oriented” features.)

    2. 17

      (What follows is a quick guess with incomplete historical knowledge, so feel free to correct me or poke holes in it.)

      I think the core truth, perhaps the grand unifying theory of programming if I may be so bold, is that programming is just the art of mapping elements of one dataspace into elements of another dataspace. Elements, data. Transforms, functions. If you want to go up a level still, you treat the space of functions as a dataspace, and recurse until you’re back at the primordial stuff of programming. This is not particularly clever or controversial, I hope.
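
      As a tiny illustration of treating the function space itself as a dataspace (names are mine, purely illustrative):

      ```python
      # A transform over functions: it consumes two functions and
      # produces a new one, just as ordinary code consumes and produces data.
      def compose(f, g):
          return lambda x: f(g(x))

      double = lambda x: 2 * x
      inc = lambda x: x + 1

      print(compose(double, inc)(3))  # 8
      ```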

      Late 50s and McCarthy makes a language called Lisp, asking “hey, what if we used the same expression for like basically everything? would that let us make programs that operate on programs?” This is the beginning of modern metaprogramming.

      60s come and Dahl and Nygaard make a language called Simula, asking “hey, what can we do to make writing simulations easier? algol is neat, but allocation is hard so let’s try lexically-scoped garbage collection…and writing simulations with shared props is annoying, but parameterized code isn’t good enough, so maybe let’s invent inheritance and subclassing”. This is the beginnings of objects.

      Then, we had Kay et al come along and say “hey, wouldn’t it be neat if programs were really just like cells, sending messages to each other?”, and that got a lot of people interested. This is the beginnings of the actor stuff.

      We also had some boring software engineering stuff Meyer and folks did around design-by-contract saying “hey, what if we started modeling code as modules that made guarantees around what they consumed, what they emitted, and how they could fail?” This is the beginnings of interfaces, exceptions, and all that.

      There was also a fellow named Turner and his language Miranda which said, “hey, wouldn’t it be neat if we lazily evaluated everything? could we just specify everything in terms of types?”. This is the beginnings of “modern” functional programming as you’d see in Haskell or whatever.

      And then Bjarne Stroustrup cooked up C++ and everything has been terrible ever since.

      In particular, C++ muddled together a bunch of these concepts, and when industry got hold of it and had to scale it out, people ignorant of the history of things (because really, who’s gonna dig up ACM proceedings from the early 70s on an obscure Norwegian language from the decade prior?) started building their own interpretations and rediscovering weird and bad versions of these ideas.

      Other languages got built in reaction to C++, focused not on refining the original concepts that were mutilated in that language but instead on trying to clean up the ugliness that C++ had turned them into. Along the way, massive improvements in computing power meant that focus on developer ergonomics was even more important than any fundamentally good pure principles in a language.

      And so, several generations on, we have people freaking out about “OO” (really vulgar OO, a bizarre mutt of several much smaller, much more fundamentally useful ideas) languages and panning them in favor of “functional” languages, when they don’t really understand either.

      Somewhat interestingly, perhaps the academic nature of languages like Haskell is what has allowed them to present their case so strongly. The people that work with those languages and steal ideas have a much neater wellspring to draw from when making careful, deliberate hybridizations like Rust.

      Then again, C won, Javascript won, and the great unwashed masses in their glee tend to outbreed and ultimately render irrelevant those of “nobler” heritage.

      1. 1

        And then Bjarne Stroustrup cooked up C++ and everything has been terrible ever since.

        Thanks for a good chuckle, this made my morning :)

    3. 8

      There’s no ‘flamebait’ flag, and this one’s quite close to my heart after many years developing in various languages and under various paradigms.

      Choosing designs in code is sometimes difficult, but as I think the article points out, this is one area where an ‘OO’ paradigm is forced in, when it’s not just unnecessary, but confusing.

      I remember fighting the urge to write code like that shown at the top, with that urge coming from two places, as far as I can work it out:

      1. Textbooks and teachings. Forcing ‘OO’ by forcing the use of classes is a classic (sorry) trait (sorry) of a junior developer - or even a seasoned developer who’s an Expert Beginner.
      2. Idealism. Not wanting to break the OO paradigm, so using classes, because that somehow makes things OO.

      Most (hopefully most) realise soon enough that OO, like any paradigm, works best when it’s used appropriately - and understanding when it’s appropriate comes only through experience.

      I just hope that we aren’t too harsh on those with less experience, who are more prone to use the ‘wrong’ paradigm. If you’re more experienced, eventually you realise that there’s no such thing as perfect code, and very seldom such a thing as excellent code - with such an accolade being almost entirely subjective anyway.

      1. 12

        A good metaphor for OO (by Pike, I think) is woodworking. People who program in OO are carpenters. They master all the intricacies of building wooden stuff, and they can indeed build beautiful chairs, beds, cupboards, even amazing houses. A basic tenet of their knowledge is that the grain of the wood always has to be parallel to the beams that support the structures. They are right, and they can make essentially everything using this knowledge, thus creating useful, time-lasting, robust objects. Yet, even if almost everything can be built in wood, not everything needs to be built in wood. For instance, when you build a metal chair, or a bridge, or a concrete skyscraper, the notion of “grain” is moot and useless and you do not need to take it into account, but instead a set of different concepts that are meaningless for wood but very important for metal and concrete.

        In the same vein, programming languages that force you to use objects impose a severe limitation on their expressiveness. And this is OK! We do not all need to use the same language for all purposes.

        1. 4

          This talk makes a very compelling case for programming and woodworking being similar trades.


          I don’t want to spoil it but there is a fundamental tool that we build everything out of (be it functions or objects) and it’s remarkable how well this also applies to the context of woodworking.

        2. 3

          Absolutely. I love the fact it’s so easy these days to use more than one language.

          We are also lucky, I believe, that language designers have allowed languages to be multi-paradigm. I remember writing lots of C# code in an OO style, constructing a GUI and modifying it on the fly in reaction to events flying around. But large parts of the codebase would be in a functional style, thanks to LINQ. Code that simply performed steps one after another, with lots of awful side effects - such as installation / platform checking code would be in a more imperative style.

          Now I mostly work with cloud, where again there are objects passing messages, but also pipelines performing non-side-effecting operations spinning out into map/reduce - and also script-like code written in an imperative style to tell the cloud platform how to build it all up and glue it together.

    4. 6

      My sympathies are strongly aligned in this direction. I started off as an orthodox Python programmer, introduced more and more functional techniques as I matured, and moved to programming full-time in a functional, immutable language, to my great relief and joy. That said:

      This is an awfully unconvincing form of the now-classic anti-OO argument.

      The example at the top of the article does two things that the refactored, class-less version doesn’t: it handles memoization and it allows for deferred computation. The author completely handwaves these away, even to the extent of saying “no more worrying about memoization” - as though memoization were a problem to be dealt with rather than a feature!

      Of course, the OO version of the solution does these things because they involve state, and encapsulating some state behind a function is more or less exactly what objects are good at.

      Mind you, this is not an argument for object orientation—let over lambda and all that—but the framing of this article is that object orientation is just frivolous boilerplate around good old fashioned value-oriented programming, and even in the event that that is true, it’s not actually the case in the author’s own example.

      The result is that the article is unconvincing and feels half-baked. I would love to see an idiomatic implementation of a memoized function accomplishing the same task, and an examination of the tradeoffs involved. Especially in the deeply OO-by-default languages.

      1. 4

        You left out the important part–OP says ‘no more worrying about memoization (it becomes the caller’s problem, for better or worse).’ IMHO, it’s for the better–even in the OOP world, the first principle that people are taught is the Single Responsibility Principle. The refactored functional version decouples the responsibilities of calculation and memoization. It’s too easy to fall into the strong-coupling trap in the OOP world.
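
        For what it’s worth, in Python the caller-side opt-in can be a one-liner. A hedged sketch (function names invented, not the article’s code) of keeping calculation and memoization decoupled:

        ```python
        from functools import lru_cache

        # Plain function: calculation only, no caching policy baked in.
        def total_price(items):
            return sum(items)

        # The caller opts into memoization separately, so the two
        # responsibilities stay decoupled. Note lru_cache needs hashable
        # arguments, hence the tuple below.
        cached_total = lru_cache(maxsize=None)(total_price)

        print(cached_total((1, 2, 3)))  # 6
        ```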

    5. 9

      Here is the PDF of the book the article links to on Amazon. Gotta starve the beast.

      1. 6

        I clicked before opening the link and was surprised to see it’s the same book as the one my monitor is resting on :)

      2. 2

        is it legally in the public domain now? would be pretty surprising…

        1. 10

          the domain this pdf is stored at is public, which is good enough for me

    6. 6

      I don’t like to blame Java for all woes in programming, but it’s really hard not to blame this one on it. Verbs should generally not be entities. The naming gives you the clue: if you’re having to invent nouns from verbs to name your classes, then perhaps they should not really be classes.

      Now, in less restrictive languages, this is a simple fix: create a function instead. But then Java came along, decided that everything should be a class (which is actually the problem. Everything being an object is fine, and actually has nice benefits, like easily enabling functions as parameters, as long as you can have function objects =P), and poisoned the minds of so many young people.
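
      A sketch of the noun-from-verb smell and its fix, in Python for brevity (class and function names invented):

      ```python
      # The verb-forced-into-a-noun antipattern: a stateless "Calculator"
      # class that exists only to host one method.
      class TotalCalculator:
          def __init__(self, items):
              self.items = items

          def calculate(self):
              return sum(self.items)

      # The less restrictive fix: just a function.
      def total(items):
          return sum(items)
      ```

      Both compute the same thing; one of them needed a name invented for it.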

    7. 5

      the use of class for things that should be simple free functions.

      This is why I don’t like java.

      First you make a class, then you make the method static.

      because reasons

    8. 2

      Are there sufficiently smart compilers that recognise this behaviour and factor it out?

    9. 2

      There was a very interesting design decision in the Bitbucket.org codebase back when I worked there - they pushed for function-based-views rather than class-based-views in Django. The reasoning was that writing a decorator called with_repo which attaches the repository to the request is much simpler than writing a class mixin to set self.repository. With functional views, it’s clear what order “mixin” decorators are called in - top to bottom, but with class-based views you might need to understand the resolution order of the mixins, which gets to be a nightmare.
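
      A rough sketch of how such a decorator reads (all names invented, not Bitbucket’s actual code, with stand-ins for Django’s machinery so it runs on its own):

      ```python
      from functools import wraps

      # Hypothetical stand-ins for Django's request object and a repo lookup.
      class Request:
          pass

      def get_repository(slug):
          return {"slug": slug}

      def with_repo(view):
          """Attach the repository to the request before the view runs."""
          @wraps(view)
          def wrapper(request, repo_slug, *args, **kwargs):
              request.repository = get_repository(repo_slug)
              return view(request, repo_slug, *args, **kwargs)
          return wrapper

      @with_repo
      def show_repo(request, repo_slug):
          # Decorators stack top to bottom, so the order they run in
          # is visible right at the definition site.
          return request.repository["slug"]
      ```

      Compare that with tracing a `self.repository` assignment through a mixin’s method resolution order.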

      This is not to say either method is better or worse, but there are times that things get overcomplicated simply to maintain “purity”, to the point that they can be an antipattern.

    10. 2

      This sort of example comes up every time you’re using a class as a namespace, rather than as a type and factory.
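
      For instance (illustrative names), the namespace-only class next to the plain function it’s standing in for:

      ```python
      # Class used purely as a namespace: no state, never instantiated.
      class MathUtils:
          @staticmethod
          def clamp(x, lo, hi):
              return max(lo, min(x, hi))

      # The same thing as a module-level function; in Python the
      # module already is the namespace.
      def clamp(x, lo, hi):
          return max(lo, min(x, hi))
      ```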

    11. 2

      Again, this is not to say that all classes are bad! In fact, the antipattern discussed here is very close to the Builder pattern, and there’s nothing wrong with the Builder pattern – when it’s needed, that is.

      Exactly. But this is the most important part of the problem – many quarrels and misunderstandings are caused not by a lack of theory or knowledge of design patterns and idioms etc., but by different understandings of „what is needed“ (now, in the next release, during the next few years…), context, background, or differences in how long- or short-term particular developers think (i.e. „when it is needed“ and whether we should prepare for it now).

      I touched on this topic in my articles about software complexity; I should translate them to English… this particular issue in short: distinguish between internal interfaces and public API, usually choose the simplest solution for internal interfaces (less code and usually less OOP), and usually choose a more robust solution for the public API that is better prepared for future changes and can evolve in a backward-compatible way (which usually means a bit more code, more OOP and design patterns).