1. 31
  1.  

  2. 28

    This begins by complaining about a pointlessly vague principle, then immediately suggests that we just write simple code, leading me to expect a work of satire.

    Unfortunately, it appears to be an attempt at a serious treatment of the topic.

    These slides miss the point so comprehensively that I’m genuinely curious what the author believes they are correcting.

    A few of the more egregious errors:

    • Confusing dependency inversion with dependency injection.
    • The idea that “fits in my head” is a good rubric for module size (unless all maintainers through time have uniformly sized heads).
    1. 10

      I get irate when the phrase “write simple code!” is used; you think we don’t try to do that already, that the majority of programmers are going around trying to write the messiest, most complex code bases they can? Simple code is hard to write, because it is extremely hard to find the true essence of a particular problem. I’m happy for the author if anytime he has to write a program he is able to correctly identify the core problems that have to be solved and which requirements can be folded into those core components, but I somehow doubt it. I don’t care too much for SOLID myself, but if we can’t find that simple solution, it’s not necessarily bad to have practices that can help us organize the complexity, and that this organization be documented and understood by many programmers.

      (I also want to reinforce a point someone astutely made a few weeks ago: a piece of code is not “simple” because you came up with it. It may seem natural and straightforward and easy to understand to you, but it probably looks like a gigantic mess to anyone else.)

      1. 5

        I get irate when the phrase “write simple code!” is used; you think we don’t try to do that already, that the majority of programmers are going around trying to write the messiest, most complex code bases they can? Simple code is hard to write, because it is extremely hard to find the true essence of a particular problem.

        Your first statement doesn’t hold in most enterprise or FOSS development. Developers there write whatever is most convenient for them under their constraints, mostly just cranking out code. Ridiculous, crufty legacy codebases abound and grow in size daily. The number of basic defects also shows there’s barely any review, much less the thoughtfulness needed to keep code truly simple. Fortunately, you noticed that as you were writing that last line. An example of the few who do aim for simplicity would be the one or two Lobsters who claim to remove more code than they add on each job. Also Niklaus Wirth, who takes it to an extreme that’s still somewhat practical.

        Author’s point on needing more simple code still stands. Unfortunately, it’s usually a social or economic issue driving all the code bloat instead of technical or scientific reasons. Author is probably preaching to the choir (developers) instead of those needing the message (managers/executives).

        1. 5

          It’s the famous “If I had more time I’d have written a shorter letter”. Your point and that of the OP are correct. Most of us are not sociopaths. We’re not trying to sabotage the rest of the team - but we have limited time, so our code is not as crisp as it could have been.

          1. 2

            There’s a big difference between not trying to curb complexity and actively trying to introduce it. In the long run, time-evolving systems tend to follow paths that are “stable under small perturbations”, and social systems are no exception: people settle for workflows such that minimal changes result in no appreciable improvement, and radical changes are, well, too radical to even bother considering.

          2. 4

            I don’t think “keep it simple” is referring to high level design, or even module level design, but on the statement and expression level.

            I’ve seen a lot of code where it’s obvious that the original author used whatever algorithm or “code structure” they first came up with and didn’t give it a second thought.

            I’ve seen even more code where developers maintaining it later on didn’t think about it either, and just followed whatever convention the original author had, even if there are better ways of doing it considering their additional changes.

            As a more concrete example, if there’s one special case while iterating over a container, it makes sense to have “if (thisItem == specialCase) { /* Do something special */ }”. If more items are special cased later on, then a big if-else statement probably isn’t the simplest or best solution any more.

            It adds up after a while, and even code that may have been simple in the first place can be an ugly mess if people don’t pay attention.
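
            A minimal sketch of that progression (assuming Java, with hypothetical item names; this is an illustration, not the code being described): once several cases are special, a small dispatch table usually reads more simply than a growing if-else chain.

            import java.util.List;
            import java.util.Map;
            import java.util.function.Consumer;

            class SpecialCases {
                // One special case: a single if inside the loop is still the simplest thing.
                static void processV1(List<String> items) {
                    for (String item : items) {
                        if (item.equals("special")) { /* do something special */ }
                        else { /* normal handling */ }
                    }
                }

                // Several special cases later: a dispatch table replaces the if-else chain.
                static final Map<String, Consumer<String>> HANDLERS = Map.of(
                        "special", item -> System.out.println("special handling for " + item),
                        "legacy",  item -> System.out.println("legacy handling for " + item));

                static void processV2(List<String> items) {
                    for (String item : items) {
                        HANDLERS.getOrDefault(item, it -> System.out.println("normal handling for " + it))
                                .accept(item);
                    }
                }
            }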

            1. 1

              I agree. It’s my pet peeve, whenever I hear someone ranting on about Bad Code, or Complex Code, or Good Code……

              ..I try to get them to be concrete about what they actually mean by saying “Write Simple Code”… after much huffing and puffing through strawman examples and casuistry, what eventually emerges is something that looks remarkably like SOLID.

              ie. SOLID are principles which can guide you towards writing simpler code.

            2. 4

              I was struggling with understanding SOLID, but once I finally got it (which really was by trial and error), it made the code written by me more robust and easier to change to fit the new requirements.

              What the author IMO tried to do was reduce SOLID to more understandable terms, but their PoV works only if you already somewhat grok SOLID. I feel it wouldn’t help me write better code if I were a newbie, because, as you said, it’s even more vague.

            3. 9
              • SRP: There is nothing vague about single-responsibility. ETL is quite clearly three different responsibilities (at least!) and if you were modeling an ETL process with objects, you would be using many objects to model that process. Defining your problem domain is where the challenge of identifying responsibilities comes from.
              • Open/Closed: I’d like to point out that if you have code that is incorrect according to the requirements, you don’t have code that works. There’s a whole lot encoded here that needs to be unpacked. First, our goal through the entire SDLC is to treat changes to requirements as new requirements; if we could always do that, we would never need to change existing code, but in practice it’s rarely true. Think of this as a “happy path” statement: ideally, you should never need to change existing code.
              • LSP: This doesn’t actually require inheritance, you know. Have you ever written a test that uses a Mock object? Tah-dah, you’ve used LSP. No, you shouldn’t build tall inheritance trees. Yes, you should favor composition over inheritance. That doesn’t mean that polymorphism isn’t valuable.
              • Interface Segregation: Yes, objects should be simple, which seems to make this redundant, but that’s wrong. Even when an object is simple and does only one thing, that one thing may involve a number of “sub-things”, or different approaches. Think of, say, a Factory class. The Factory may have half a dozen different ways of creating the object, but this particular client only cares about one of them.
              • Dependency Inversion: First, my code itself doesn’t depend on a DI framework, at least if it’s a good one. I should never need to write code that talks to the DI system; the DI system is a runtime concern. So yes, my application “depends” on it in the sense that I need it, but it is not a dependency in the sense that my code can’t compile without it. But once again, the trees are distracting the author from the forest. The DIP’s core goal is to force objects to communicate through interfaces: an abstraction which describes a kind of functionality rather than the implementation details of that functionality. This is good because if, as the author suggests I should, I throw out incorrect code in the future, I can make the change only at the layer where it matters.

              And this brings us to the core problem with this analysis: it examines each of the SOLID principles in isolation, and not in interaction. DIP depends on Interface Segregation and the LSP. Interface Segregation allows us to take an SRP object and use it in multiple contexts, based on the interface we wrap around it. And so on.
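
              A small sketch of that interaction, using the ETL example from the SRP bullet above (assuming Java; the interface and class names are hypothetical, not anything from the slides): each interface captures one responsibility, the pipeline depends only on the abstractions, and anything honoring an interface’s contract can be substituted, including a test double.

              // One narrow responsibility per interface (SRP + Interface Segregation).
              interface Extractor { String extract(); }
              interface Transformer { String transform(String raw); }
              interface Loader { void load(String transformed); }

              // The pipeline depends on abstractions, not on concrete implementations (DIP).
              class EtlPipeline {
                  private final Extractor extractor;
                  private final Transformer transformer;
                  private final Loader loader;

                  EtlPipeline(Extractor extractor, Transformer transformer, Loader loader) {
                      this.extractor = extractor;
                      this.transformer = transformer;
                      this.loader = loader;
                  }

                  void run() {
                      loader.load(transformer.transform(extractor.extract()));
                  }
              }

              // Any conforming implementation can stand in for another (LSP), e.g. a trivial one:
              class UpperCaseTransformer implements Transformer {
                  public String transform(String raw) { return raw.toUpperCase(); }
              }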

              Aside: Now I’m going to say something that people are going to really hate. Visual Basic.NET has the best approach to Interface Segregation I’ve seen. If I have an interface A, which contains a method Bar, class B could do something like this: Function Foo() as Integer Implements A.Bar. The method name and the name of the interface method it implements don’t have to match. When I access an instance of B through its interface, I can call Bar. When I access the method through its type, I call Foo. The method Foo may be the implementor of multiple interface methods from many different interfaces, meaning Foo is called whatever I want it to be, based on the interface used to access the method. None of the major OO languages do this, and it’s a shame, because this creates an incredible loose coupling between interfaces and implementors that is really, really fantastic and is the purest example of what makes Interface Segregation great.

              1. 6

                “I don’t need SOLID, I’ll just write simple code!”

                “Gotcha. So we’re allowed to sell to Maine but not offer coupons, buyers pay tax if we sell to Oregon, and we pay tax if we sell to someone in Arkansas. We can’t legally sell product X to people in Texas, while due to this esoteric legal thing we can sell Y in New York unless the user accesses our site past 9 PM on a weeknight, except if they’re in New York City or Buffalo. Also after that fiasco last month we only saved a big account by promising we’d offer Z at a 10% discount to every state in New England, but we have to do that through coupons, which means it doesn’t work for Maine… Oh, and there’s some extra restrictions if the buyer lives within ten miles of a national park.”

                “:(”

                1. 1

                  Logic programming.

                2. 5

                  Neither fully agree nor disagree with this. I have to take it topic by topic.

                  SRP:
                  I agree that the name “Single Responsibility” has proven to be a poor one, particularly because of the criticism identified here. What counts as a “single”, indivisible responsibility? Furthermore, SRP does a poor job of prescribing specific actions.

                  However, I still regularly ask myself “Does this method belong in this module/class?” or “Has this module/class grown to encompass too much unrelated behavior?” If all it does is remind the programmer to keep that in mind, I’ll call it a win, albeit a weak one. In fact, keeping those questions close at hand tends to be how I get to the presentation’s “simple code that fits in my head”.

                  Open-Closed:
                  I mostly agree and cannot claim to focus much on Open-Closed in my daily practice. I do connect with the slides’ idea of having granular-enough, simple-enough bits (purposely vague: could be modules, classes, functions, or interfaces) such that I can throw one out and quickly reproduce a better fit for the requirements. This might be both Mr. North and me misinterpreting the implicit scale of any “requirements change”, but even so I lean toward agreement with the criticism here.

                  LSP:
                  I both agree and disagree. I prefer to write code with a shallow (or no) inheritance hierarchy, and prefer to focus on the behaviors expected by an interface (be they implicit or explicit in your language of choice). However, when I have been in an environment where I was unavoidably interacting with object hierarchies, I found the LSP to be a useful tool to have.

                  Currently I work mostly in Ruby, where I don’t have “interfaces”, but in reality I absolutely do still have interfaces; they’re just not rigorously specified. I have to actively consider the invariants I am capturing while categorizing objects under a particular shared role in my domain (i.e. an interface), and thinking that through feels just like thinking about the LSP. I have even written tests that codify an interface’s assumptions and run them against each class to ensure my classes in fact meet the invariants implicit in the interface.
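
                  In a language with explicit interfaces, the same idea looks roughly like a shared contract test. A sketch, assuming Java and JUnit 5 purely as an analogy for the Ruby practice described above (the role and class names are hypothetical):

                  import static org.junit.jupiter.api.Assertions.assertFalse;
                  import static org.junit.jupiter.api.Assertions.assertNotNull;
                  import org.junit.jupiter.api.Test;

                  // Hypothetical role: anything that can describe itself with a label.
                  interface Labeled {
                      String label();
                  }

                  // The interface’s assumptions are codified once, against the role...
                  abstract class LabeledContractTest {
                      abstract Labeled newInstance();

                      @Test
                      void labelIsNeverNullOrEmpty() {
                          Labeled subject = newInstance();
                          assertNotNull(subject.label());
                          assertFalse(subject.label().isEmpty());
                      }
                  }

                  // ...and every class playing the role runs the same tests by supplying instances.
                  class UserLabelTest extends LabeledContractTest {
                      Labeled newInstance() { return () -> "user"; }
                  }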

                  Interface Segregation:
                  I do not understand the criticism here. It doesn’t seem like criticism at all?

                  Dependency Inversion:
                  In the comment about “DI Frameworks”, I suspect Mr. North has made the common mistake of confusing Dependency Inversion with Dependency Injection. Furthermore, the distaste for the latter (injection) seems to arise from experiences people have in languages that require those frameworks (or do they?).

                  I practice a very stripped-down, but incredibly useful version of Dependency Injection in Ruby every day. I combine keyword arguments to my initialize methods that have defaults, and class-instance variables on modules to configure those defaults. The default “just works”, but the flexibility is maximum when and if it is needed. The mental energy spent is minimum because there is no framework, it’s just two simple language features being used together. As for actual Dependency Inversion, I have always felt it is just an academic phrasing of “don’t build leaky abstractions”.
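
                  For readers not in Ruby, a rough analog of that shape in constructor-injection style (assuming Java; the names are hypothetical and this is only a sketch of the idea, not the commenter’s code):

                  // The dependency is expressed as a tiny interface.
                  interface Clock { long now(); }

                  class ReportGenerator {
                      // Plays the role of the configurable module-level default.
                      static Clock defaultClock = System::currentTimeMillis;

                      private final Clock clock;

                      ReportGenerator() { this(defaultClock); }             // the default "just works"...
                      ReportGenerator(Clock clock) { this.clock = clock; }  // ...but a test can inject a fake

                      String stamp() { return "generated at " + clock.now(); }
                  }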

                  1. 2

                    I also struggled with SRP, but I think I have it nailed down now to this.

                    Classes are all about enforcing class invariants.

                    If you can decompose the class into smaller and simpler classes and still enforce the invariant, you should.

                    eg. If you have a class with instance variables x,y,z,w and your class invariant is “f(x,y) AND g(z,w)”, you CAN and SHOULD obviously decompose it into a class with instance variables x,y and invariant f(x,y) and a class with instance variables z,w and invariant g(z,w).

                    ie. Classes are not “convenient bundles of plain old data”. They are guarantors of certain properties of, and relationships between, those data elements.

                    They permit a higher level of reasoning as you no longer need to consider the individual properties and relationships of the fields, but can rely on the properties of the class as guaranteed by the class invariant.
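
                    A minimal sketch of that decomposition (assuming Java, with concrete invariants made up purely for illustration): the combined invariant f(x,y) AND g(z,w) splits cleanly into two classes that each guard one conjunct.

                    // Invariant f(x, y): x <= y. Established in the constructor, never breakable afterwards.
                    final class Range {
                        final int x, y;
                        Range(int x, int y) {
                            if (x > y) throw new IllegalArgumentException("requires x <= y");
                            this.x = x; this.y = y;
                        }
                    }

                    // Invariant g(z, w): w != 0. Guarded the same way.
                    final class Ratio {
                        final int z, w;
                        Ratio(int z, int w) {
                            if (w == 0) throw new IllegalArgumentException("requires w != 0");
                            this.z = z; this.w = w;
                        }
                    }

                    // The original four-field class now just composes the two, and its invariant
                    // f(x,y) AND g(z,w) holds by construction.
                    final class Measurement {
                        final Range range;
                        final Ratio ratio;
                        Measurement(Range range, Ratio ratio) { this.range = range; this.ratio = ratio; }
                    }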

                  2. 8

                    I don’t understand why anyone would think the Liskov Substitution Principle is something that you can choose to use. If you are using a type-safe language with subtyping, the Liskov Substitution Principle automatically holds, end of story.

                    The definition of subtyping is:

                    We say that S is a subtype of T, written S <: T, to mean that any term of type S can safely be used in a context where a term of type T is expected. (…) The so-called rule of subsumption (…) tells us that, if S <: T, then every element t of S is also an element of T.

                    Source (p. 182)

                    The definition of the Liskov substitution principle is:

                    Subtype Requirement: Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.

                    Source (p. 1812)

                    The Liskov substitution principle is a tautology once you accept these definitions. Alas, if you conflate subtyping with subclassing and, furthermore, subclasses can arbitrarily mutate inherited fields and override inherited methods, the Liskov substitution principle buys you much less than you think, since this can destroy invariants that superclass implementors worked so hard to “establish”.

                    1. 2

                      Yup, as I keep telling people, LSP is not about style, it’s about bugs. If you have an LSP violation, you have a common or garden bug.

                      It might not be biting you right now. But it is there and will bite somebody, sometime.

                      1. 2

                        The point of my comment is that you can’t have an “LSP bug”, because the LSP is always true in a type-safe language with subtyping, whether you like it or not.

                        Unfortunately, languages that conflate (sub)classes with (sub)types make it way too easy to make wrong inferences about how programs behave. For example, given the following class:

                        class Animal {
                            public void makeNoise() { System.out.println("growl"); }
                        }
                        

                        Can you conclude that all Animals growl when told to make noises? It’s easy to say “yes”, but then I could define:

                        class Snake extends Animal {
                            public void makeNoise() { System.out.println("hiss"); }
                        }
                        

                        This isn’t a LSP bug. This is a bug in most people’s mental model of a programming language. The real conclusion you should make is “I don’t know for sure what makeNoise() does, because it isn’t final”.

                        1. 3

                          It’s all about class invariants.

                          The parent type’s class invariant must still hold in the child class, in addition to any extra constraints the child class’s invariant may impose.

                          If you show me an example like above, I say, What is the invariant?

                          It doesn’t look like you have one, so why do you even have a class?

                          At best you have “Plain Old Data”, at worst you just have a name space for a bundle of unrelated functions.

                          In your example, there is information outside the type system of the language. ie. makeNoise() in theory has the English-language meaning “make a noise”; in practice it is whatever those routines do, and your type system is silent on the subject.

                          1. 2

                            The parent type’s class invariant must still hold in the child class, in addition to any extra constraints the child class’s invariant may impose.

                            The supertype’s invariants always hold in a subtype. Contrapositively, if an invariant doesn’t hold in a subtype, it doesn’t hold in a supertype either.

                            If you show me an example like above, I say, What is the invariant?

                            Okay, let me show you an example with an actual (intended, but not established) invariant:

                            class Foo {
                                protected int x, y;
                                public Foo(int value) { this.x = this.y = value; }
                                public final boolean test() { return this.x == this.y; }
                            }
                            

                            Can you conclude that, given any (non-null) Foo foo, then foo.test() will always return true?

                            1. 1

                              Can you conclude that, given any (non-null) Foo foo, then foo.test() will always return true?

                              Hmm. I think you’re talking Java there, and if Java’s protected behaves like C++’s protected in this instance…

                              Then, since you haven’t defined or documented an invariant, a subclass could make test() return false by fiddling with x or y.

                              I think we’re slightly arguing cross purposes here.

                              I’m saying the LSP is important to be aware of because the compiler isn’t going to give you any help in spotting the bugs.

                              For example: Suppose Foo also had the following public method…

                              public void gotcha()
                              {
                                  precondition_assert( this.x == this.y);
                              
                                  // Do stuff that will only work if that holds.
                              }
                              

                              Then any client of Foo invoking gotcha() will “just work”.

                              But if the client gets a subtype that doesn’t enforce that precondition, it will potentially fail.

                              The compiler won’t/can’t warn you.

                              However if you documented / implemented an invariant like so…

                              public boolean invariant() { return this.x == this.y; }
                              

                              And added at the end of the constructor and the start of the destructor and at the start and end of every public method….

                              assert( invariant());
                              

                              Then if the subclass obeyed LSP and defined invariant like so….

                              public boolean invariant() { return super.invariant() && subclassInvariant(); }
                              

                              Still the compiler won’t warn you, but hopefully your unit tests will kick you in the pants.

                              Yes, spotting LSP violations is hard. Hence the usual advice to prefer composition over inheritance.

                              However, that said, there is a place for inheritance… but only if you don’t violate LSP.
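
                              Pulling the fragments above into one place, a sketch of the whole pattern (assuming Java with assertions enabled via -ea; subclassInvariant() stands in for whatever extra constraint the subclass adds):

                              class Foo {
                                  protected int x, y;

                                  public Foo(int value) {
                                      this.x = this.y = value;
                                      // Note: calling an overridable method from a constructor is itself a hazard;
                                      // it is shown here only to mirror the pattern described above.
                                      assert invariant();
                                  }

                                  public boolean invariant() { return this.x == this.y; }

                                  public void gotcha() {
                                      assert invariant();   // checked on entry...
                                      // do stuff that only works if x == y
                                      assert invariant();   // ...and again on exit
                                  }
                              }

                              class Bar extends Foo {
                                  protected int z;

                                  public Bar(int value) { super(value); this.z = value; assert invariant(); }

                                  private boolean subclassInvariant() { return this.z >= 0; }

                                  // An LSP-respecting subclass only ever strengthens the invariant.
                                  @Override
                                  public boolean invariant() { return super.invariant() && subclassInvariant(); }
                              }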

                              1. 3

                                Your position is logically consistent: The superclass implementor documents his or her invariants and assumes subclass implementors will respect them. However, I contend that, in practice, this position is untenable:

                                • For starters, most programmers don’t document their invariants.
                                • Even when they do, other programmers violate them.

                                In conclusion, if your code depends on being used in a more restrictive way than what the language can enforce, you’re asking for trouble.

                                1. 3

                                  if your code depends on being used in a more restrictive way than what the language can enforce, you’re asking for trouble.

                                  Which is why the usual advice is to prefer composition to inheritance.

                                  I program in embedded (no mmu) real-time multi-threaded C and C++ for a living.

                                  Yup. There be dragons everywhere and the language gives very very little help.

                                  Which is why I have the -W -Wall -Werror switches on in the build system.

                                  It is why we use splint (which I hate).

                                  It is why I use -w and rubocop when I speak ruby.

                                  Which is why I’m a strong advocate of TDD and unit testing (running them all under both -fsanitize and valgrind)

                                  Which is why my code is splattered with precondition asserts, and every class that isn’t “Plain Old Data” has an invariant.

                                  Which is why I detest getters and setters, as they are usually cop outs that bypass the protection on the invariant.

                                  Which is why I shove http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html under the nose of anyone who doesn’t back away fast enough.

                                  Which is why I advocate strongly for the D language, as we urgently need more compiler support. (I prefer D, but Rust still beats the pants off C/C++.)

                      2. 2

                        Practical programming in real-world languages almost always involves domain types with stronger semantics than what the compiler enforces; indeed, the whole point of programming is encoding a domain language in an executable language. The point of the LSP is to argue that type-system subtypes (types that are subtypes in the programming language) should also be domain subtypes, and in particular that, in languages where subclasses are necessarily subtypes, subclasses should only be used when that subtype relationship exists in the domain. It’s not a tautology, because we’re talking about types in two different spaces. You could generalise this by saying that your encoding of the domain language into the programming language should be a homomorphism.

                      3. 3

                        Well, I have a few friends right now who are learning to program. I have been programming for roughly two decades now, and yet (or because of that) it is very hard to give them simple advice that helps and that they appreciate.

                        At their current stage, the code they write is mostly consumed by them, so it has to work for them and nobody else. Their programs are only thousands of lines long.

                        What paradigms would you suggest? They program mostly in JavaScript and VB (which I don’t know well).

                        I tried things like:

                        • avoiding global mutable state
                        • composition over inheritance
                        • not providing indirect dependencies to constructors
                        • not exposing getters/setters for no reason
                        • …

                        Of course, this is based on the code that I saw from them. It is very hard to concentrate on some key things without overwhelming them.

                        So even though SOLID might have faults, I really appreciate the idea to find a starting set of heuristics for design.

                        1. 2

                          When I was a Ruby newbie, I read Avdi Grimm’s “Confident Ruby”. It was built on SOLID and a few other practices. However, the book itself gives specific examples of a problem and how it can be solved.

                          Unfortunately, I don’t know about any such resources for JS or VB, but you could definitely look for something that represents abstract principles (like SOLID) on concrete examples. At least for me, it was way easier to learn things this way.

                        2. 3

                          One problem with these discussions is that it really depends on where you are in development for the discussion to make sense.

                          For starters, SOLID or not, there is way more code written than needs to be.

                          I think consistency is way more important than any other principle. When starting a new codebase, of course pick principles that have meaning to you. But going SOLID on an existing codebase that isn’t is probably not a good move overall if you don’t have a long-term plan to make that codebase SOLID.

                          Finally, IMO, what I don’t like so much about SOLID is that the principles mostly apply to languages with run-time subtyping, which covers most of the more popular OOP ones. I don’t believe run-time subtyping is a good foundation to build software on; I lean more towards functional solutions and fall back to run-time subtyping only in the few situations where it makes sense. So, for me, SOLID has limited value.

                          1. 4

                            Ah I’ve been meaning to put up something like this. Now I can piggy back on it :)

                            So first, all due respect to whosoever came up with SOLID principles. I assume they are meant to be guidelines which do help at times, and not rigid rules. Hence technically the title is over-dramatic - which is fine. (2017 and dramatic titles? On the internet?! Unthinkable crime!)

                            For me, writing code is not much different from instructing someone on how to do something, except that code is instructing a CPU, with the additional constraint that the instructions I write will be read again and again, not just by the CPU but by me and other humans. Anything which applies to clarity in regular spoken languages almost certainly applies in some manner to computer code. Note that “structuring” code-bases into files and modules etc. is a different problem; perhaps that’s more akin to structuring a book or a thesis.

                            The general advocacy for clarity in spoken languages is to use simple words. There are just so many kinds of things to express that you can’t come up with heavily constrained rules that apply to everything. You have to use your common sense to understand what you are writing and describing, and to just keep it simple, intuitive, and easy to understand for the specific thing you are writing. When you are talking to someone, you don’t go look up a “way” of speaking.

                            The fact that people try to apply patterns in application code, and that they are encouraged to do so, is daunting to me. What is so hard about saying what you have to say in simple words?

                            I quote someone I’m no fan of, but they put it very elegantly:

                            When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I’m using abstractions that aren’t powerful enough– often that I’m generating by hand the expansions of some macro that I need to write.
                            (source)

                            1. 1

                              I think analogies between programming languages and natural languages are often misleading. But even so, there are “patterns” in natural language: people are encouraged to structure arguments in particular ways, to use certain forms of repetition, even to use particular template phrases. People absolutely do look up how to write an essay (and further, how to structure an exploratory vs a persuasive essay), how to write an opinion column… Just saying “use simple words” is occasionally a useful reminder, but by no means the be-all and end-all.

                              1. 1

                                “Uncle” Bob Martin collected/created the principles and popularized them from the mid-90s onward in magazine articles, usenet, and the C2 wiki, culminating in his book Agile Software Development: Principles, Patterns, and Practices. The SOLID acronym itself was coined later by Michael Feathers.

                              2. 2

                                While I don’t agree with most of the slides (sure, “write simple code” is great advice, but what does simple really mean without a context?), I was simply amazed by one of the last slides, which claimed that the SOLID principles are “too much to remember”.

                                Aren’t we supposed to be software engineers? Since when are five principles with a nice mnemonic “too much to remember”? Why do we need even vaguer advice like “write simple code” to replace SOLID?