I’m really burning out on “simplicity” posts. I get it, simplicity is good. But that doesn’t actually inform me as a developer. Why do things become complex? What kinds of simplicity are there? How do we detect simplicity? How do we know when we shouldn’t simplify? None of these posts ever answer that.
It’s like if I stood on stage and said “Be good! Don’t be evil! Being evil is bad!” Sure, everybody agrees with that, but does it actually help people make moral choices?
(Also the analogy is dumb. Yes, we should totally base our engineering practice on a movie! A movie where the engineers are wrong because of magic.)
Why do things become complex? What kinds of simplicity are there? How do we detect simplicity? How do we know when we shouldn’t simplify? None of these posts ever answer that.
Because your questions are difficult and the answers depend on a lot of factors.
I’ll tell you what I do to detect simplicity, maybe you’ll find it useful. Let’s start with a real-life example.
I needed tokens for authorization, so I reviewed the existing formats: JWTs looked conservative and Macaroons looked powerful.
What did I do? I dissected the formats. For JWTs I read the RFCs and implemented software to create and verify them (each in two languages) with various options (key algorithms).
For Macaroons I read the whitepaper, implemented a verifier based on it, reviewed existing implementations, and found differences between the whitepaper and the de-facto code, along with explanations for them. While comparing my implementation against the existing ones I found some security issues in their code. I also implemented the rest of the stack (de/serialization, a UI for manipulating Macaroons). After two months I knew precisely where the complexity in Macaroons lies, and of course those are exactly the spots the blog posts never mention (spoilers: cycles in third-party caveats, no standard for encoded caveats…)!
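For readers who haven't seen the whitepaper, here is a rough sketch of the core construction it describes, restricted to first-party caveats. This is not the commenter's code; the helper names and the predicate checker are illustrative, and the de-facto libmacaroons format differs in details (which is exactly the kind of gap mentioned above).

    import hashlib
    import hmac

    def _hmac(key: bytes, msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()

    def verify_macaroon(root_key: bytes, identifier: bytes,
                        caveats: list[bytes], signature: bytes,
                        predicate_holds) -> bool:
        # Chained HMAC: sig_0 = HMAC(root_key, identifier); sig_i = HMAC(sig_{i-1}, caveat_i)
        sig = _hmac(root_key, identifier)
        for caveat in caveats:
            if not predicate_holds(caveat):   # e.g. b"time < 2025-01-01" checked against the request
                return False
            sig = _hmac(sig, caveat)
        # The presented signature must match the recomputed chain.
        return hmac.compare_digest(sig, signature)

Even in this stripped-down form you can see where the moving parts are (key handling, caveat semantics), and third-party caveats add another layer on top of it.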
Then I looked at my JWT proof-of-concept code: it uses base64url and JSON, primitives that basically every programming environment has built in. After limiting the accepted algorithms, the entire verifier is just a couple of lines of code! It’s vastly simpler than the Macaroon one.
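For comparison, a minimal HS256-only verifier might look roughly like this. Again, this is a sketch rather than the commenter's actual code, and a production verifier would also validate registered claims such as "exp" and "aud".

    import base64
    import hashlib
    import hmac
    import json

    def _b64url_decode(s: str) -> bytes:
        return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

    def verify_jwt_hs256(token: str, key: bytes) -> dict:
        header_b64, payload_b64, sig_b64 = token.split(".")
        header = json.loads(_b64url_decode(header_b64))
        if header.get("alg") != "HS256":      # pin the algorithm, never trust the header blindly
            raise ValueError("unexpected algorithm")
        signing_input = f"{header_b64}.{payload_b64}".encode()
        expected = hmac.new(key, signing_input, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
            raise ValueError("bad signature")
        return json.loads(_b64url_decode(payload_b64))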
What’s the moral here? That you need a lot of time to see for yourself what is simple and what is complex. Now every time I see a post recommending Macaroons I can immediately tell that the author hasn’t used them in practice (compare that with the Tess Rinearson post linked at the end of that article).
That’s just one example; I routinely implement various protocols and re-implement software (ActivityPub, mailing lists, a roughtime client), and each time I discover what’s simple and what’s complex in each of them.

(By the way, your book is excellent!)
Alas, not everybody gets it. The best that these kinds of exhortations can do (all that they aim to do, as far as I can tell) is to persuade people to modify their own set of values. This doesn’t immediately result in better code… but I think it’s a necessary precondition. The only developers who will even ask the good questions you suggest (let alone look for good answers) are the developers who hold simplicity as a value.
(The analogy is pretty dumb though, and not especially motivating.)
I’ve never met a developer who does not claim to hold simplicity as a value. But as a concept it is so subjective that this is meaningless. It’s extremely common for two developers arguing for opposing approaches each to claim that their approach is the simpler one.
I get the value of exhortations, but I think more examples would be better: pairs of solutions where the simpler one meets the requirements while scoring better on a number of attributes. Developers often prefer to see the difference and the benefits rather than being told.
Exactly. This is one of those things you can’t explain in a book. When to compose, when to decompose. When to extract methods, when to inline methods. When to add a layer of abstraction, when to remove one. When is it too flexible, when is it too simplistic?
No amount of rules of thumb is going to answer those questions. I only know of one way to learn it: practice, which takes effort and, most importantly, time. That renders these kinds of posts mostly useless.

P.S. They do feel good to write, though, so people will keep writing them, and there’s nothing wrong with that either.
I agree that anecdotes like this can get old, but I’ve been meaning to write a similar post myself… on something I’ve been calling “too many buttons” syndrome. This issue pops up a ton in large pieces of software (I’m specifically thinking of projects like JIRA and Confluence) where there’s an option for everything.
Not everyone gets that simplicity is good, because simplicity can be harder to sell. “If a user wants it, we should do it” is something I’ve heard a few too many times, without anyone bothering to look at the use case or whether it could be done better. Sometimes it’s worth stepping back and looking at the complexity something will add to the project (in both code and testing… especially when it comes to options and how they interact with each other) rather than just adding all the little features.
In my experience, a lot of commercial companies that develop under tight deadlines produce a lot of suboptimal and dreadful code. Often it takes more time to produce less code, simply because the more time you spend on a difficult problem, the better you understand it. I think the reason most software is bloated and complex is that it’s “good enough”, which is optimal from an economic point of view.
The other day there was a discussion here on Lobsters about all the pieces required to run a Mastodon instance, and the popular solution of abstracting it all away in a Docker container. There are alternative implementations that depend on a smaller number of components, alleviating the need to dump everything in a container (though the question, of course, is whether these alternatives offer the same functionality).
How do we detect simplicity?
For me personally, simplicity has to do with readability, maintainability and the elegance of code or infrastructure. If someone’s solution involves three steps, and someone else can do it in two steps (with comparable cognitive load per step), I would say the latter is simpler.

What if that would cut some features you cannot miss?
You are so right. After years of experience, I am only starting to clarify my idea of “simplicity”. There are different kinds of simplicity, and most of them are not totally compatible. In my opinion some should be preferred to others, but there is no clear rule. To choose between different kinds of complexity I still rely a lot on intuition and debate, and I am still unsure my choices are the best.
Only using basic features of a language (not using advanced programming-language features) is certainly the most important aspect of simplicity. It makes your code easy to read for more people.
Don’t use too many intermediate functions, and if possible don’t disperse those functions across many different files until you really feel you are copy/pasting too much. My rule of thumb is that 2 or 3 duplications are totally fine and superior to centralising the code. It only becomes really clear that factoring code out is worthwhile when you start repeating yourself more than 6 to 10 times.
Only use an advanced feature of the language after having tried to do without it for some time and really feeling the lack of it. Some examples of what I call advanced features of a language are: class inheritance, protocols in Clojure, writing your own typeclasses in Haskell, metaprogramming (macros in Lisp), etc.
Prefer stateless functions to objects/services with internal state (see the sketch below).
Prefer pure functions (side-effect free) over procedures (functions with side effects).
Give a lot of preference to composable solutions; composable in the algebraic sense. For example, I do my best not to use Lisp macros, because most of the time macros break composability. The same could be said about type-level programming in Haskell, or metaprogramming in Ruby/Python.
For now, all those rules are still quite artisanal. I don’t have any really hard metrics or strong rules. Everything I just said is “preferable”, but I’m pretty sure we can find exceptions to most of those rules.
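As an illustration of the stateless/pure-function preference above, here is a small hypothetical example (the discount logic is made up) contrasting a stateful object with a pure function:

    class DiscountCalculator:
        """Stateful: the result depends on which calls happened before."""
        def __init__(self) -> None:
            self.rate = 0.0

        def set_rate(self, rate: float) -> None:
            self.rate = rate

        def apply(self, price: float) -> float:
            return price * (1 - self.rate)

    def apply_discount(price: float, rate: float) -> float:
        """Pure: the output depends only on the inputs, so it is trivial to test and compose."""
        return price * (1 - rate)

    assert apply_discount(100.0, 0.2) == 80.0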
Amen, +1, etc. “Simplicity” often just means that a concept fits cleanly in the maker’s head at a particular point in time. How many times have I returned to a project I thought was simple only to find I had burdened it with spooky magic because I didn’t benefit from critical distance at the time? When was the last time I deemed another person’s work “too complex” because I couldn’t understand it in one sitting and wasn’t aware of the constraints they were operating under? Answers: too often and too recently.
This is a good question (as are the others). Borrowing from Holmes, I’d say there’s a continuum from naive simplicity, to complexity, to simplicity on the other side of complexity (which is what is truly interesting).
For example, “naively simple” code would only cover a small subset (say, the happy path) of a business problem. Complex code would handle all, or most, of the business complexity but in a messy, complicated way. “Other side” simplicity refines that complex code into something that can handle the business complexity without itself becoming overly complicated.
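A tiny, made-up sketch of those three stages (all names, regions and rates here are hypothetical, purely for illustration):

    PER_KG = 1.2

    # 1. Naively simple: only the happy path (domestic, standard shipping).
    def cost_naive(weight_kg: float) -> float:
        return 5.0 + PER_KG * weight_kg

    # 2. Complex: covers the real business rules, but as a tangle of conditionals.
    def cost_complex(weight_kg: float, country: str, express: bool) -> float:
        if country == "US":
            base = 12.0 if express else 5.0
        elif country in ("CA", "MX"):
            base = 20.0 if express else 9.0
        else:
            base = 35.0 if express else 15.0
        return base + PER_KG * weight_kg

    # 3. "Other side" simplicity: the same rules expressed as data plus one small rule.
    BASE_RATES = {("US", False): 5.0, ("US", True): 12.0,
                  ("NA", False): 9.0, ("NA", True): 20.0,
                  ("INTL", False): 15.0, ("INTL", True): 35.0}

    def _region(country: str) -> str:
        return "US" if country == "US" else ("NA" if country in ("CA", "MX") else "INTL")

    def cost_simple(weight_kg: float, country: str, express: bool) -> float:
        return BASE_RATES[(_region(country), express)] + PER_KG * weight_kg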
What happens to simplicity? We trade it for other things, of course. For example, you can have simple regular expressions, but most people prefer a less simple and more powerful implementation like Perl’s.
Simplicity is often a tradeoff versus easiness, performance, flexibility, reusability, usability, etc. So simplicity is good, but those other things are also good.
Most people seem to agree that simplicity is best. However, when it comes down to simplicity for the user versus simplicity for the developer, I have seen disagreement. Each trade-off is going to be situation- and implementation-dependent, but at my job I’ve been pushing for a simpler developer environment.
In my office, there is a tendency to create exceptions to rules because it makes things simpler for the user. Since the environment has more exceptional circumstances, it tends to have more errors when people forget the undocumented exception case. In my opinion, this causes an uneven experience for the user despite being “simpler.”
My experience comes from a medium-sized, non-tech company. I work in the IT department, so we are a cost center. There is an emphasis on white-glove treatment of the revenue-producing portions of the company. YMMV.
It’s not always easy to design a simple system, especially if you don’t have a good prior model for the system you are trying to design.
I am experiencing this problem repeatedly with my current project (Curv). What happens is that the first solution I come up with to a given problem tends to be overly complex, and it can take me a long time to see that there is another solution which is both simpler and more powerful. Sometimes, when I find the simple solution, it hits me like a lightning bolt: the new design is so “obvious” that an outside observer might assume it was trivial to design, not knowing the process I went through.
The famous quote “I would have written a shorter letter, but I did not have the time” kinda summarizes this pretty well for me. Simplicity stems from a deep understanding of the problem and our ability to reduce it to its essence.
Developers are obsessed with the notion of “best practice”.
It implies that there is one correct way of doing things, and all other solutions are either imperfect or, at worst, “anti-patterns”. But the definition of best practice changes every time a new technology arises, rendering the previous solution worthless garbage (even though it still gets the job done).
Developers should be concerned with best practices. We are constantly learning better ways to do the jobs that we are doing. Some of the things that we used to do are no longer done, because there are better ways to do them; some of the new ways to do things are quite complex. The definitions change because our understanding, or our underlying technology, changes. Similarly, we updated things like “our model of the atom” when we gained a better understanding of how it actually works. Complexity isn’t necessarily bad, just as simplicity isn’t necessarily good.
It’s much more important to think for yourself than it is to strive for simplicity because some guy on this blog told you to try to be simple. Sometimes it’s fine and great to have a static site with no database, but pretty often it turns out that databases are important and do things that you kind of need. Sometimes it’s great to go for a simple tech stack and remove moving parts, but sometimes it turns out that a more complex tech stack is actually important to do the kinds of things that you need to do. The thing that’s actually important is not simplicity; it’s understanding why you should approach something a particular way.
Don’t get me wrong; there are certainly developers who find answers and then go looking for problems to solve with them. Sometimes developers make things much more complicated than they need to be. However, there’s a counterjerk to this happening where people make things much simpler than they should be, while questioning the utility of doing more. I think the simplicity counterjerk is just as destructive as thoughtless complexity.
The issue with obsessing over “best practice” is that we end up building a certain way because it’s “best practice”, not because we understand why it’s appropriate to our situation.
This is a great point. I think sometimes we fail to consider the full context for any given “best practice”, especially while in a development-intensive mode.
I think it’s important to have a touch of skepticism about best practices, especially in environments that change rapidly.
Ultimately, I think the intention comes from a good place. It seems to follow that if one is interested in following best practices at all, they want to do so out of a sense that it is the correct thing to do. This applies technically as well as socially. On its face, this may not seem problematic. However, issues arise once correctness (as a whole) and “best practices” become muddled.
Are best practices good because they are correct, or are they correct because they are best practices? This is starting to look a lot like the Euthyphro Dilemma.
People should be concerned with which practices are best, but not with ‘best practice’. The problem with the concept is that it’s used as a justification for doing something. ‘We’re doing X, Y and Z because it’s best practice’ is nonsensical; there’s someone who thinks that A, B and C are best practice as well. Justify why it’s the best thing for you to do based on its actual merits. When new, superior approaches arise, they might become YOUR best practice immediately, but for a lot of people ‘best practice’ is synonymous with ‘the way we’ve always done it’.
There’s a great talk by Rich Hickey on this topic called Simple Made Easy that I highly recommend to anybody who hasn’t seen it.