This article doesn’t make the same old “complex software is bad” fallacy, but it does make a related one: that large APIs intrinsically make a library difficult to use. This is rubbish - almost every library I’ve seen has bad documentation, including the simple ones - with the difference being that when a library is simple, bad documentation is much less noticeable, because if you can fit the whole API reference in a few pages, you tend not to notice the poor organizational structure or adherence to [the correct documentation format](https://documentation.divio.com/).
Similarly, the number of functions in an API has very little to do with its difficulty of use - that’s a function of (1) how well the documentation allows you to filter and find the functions that you need, (2) the actual number of concepts that the library exposes, and (3) the difficulty of using the functions you actually care about. Let’s take the Docker documentation (which is very slightly less bad than most docs) as an example - when the developers add a new CLI command, it has no effect on how difficult it is for me to continue to use Docker. I don’t even notice that I can now run “docker ai-singularity”, let alone does it make it any harder for me to use the rest of the tool.
Large APIs aren’t necessarily more difficult to use, that’s true. But it’s also true that, all else equal, a smaller API is better than a larger one.
when the developers add a new CLI command, it has no effect on how difficult it is for me to continue to use Docker. I don’t even notice that I can now run “docker ai-singularity”, let alone does it make it any harder for me to use the rest of the tool.
I don’t think that’s true. Every expansion of the surface area of a thing necessarily increases the cognitive burden of understanding it. You can just stick to your little corner, of course, but that’s sidestepping the point.
all else equal, a smaller API is better than a larger one
Absolutely. Unfortunately, all else is never equal, because the feature-set of the tool/library/framework affects API size. You can make the argument that you should try to get the same number of features with a smaller API, but that’s a matter of actual design, not merely reducing the API size, and especially not in the way that the author is suggesting (by removing useful features).
I don’t think that’s true. Every expansion of the surface area of a thing necessarily increases the cognitive burden of understanding it.
Your use of “understanding” is different than mine. You’re using it to mean “complete understanding of the entire tool”, which is not very valuable. There’s no reason for you to need to understand every feature of a tool that you’re using, unless it was designed that way - which is extremely poor design, and avoidable in almost every situation.
Examples: Linux, Docker, Firefox, Emacs, tmux, vim, gcc, llvm, Chrome, Microsoft Word, Windows, PowerShell…almost every single software tool that has any non-trivial number of users is designed such that you don’t need to understand every one of its features in order to use it effectively - and as a matter of fact almost every computer user, including the most competent ones, doesn’t completely understand all of the features of their tools.
So, your statement is trivially true, and effectively useless.
You can just stick to your little corner, of course, but that’s sidestepping the point.
If that’s the point, then the point is wrong. Good design means not needing to care about features that you’re not using. Making the API smaller in order to make it simpler means that it has bad design.
You can make the argument that you should try to get the same number of features with a smaller API, but that’s a matter of actual design, not merely reducing the API size,
That’s right. Similar to good writing, well-designed APIs are as small as they can possibly be (but no smaller).
and especially not in the way that the author is suggesting (by removing useful features).
The author is not suggesting that useful features should be removed, and the package being discussed does not remove useful features.
almost every single software tool that has any non-trivial number of users is designed such that you don’t need to understand every one of its features in order to use it effectively … Good design means not needing to care about features that you’re not using.
I agree that good design makes it possible to use something effectively without fully understanding all of its features, but “features you’re not using” are value-negative, not neutral or positive.
The author is not suggesting that useful features should be removed, and the package being discussed does not remove useful features.
If your issue is with the word “removed” - the author is pushing the idea that useful features should not be included in the first place - which is not an interesting or relevant distinction.
If your issue is with the word “useful” - colors and prompts are both useful features that the library is specifically excluding, so yes, the features being discussed are useful.
“features you’re not using” are value-negative, not neutral or positive.
I agree, but the meaning of “you” is important. The author is writing a library for others to use, and I can guarantee you that people who write CLIs with Go are going to want to use some of the features that are being specifically excluded, such as color and prompting.
More generally, if you’re writing a thing for yourself, then good design is cutting out all of the features that you’re not going to use - but if you’re writing a thing for other people to use (which the author of the article/library is), then good design is putting in features that those other people are going to use.
This relates back to the topic of APIs: a smaller API is only better if it supports its users equally well - which the Go library under discussion did not, given that it elided valuable functionality in order to shrink its API footprint. Note that it’s possible to have valid reasons to remove functionality - most notably, because the users don’t use that functionality any more - but the author explicitly mentioned wanting to make the interface smaller, which is not a valid reason (only a side-effect).
I can guarantee you that people who write CLIs with Go are going to want to use some of the features that are being specifically excluded, such as color and prompting.
Is it important that all of the features that are conceivably useful for something be included in a single package?
That’s misrepresenting my argument. I specifically mentioned two features, color support and CLI prompts, that are going to be used by many people building CLIs in Go.
The mentioned packages for those features have over 4k stars on GitHub each (significantly higher than ffcli, which as of the time of this writing has less than 900). To suggest that those are merely “conceivably useful” and not actually used by a significant number of people is disingenuous.
Nor did I say that libraries shouldn’t have limited scope - again, my point is that limiting API size is not a good reason for limiting scope by itself. Limiting scope is useful for maintainers - but it shouldn’t be portrayed as being good for users, because it rarely (if ever) is.
Limiting scope is useful for maintainers - but it shouldn’t be portrayed as being good for users, because it rarely (if ever) is.
I don’t agree; it depends on what you’re trying to optimize for. Personally, as a user, I always prefer packages with well-defined and non-leaky abstractions, with totally orthogonal features, which are easy to understand totally, rather than easy to use. This almost always corresponds to a smaller API and more limited scope. I build packages in the same way.
All forms of “optimizing for your users” ultimately reduce to maximizing total utility for them. This almost always corresponds to a feature-set that covers their needs, and because users are very diverse, so are their needs, and so the necessary feature-set is large.
I can also assure you that you’re an edge-case - I’ve heard hundreds of complaints in person (and thousands online) about software from dozens (and thousands, respectively) of users, and the number of complaints about missing features vastly outweighed complaints that “these two features aren’t orthogonal” or “this abstraction was too leaky”.
More generally, the number of complaints that I hear about missing features is orders of magnitude greater than the number of complaints about “too many features” or not being able to understand the tool - and I’ve never heard anyone say “I don’t like this tool because I can’t understand it completely” until you stated that opinion - you, sir, are an outlier among users, and your preferences are not consistent with the majority of them.
Moreover, your preference has a negative correlation with utility. You’re welcome to have it, but for my own good (as well as that of the ecosystem), I’m going to actively discourage others from adopting it as well. A tool which is easy to understand totally is feature-limited (and utility-limited) by definition, whereas you can build a feature-rich tool that is easy to use, provides value, is easy to understand in part, and gives you all the features that you need to accomplish your desired task.
I understand this perspective, and why it’s popular. But it’s a local optimum that produces a bad global result. Best way I can express it is via the commandline. The “UNIX philosophy” describes a constellation of well-scoped, more-or-less single-purpose tools, composed together to solve higher-order problems. It is strictly superior in every meaningful metric to the all-in-one approach exemplified by — I don’t know — maybe Docker, arguably git, probably jq, etc.
Of course you ask Joe Public which they like better between the coreutils and jq and they’ll say jq. But letting that kind of user dictate the direction of your project is myopic. The question “what don’t you like about X” is scoped to a single tool, or component. You will of course never receive the answer that it’s doing too much. That response speaks at the scope of the entire system, the problem domain — it’s about the design of a larger thing.
A similar story plays out in the evolution of programming languages. Everyone always wants to add features, because features have value. But that’s a truism: a feature necessarily has value. That’s not what’s important. The important consideration is the effect that feature has on the system, the language as a whole, the ways it interacts with every other feature and property of the language, combinatorially, that it touches. Languages start good and get bad over time because the original designers tend to have this (essential! important!) systems perspective in their minds as they build the thing, but almost nobody who comes later can say the same.
The Design of Design by Fred Brooks is a great book on this topic. The most influential book I’ve read professionally.
I can also assure you that you’re an edge-case - I’ve heard hundreds of complaints in person (and thousands online) about software from dozens (and thousands, respectively) of users, and the number of complaints about missing features vastly outweighed complaints that “these two features aren’t orthogonal” or “this abstraction was too leaky”.
Heh, I bet! Well, here’s one now: jq sucks. It sucks that it does all of its work in a single execution of a process, via its own unique query language, provided in a single opaque string by the user. It would be a far better tool if each invocation performed a single transformation, if its query language were far less powerful, and if it worked on text streams like ~every other tool. The resulting and substantial loss of expressive capability would be a huge step backwards at the component level, but an enormous improvement at the system level.
I never said that, nor is that my position - I believe that the right number of features is what your users need (I specifically said “so the necessary feature-set is large” - not infinite, nor ever-expanding). This means a large number of features, not every conceivable one - but certainly not an artificial restriction in order to obtain some amount of “beauty”.
But it’s a local optimum that produces a bad global result. Best way I can express it is via the commandline. The “UNIX philosophy” describes a constellation of well-scoped, more-or-less single-purpose tools, composed together to solve higher-order problems. It is strictly superior in every meaningful metric to the all-in-one approach exemplified by — I don’t know — maybe Docker, arguably git, probably jq, etc.
This is backwards. The UNIX philosophy is the one that yields a local optimum producing a bad global result, precisely because the constellation of single-purpose tools results in much greater complexity from the composition and integration of those tools. Said another way - as you make your individual tools smaller and simpler (whether they be functions or programs), the complexity of each module decreases linearly, but the complexity of the whole system increases superlinearly, because the complexity of “plumbing” those modules together grows roughly as the square of the number of modules.
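A back-of-the-envelope version of that scaling claim: if every pair of modules is a potential integration point, then the number of plumbing paths to reason about grows quadratically with the module count.

$$\text{potential interconnections among } n \text{ modules} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;=\; O(n^2)$$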
This has a lot of evidence behind it. For all of the elegance of the UNIX philosophy, you never see any functional large system built with UNIX shell scripts and command-line tools, or entirely out of tiny functions <10 lines each, because such systems are (1) fragile and (2) difficult to understand and architect due to extremely high levels of indirection. Linux won over Minix partially because of the complexity of the microkernel approach, which boils down to the UNIX philosophy. All of the tools that you named are incredibly popular, because they actually deliver concrete value to their users. You want to know of some other massive, integrated, feature-expanding tools that are very popular and extremely useful to users? The Linux kernel itself, Firefox, Blender, Emacs, vim, Visual Studio Code, Python, gcc, LLVM, LibreOffice, Krita, GIMP, Audacity, Anki, Singularity, Terraform, C++, Common Lisp (kind-of - it’s only popular among Lisp users, but notably it’s more popular than Scheme, with the main design difference being that Common Lisp throws everything and the kitchen sink in, whereas Scheme is a pretty “jewel” language).
All of these things allow users to get things done more easily than if they had to assemble their own system out of primitive UNIX CLI components - furthermore, a pre-assembled system made out of those components, with the same levels of features, would be massively more complex from a source-code perspective, as well as far less performant.
It is strictly superior in every meaningful metric to the all-in-one approach exemplified by
If this were true, then most, if not all, popular tools would be compositions of simple UNIX CLI tools, because they would be easier to understand (lower code complexity), more featureful, and higher-performance. They’re not, which further invalidates that theory, in addition to the theoretical arguments I presented above.
That is to say - whole-system complexity is increased by reducing the size of the modules your system is composed of. If you don’t agree with that, then I would like to ask you why Linux CLI enthusiasts haven’t re-written any of the above tools that I’ve named from their current language into UNIX CLI tools, and then had their solutions overtake the originals due to their superiority - or why none of those above tools were composed of UNIX CLI tools in the first place (with the possible exception of git, which may still be in the process of being rewritten from shell scripts into C, but which you already explicitly condemned as being “all-in-one”) - or why few, if any, useful programs today are composed of a large number of small functions/objects.
Now, am I arguing against orthogonal design? Absolutely not - I think that designs should be made orthogonal in order to provide value to the user. A tool with a slightly smaller number of features, with significantly more orthogonality built in, will be more useful to users than one with a few more features but much less orthogonality. What I’m arguing against is the UNIX philosophy, which takes this to an extreme, to the detriment of users and at the cost of greater whole-system complexity.
I have never understood the need for such things. Can someone explain a scenario when the standard flag package is not good enough for writing CLIs in Go? I don’t understand what more features one could possibly need.
Can someone explain a scenario when the standard flag package is not good enough for writing CLIs in Go?
Yes, I can. package flag provides no affordances for CLI tools with subcommands, and all of the complexity that arises from that kind of design, e.g. global vs. local flags.
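For illustration, here is a minimal sketch of that kind of subcommand wiring with ffcli. The import path and field names assume the v3 API of github.com/peterbourgon/ff, so treat the specifics as approximate rather than canonical; the command and flag names (mytool, serve, -addr) are made up.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"os"

	"github.com/peterbourgon/ff/v3/ffcli" // assumed v3 import path
)

func main() {
	// Global flags live on the root command's FlagSet.
	rootFS := flag.NewFlagSet("mytool", flag.ExitOnError)
	verbose := rootFS.Bool("v", false, "verbose logging")

	// Local flags live on each subcommand's own FlagSet.
	serveFS := flag.NewFlagSet("mytool serve", flag.ExitOnError)
	addr := serveFS.String("addr", ":8080", "listen address")

	serve := &ffcli.Command{
		Name:    "serve",
		FlagSet: serveFS,
		Exec: func(_ context.Context, args []string) error {
			fmt.Printf("serving on %s (verbose=%v)\n", *addr, *verbose)
			return nil
		},
	}

	root := &ffcli.Command{
		FlagSet:     rootFS,
		Subcommands: []*ffcli.Command{serve},
		Exec: func(context.Context, []string) error {
			return flag.ErrHelp // no subcommand given: print usage
		},
	}

	if err := root.ParseAndRun(context.Background(), os.Args[1:]); err != nil && err != flag.ErrHelp {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The split between the root FlagSet and each subcommand’s FlagSet is exactly the global-vs-local distinction mentioned above.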
flag provides no affordances for CLI tools with subcommands
I’ve done just that before. It isn’t overly complex.
For each subcommand, I declare a new flag.FlagSet (local flags), and then use os.Args[1] to determine what subcommand the user has requested. I think you could even still use the normal package-level flag functions for global flags as well, though I haven’t tried that myself.
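A minimal sketch of that approach using only the standard library (the tool, subcommand, and flag names here are invented for illustration):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: mytool <serve|version> [flags]")
		os.Exit(2)
	}

	// Dispatch on os.Args[1]; each subcommand gets its own flag.FlagSet
	// carrying its local flags.
	switch os.Args[1] {
	case "serve":
		serveFS := flag.NewFlagSet("serve", flag.ExitOnError)
		addr := serveFS.String("addr", ":8080", "listen address")
		serveFS.Parse(os.Args[2:])
		fmt.Printf("serving on %s\n", *addr)
	case "version":
		fmt.Println("mytool v0.1.0")
	default:
		fmt.Fprintf(os.Stderr, "unknown subcommand %q\n", os.Args[1])
		os.Exit(2)
	}
}
```

It works, but help text, flag inheritance, and nested subcommands are all left as an exercise, which is where a small library starts to earn its keep.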
I suppose I do actually see the benefit of a library to do all that for me though - I see your point.