As with many things, there’s a lot that can be lost in the details. The example in the article was just short enough that it fitted on my screen at the font size that it used. That means that I can keep the whole thing visible without scrolling, which is a very nice property because anything that refers to things earlier has the things that it refers to visible.
For me, the rule of thumb when outlining a function (particularly with a toolchain with an always-inline option that means outlining doesn’t affect code generation) is ‘does looking inside this block help comprehension of the function to a reader?’ That’s a subjective judgement but it’s useful. Both of the versions of this in the article are contrived, but the key part for me is that I can read the ‘createPizza’ function on the right and not care what prepare, bake, and box do. If you’re following a ‘no (or, at least, minimal) global state’ rule (which every style guide that I’ve seen since the ’80s has encouraged) then you know that each of these steps is just modifying the pizza.
On the other hand, both of them are bad for a few reasons that the article touches on:
Why are you creating a new oven for every pizza?
Why isn’t bake either a method on an oven or a function that takes a pizza and an oven?
order.Size is very badly named, because it’s the size of the pizza not the size of the order.
Why is the cheese a string not an enumeration? Are toppings also stringly typed? At some point, you’re probably going to want to be able to filter them by dietary requirements and so not having some way of checking for coverage in the type space is going to lead to bugs.
Similarly, why is the kind of pizza not an enumeration? If I order a “four cheese” pizza, I will get a pizza with no toppings. I should get some kind of pizza-type-not-supported error.
Both completely lack error handling. This is weird for the original because the error handling cases in the straight-line version will become a lot less readable, because there are lots of points where you would want to exit the straight-line path and need to manage cleanup. The straight-line version here is much simpler in a language with RAII, but in Go or C it will get exciting.
Why does SlicePizza need to know the size of the base from the order, why is that not a property of the pizza?
Why does the box need to be explicitly closed? Why do you not cut the pizza and then insert it and atomically close the box in the box’s constructor? This code could easily be correct by construction but instead makes the caller responsible for ensuring that they remember to close the box. You could also use a box-builder pattern to avoid this kind of thing if you don’t want to put everything in the constructor.
When you are discussing two bad pieces of code, it’s hard to draw any conclusions about which is in a worse style.
Similarly, why is the kind of pizza not an enumeration?
Because it’s written in Go. Go programmers are typically fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding enums but we want to use them to bake pizza.
Wasn’t this code written by Google’s security team? I’d expect them to write half decent code. I recently wrote some Go code for the first time in a decade and it does now have type safe enumerations (though all enumeration values in a package are in a flat namespace, because Go does so love to copy the mistakes of C).
Cargo cult programming isn’t good, best practices are not always best, depending on context, etc. Somewhat ironically, the samples I noticed in books like “Clean Code” are some of the worst I’ve seen.
However, long functions are a code smell. It’s not always bad, but a “long function” is a symptom that a piece of code does too much. And the obvious problem with a black box that does too much is that it may be hard to test. And from a maintainability perspective, the behavior of a long function may be hard to document and understand.
There are good reasons for why the best practice exists. Ofc, the real world is too complicated for such simplistic pieces of advice. But in a team, I’d rather work with people that at least care about the maintainability of their work.
A colleague of mine asked: “Why do you violently reject the book? You already do most of the good things in it.”
And I was like: “I recommend you get there without the book.”
I think the article shows a particularly clean example. In reality, such code may be longer, with less clear (re)use of variables. I prefer Fowler’s advice — if you feel you need to add a comment inside a function describing what the next section does, it should be a call to an appropriately named helper function instead.
I’d prefer to read it as a story and only break it out into functions if those functions would stand a reasonable chance to be reused.
I get there are cases where breaking it out into functions would serve as a comment/chapter marker in and of itself and I’m open to it but I’d err on the side of inlining.
Most of the time I would disagree with Fowler. In my code too.
Sometimes his advice applies, sometimes not. I believe the criterion is how difficult the outer algorithm is. If we just do operations a, b, c, … z the outer algorithm is trivial and does not merit enhanced readability. Inlining everything that’s used only once is a no-brainer, even if the result is 50 sections spanning 700 lines. But if the outer algorithm is some intricate mess with nested recursions and weird edge cases or whatever, then hoisting out sections will likely help a ton.
In a 700-line function I’m very concerned about scope of variables. If section 12 sets num_widgets, and section 47 uses it, is this number still accurate, or did something add or remove widgets in the meantime? If section 24 overwrites it, is it intentional correction, or reuse for a different set of widgets by accident?
Basically in large functions locals become globals. “Arguments” and “return values” of each block are not explicit. And if you start intentionally nesting blocks, naming them in comments, clarifying their inputs and outputs, you’re getting close to having functions with informal syntax.
So I think there’s value in encapsulation, even if the smaller functions are used only in one place.
In a 700-line function I’m very concerned about scope of variables.
So am I. My solution is to put each section in its relevant scope. Then I can easily distinguish a block scoped variable from a function scoped variable… that is, if the language designers didn’t botch it and make all variables function scoped instead of doing the right thing.
And if the language designers did botch it in this way, then creating a lot of smaller functions is the only tool you have to create a lot of smaller scopes that are easier to reason about.
I prefer files around 500-1000 lines because I think about code and editor windows spatially, and most editors aren’t great at having the same file open in multiple windows. Larger files => More within-file dependence => more jumping around on the scrollbar instead of viewing things on the same screen or tabbing back and forth.
Eventually you learn to deal with this in various ways (editor that lets you see the same buffer in multiple windows/frames; folding; …), but it can still be pretty annoying. I end up feeling “spatially lost” / un-anchored a lot more in files bigger than a couple thousand lines.
For my latest work project I went with the opposite method and the whole React frontend fits in one 6k lines file. It’s honestly great. You just open multiple views of the same file.
I haven’t had the same experience. I’m used to opening the same file in different windows. My bad experiences are more around opening a file with unexpectedly trivial contents (e.g. 20 different exceptions in 20 different files, 19 of which are empty exceptions), or not knowing which file a function lives in, which can be solved by using an editor that indexes by function, but that is slightly less convenient than searching for the function in the same file. I have hit problems where a file with too many unrelated contents makes search and replace harder, so one file isn’t any better there, but it’s unclear to me how the same file in different buffers is worse than different files.
Worst code bases I’ve worked on are ones with arbitrary function length limits. You end up splitting up stuff for the sake of it.
Technically, “big functions are a code smell” is right. But prematurely splitting code because “big function bad” just makes it harder to figure out a sensible abstraction further down the line, because you lose all locality of behaviour.
Sometimes I think, too, that design is somewhat emergent: you see it when you work out how you are going to make use of extracted pieces elsewhere.
Extracting too early, and you may make the wrong choices for what you extract, and create an awkward interface that other things begin to rely on, and the matrix of things you have to refactor then increases dramatically.
I’ve definitely noticed this same trend where more novice programmers want to extract everything immediately, even when you’re writing a function where the pieces are unlikely ever to be called anywhere but in that one spot, and where it’s easier to debug and understand inlined.
This (and most of the comments so far) seems to neglect one very important detail:
No two readers are the same.
I can think of cases where someone would prefer either the left or the right:
Left:
You’re stepping through the code in a debugger
You’re looking for a tricky bug that’s hidden in the details
You prefer to think in low-level details and don’t want to be distracted by the high-level “systems” view
Right:
You’re trying to answer a question from a “business” person about what steps a process takes
You’re skimming through code, trying to figure out what various modules do
You prefer to think in high-level “systems” terms rather than get bogged down in low-level details
In any case, no matter how you write the code, someone will be unhappy with it. You can’t cater to everyone, yet everyone has legitimate reasons for preferring one or the other (depending on their personality OR their current task).
I can think of a few more important details neglected.
For one, how stable is this code? Is heatOven something that’s likely to be changed? Maybe to tweak the temperature, maybe the company frequently changes ovens, whatever. If that bit is gonna be fiddled with a lot, it probably makes sense to isolate it. If it’s actually stable, then meh, inline is probably fine.
That’s a good architectural reason to split the code, though; it is not about readability.
But from that perspective, here’s another thing - how big is this code? It’s easy to read things inline like in this example when both of them together fit into the screen. But business-grade code is frequently going to be much thicker. And yes, baking pizza is simple in this example.
But what if he had to go fetch the flour first? And they’re out of whole wheat, can we use another type? Oh, no, wait, is the order gluten-free, which flour then? Oh, no, the flour shelf is empty. I need to go fetch another one from the long-term storage. Do I put the order aside, or is the storage far away and I’ll have to throw away other ingredients?
And that’s just the first ingredient of that pizza. What about the toppings? What is a “proper heating temperature”?
In my eyes, it’d be much more readable to getFlour() inline, and deal with the fiddly stuff like business logic and infrastructural logic and retries and all the other fun stuff somewhere else.
This is where the first part (architectural stability) comes back into play. Of course I can make the whole pizza as a one long Pascal procedure. But am I going to be able to read the whole thing next summer when I’m called back from vacation because the CEO was showing off to his friends and wanted to have a goldfish pizza with crust plated in ivory and the thing didn’t work?
It’s funny that you say that because one of my references on this topic is this post by Martin Sústrik where he argues that inlining is useful precisely when context can change and you feel that the problem may become more convoluted (because it is hard to design abstractions that will remain valid in such cases).
I am the author of the blog post, and I think this is spot on. It probably just turns out I am the first guy here. I tend to run software in production and be paged to fix bugs (quickly) in my or other people’s code. I wrote here that one of my core principles is:
Think about debuggability in production. There is nothing worse than having your software break and not being able to figure out why.
I do think in high-level systems too but I do not need every single detail to be extracted to a function for this, I can do it in my mind. And having a “shallow stack” (fewer well-chosen abstractions) makes that easier for me too.
Sure, but the common denominator is bigger than we give it credit for. Though it would be easy to argue that even code locality depends on the use case: debugging vs answering a business question is a very good one.
Was about to post this – it argues quite well for the inlining stuff. In particular this bit:
Besides awareness of the actual code being executed, inlining functions also has the benefit of not making it possible to call the function from other places. That sounds ridiculous, but there is a point to it. As a codebase grows over years of use, there will be lots of opportunities to take a shortcut and just call a function that does only the work you think needs to be done. There might be a FullUpdate() function that calls PartialUpdateA(), and PartialUpdateB(), but in some particular case you may realize (or think) that you only need to do PartialUpdateB(), and you are being efficient by avoiding the other work. Lots and lots of bugs stem from this. Most bugs are a result of the execution state not being exactly what you think it is.
I’m also in the right side camp. It seems much easier to understand it from a high level perspective, maintain it, and test it.
Maybe the linear code is more readable, but then 3 years later you need to fix a small bug on line 242 of a 500+ line monster function, and you’d rather not spend your precious time understanding the whole function when only a small chunk is relevant.
It’s probably not easier to test, though. The Pizza object is shared and mutated in multiple places. That means more pre- and post-conditions to check.
Also, in my experience, in order to fix the issue on line 242 you need to understand the data structure setup at lines 200-240 and how is the result used on lines 260-270.
Do you know what happens in a few years when you fix the line 4 of the extracted prepare_pizza function to support triangular pizzas? That you find in production that your square boxes are not big enough to hold triangular pizzas, because the code that prepares the pizza is now disconnected from the code that boxes pizzas.
I’m in the right side camp, because small functions are easier to document and compose, I get a better overview when I view the entry point. I can then “zoom in” on the parts that interest me, in a more precise manner than if I had code like on the left. I rarely want to understand a full process all at once, I’m usually more interested in either an overview, or specific parts, rather than the whole. The right side lends itself better for this approach.
The left side is more convenient if you find yourself wanting to understand the whole thing more often. But even then, I’d argue that a sufficiently smart IDE that can temporarily (and/or visually) inline function calls would let you reap the benefits of both, if using the right side structure.
Ah, abstractions. But what does the right example abstract? According to the OP: “low-level implementation details (e.g., how to heat the oven), intermediate-level functions (e.g., how to bake pizza), and high-level abstractions (e.g., preparing, baking, and boxing the pizza).” Domain-specific concepts all over the place, so let’s go with that. Do we care about any of those concepts to abstract them away? That is, can we change/swap/remove/add any of the parts, or is the whole thing “atomic” from the domain’s perspective?
This is very much a matter of perspective. I personally start with the linear (left) type of code and abstract things away when there’s a need. I very much dislike any premature breaking of code into pieces. I’m of the opinion that rules like “methods no longer than 5 lines” are thoroughly misguided and more often than not produce unreadable and extremely inefficient code.
From this (good) example I would rather conclude that abstraction comes at a cost of some added cognitive overhead. Whether you want to make this trade completely depends on the context.
The examples here don’t illuminate the main issue that I have with long functions. With long functions (600+, although my examples are often closer to 2,000), I’ve seen it be much more common to have constant state which isn’t marked const and to have mutable state which has long ranging impacts. Putting initialization far away from where it’s used means that all the code between is suspect for mutating that state.
Just figuring out that operations have been applied to the variables in an arbitrary line of code gets difficult. With small functions, people tend to write code where the temporary variables used are contained within their own scope, so the distance in initialization is less and the suspect mutability is less. This is less true if the operations are done to many different fields of a god class which is passed around by pointer, which I’ve seen a fair amount of in C, and Java nudges you towards.
One thing that can help here is inline lambdas which are explicit about their captures, which gives both linearity and encapsulation of mutable state. But sometimes the other two alternatives make sense also.
I love inline lambdas! Or, sometimes what’s even better is inline block scopes.
code code code
var foo = something you'll set up in the following scope
{
let tmp = something that only exists in this scope
code that sets up foo
bonus: control flow stuff like returns go directly to the outer scope, nice
}
code code code
If a bit of code could reasonably be extracted to a function, but is only used in one place, and is not tested independently, then I almost always prefer to write it like I have above. Being able to read code in a straight line is enormously beneficial.
Excellent example of why I sometimes hate auto-formatters. When I call a function with a gazillion arguments (anti-pattern I know, can’t be helped sometimes), I like to choose how many I put in each line. Here for instance:
: compare ( c#4 c-addr -- c-addr f)
eq >r
eq >r
eq >r
eq r> r> r>
and and and ;
Still have no idea how it works, but I’m guessing this layout makes it easier to spot the patterns.
Edit: perhaps this one instead (just noticed the difference between >r and r>) ?
: compare ( c#4 c-addr -- c-addr f)
eq
>r eq
>r eq
>r eq
r> r> r> and and and ;
Or that one (counting eqs shouldn’t be too hard):
: compare ( c#4 c-addr -- c-addr f)
eq >r eq >r eq >r eq
r> r> r> and and and ;
I have to be honest with you guys. In my source it’s one line…
: compare ( c#4 c-addr -- c-addr f) eq >r eq >r eq >r eq r> r> r> and and and ; :^)
It just does a bunch of comparisons as part of checking if two nibbles are the same. Forth really is write-only, I’m sure I had a good reason for doing it this way but I cannot for the life of me be bothered to figure out why.
I was into the right side for quite some time, but in the last couple of years I try really hard to delay splitting the function until there is another caller. Usually it provides better insights into what should be factored out and what should be kept.
That is, only split the different oven drivers when you get the second one.
The other extreme is writing a unified oven driver that decides on minor differences using a bunch of well-placed ifs. This ends up being a nightmare, with no way to extend support for cool new features of the newer model without making a total mess of things.
So in conclusion, it’s probably not the function count that hinders understanding. It’s the total distance jumped where calls to familiar functions do not count.
If you structure your code linearly, you can still extract it into sub functions with some thought. You then put the “main” at the top and you can still read it top to bottom because it will just call them in order.
That said, putting one line comments throughout that let you understand the intent of a function just by reading the comments is also underrated.
I wrote this almost a decade ago, about what looks like this exact subject. I claimed that the in-line form suffered from “interpretation effort”, while the externalized form suffered from “lookup effort”, and tried to describe the trade-offs. I’ve been through a phase of my career when I’ve used small functions, and I’m probably biased in that direction today (when I’m reading code, I like to be sure of what part of what state is being used where, which I think is easier to see in smaller functions), but I’ve also seen that others respond better to the style this author advocates.
I’ve been struggling with how to respond to this article. I tend to prefer the code on the right, but there has been a lot of support for the code on the left. Ultimately, I came to the conclusion that both code samples suck.
Some operations should just be added to the objects. Why doesn’t the oven have a preheat method? Why doesn’t the pizza take toppings in the constructor or have a separate method that adds toppings?
The whole section about boxing the pizza makes zero fucking sense. Why does the box have a method for slicing pizza? Shouldn’t that be on the pizza? Why is there a boxed property on the pizza? Why is the pizza ready if the box has been closed? It might make more sense to have a separate object that handles the scheduling/status of the food and let the pizza object just be a pizza object. (Or if you want to make things more complicated use a builder or type state pattern.)
If I saw this code in a code review I would be shaking my head the whole way through. I know that the code is meant to be illustrative and its function isn’t exactly the point, but it’s really hard to talk about the structure of the code without considering how you would model the domain.
If you added some of the reusable parts to the different objects and handled the responsibilities better, then the code on the left would look a lot cleaner and make a lot more sense.
The code on the right is more rigid, and will be harder to change.
(Specifically, we’re writing a guarantee that baking wholly precedes boxing, with no overlap. Are we really sure, though, that it’ll always be like that?)
If it’s implementing some business logic (liable to change), I’m hesitant to prefer that. What if boxes need to be preheated 5 minutes before end-of-bake? I’ve got to modify bake to return a start-of-bake-time, or something (plus maybe temperature), then pass that along to my boxing subroutine, &c.
It’s definitely nice when you (can) boil some seemingly complicated algorithm to 3 function calls. But the likelihood of every piece of code having some neat form like that just seems … low.
It’s certainly easier to test the code on the right, I’ll give them that.
For years I preferred the right side style, until I read Ousterhout’s A Philosophy of Software Design. That book gives you tools to remove part of the subjectivity from the problem, like looking at function “depth” instead of its length, measuring complexity in terms of the interface’s “surface”, etc.
Now I default to longer functions with sections separated by comments (as in the OP’s last example) until I smell that the function is getting complicated or I find legitimate reasons to extract portions into their own functions.
The biggest rule of thumb for me is to ask myself whether a helper function can be used or even understood outside the context of the higher level function. If it can’t, that’s a sign that it shouldn’t be separated.
I hate function/file splitting for no reason other than “it’s too long”.
It’s for me always the smell of a novice programmer who read a rule somewhere and tries to apply it blindly everywhere.
They are riffing on the Rob Pike quote about Googlers being fresh out of school. I almost took the bait earlier myself.
I reject that book entirely (just as a personal sanity measure) even though there are some good things in it.
And sometimes there’s just a lot to do that’s not worth abstracting away.
My rationale to do the same is thus:
And then I recommend Ousterhout and his Philosophy of Software Design.
Oh, that is great.
I like Rust a lot for the fact that 1000+ line files are normalized because the tests often live in the same file.
My experience with, for instance, React projects is that I’ll have 20 files open, many of which will only contain 5-10 lines of code (mostly imports).
For my latest work project I went with the opposite method and the whole React frontend fits in one 6k lines file. It’s honestly great. You just open multiple views of the same file.
I haven’t had the same experience. I’m used to opening the same file in different windows, and my bad experiences are more around opening a file with unexpectedly trivial contents (eg. 20 different exceptions in 20 different files, 19 of which are empty exceptions) or not knowing what file a function lives in which can be solved by using an editor which indexes by function but that is slightly less convenient than searching for that function in the same file. I have hit some problems where if a file has too many unrelated contents then doing searching and replacement gets harder, so one file isn’t any better, but it’s unclear to me how same file different buffers is worse than different files.
The worst code bases I’ve worked on are ones with arbitrary function-length limits. You end up splitting stuff up for the sake of it.
Technically, “big functions are a code smell” is right. But prematurely splitting code because “big function bad” just makes it harder to figure out a sensible abstraction further down the line, because you lose all locality of behaviour.
Sometimes, too, I think design is somewhat emergent: you see how you are going to make use of extracted pieces elsewhere.
Extracting too early, and you may make the wrong choices for what you extract, and create an awkward interface that other things begin to rely on, and the matrix of things you have to refactor then increases dramatically.
I’ve definitely noticed this same trend where more novice programmers want to extract everything immediately, even when you’re writing a function whose pieces are unlikely ever to be called anywhere but in that one spot, and where it’s easier to debug and understand inlined.
This (and most of the comments so far) seems to neglect one very important detail:
No two readers are the same.
I can think of cases where someone would prefer either the left or the right:
Left:
Right:
In any case, no matter how you write the code, someone will be unhappy with it. You can’t cater to everyone, yet everyone has legitimate reasons for preferring one or the other (depending on their personality OR their current task).
I can think of a few more important details neglected.
For one, how stable is this code? Is heatOven something that’s likely to be changed? Maybe to tweak the temperature, maybe the company frequently changes ovens, whatever. If that bit is going to be fiddled with a lot, it probably makes sense to isolate it. If it’s actually stable, then meh, inline is probably fine. That’s a good architectural reason to split the code, though; it is not about readability.
But from that perspective, here’s another thing - how big is this code? It’s easy to read things inline like in this example when both of them together fit into the screen. But business-grade code is frequently going to be much thicker. And yes, baking pizza is simple in this example.
But what if he had to go fetch the flour first? And they’re out of whole wheat; can we use another type? Oh, no, wait, is the order gluten-free? Which flour then? Oh no, the flour shelf is empty. I need to go fetch another bag from long-term storage. Do I put the order aside, or is the storage far away, so I’ll have to throw away other ingredients?
And that’s just the first ingredient of that pizza. What about the toppings? What is a “proper heating temperature”?
In my eyes, it’d be much more readable to call getFlour() inline, and deal with the fiddly stuff like business logic and infrastructural logic and retries and all the other fun stuff somewhere else. This is where the first part (architectural stability) comes back into play. Of course I can make the whole pizza as one long Pascal procedure. But am I going to be able to read the whole thing next summer when I’m called back from vacation because the CEO was showing off to his friends and wanted a goldfish pizza with crust plated in ivory and the thing didn’t work?
It’s funny that you say that because one of my references on this topic is this post by Martin Sústrik where he argues that inlining is useful precisely when context can change and you feel that the problem may become more convoluted (because it is hard to design abstractions that will remain valid in such cases).
I am the author of the blog post, and I think this is spot on. It probably just turns out I am the first guy here. I tend to run software in production and be paged to fix bugs (quickly) in my or other people’s code. I wrote here that one of my core principles is:
I do think in high-level systems too but I do not need every single detail to be extracted to a function for this, I can do it in my mind. And having a “shallow stack” (fewer well-chosen abstractions) makes that easier for me too.
Sure, but the common denominator is bigger than we give it credit for. Though it would be easy to argue that even code locality depends on the use case: debugging vs answering a business question is a very good one.
John Carmack also argues in favor of the code on the left, for those who haven’t read it yet: http://number-none.com/blow/john_carmack_on_inlined_code.html
Was about to post this – it argues quite well for the inlining stuff. In particular this bit:
John Carmack really changed my perspective on this when I first read that post. I should probably frame that summary.
I’m also in the right side camp. It seems much easier to understand it from a high level perspective, maintain it, and test it.
Maybe the linear code is more readable, but then three years later you need to fix a small bug on line 242 of a 500+ line monster function, and you typically would not want to spend your precious time understanding the whole function when only a small chunk is relevant.
It’s probably not easier to test, though. The pizza object is shared and mutated in multiple places. That means more pre- and post-conditions to check.
Also, in my experience, in order to fix the issue on line 242 you need to understand the data-structure setup at lines 200-240 and how the result is used at lines 260-270.
Do you know what happens in a few years when you fix the line 4 of the extracted prepare_pizza function to support triangular pizzas? That you find in production that your square boxes are not big enough to hold triangular pizzas, because the code that prepares the pizza is now disconnected from the code that boxes pizzas.
I’m in the right side camp, because small functions are easier to document and compose, I get a better overview when I view the entry point. I can then “zoom in” on the parts that interest me, in a more precise manner than if I had code like on the left. I rarely want to understand a full process all at once, I’m usually more interested in either an overview, or specific parts, rather than the whole. The right side lends itself better for this approach.
The left side is more convenient if you find yourself wanting to understand the whole thing more often. But even then, I’d argue that a sufficiently smart IDE that can temporarily (and/or visually) inline function calls would let you reap the benefits of both, if using the right-side structure.
Ah, abstractions. But what does the right example abstract? According to the OOP: “low-level implementation details (e.g., how to heat the oven), intermediate-level functions (e.g., how to bake pizza), and high-level abstractions (e.g., preparing, baking, and boxing the pizza).” Domain-specific concepts all over the place, so let’s go with that. Do we care about any of those concepts enough to abstract them away? That is, can we change/swap/remove/add any of the parts, or is the whole thing “atomic” from the domain’s perspective?
This is very much a matter of perspective. I personally start with the linear (left) type of code and abstract things away when there’s a need. I very much dislike any premature breaking of code into pieces. I’m of the opinion that rules like “methods no longer than 5 lines” are thoroughly misguided and more often than not produce unreadable and extremely inefficient code.
From this (good) example I would rather conclude that abstraction comes at a cost of some added cognitive overhead. Whether you want to make this trade completely depends on the context.
The examples here don’t illuminate the main issue that I have with long functions. With long functions (600+, although my examples are often closer to 2,000), I’ve seen it be much more common to have constant state which isn’t marked const and to have mutable state which has long ranging impacts. Putting initialization far away from where it’s used means that all the code between is suspect for mutating that state.
Just figuring out that operations have been applied to the variables in an arbitrary line of code gets difficult. With small functions, people tend to write code where the temporary variables used are contained within their own scope, so the distance in initialization is less and the suspect mutability is less. This is less true if the operations are done to many different fields of a god class which is passed around by pointer, which I’ve seen a fair amount of in C, and Java nudges you towards.
One thing that can help here is inline lambdas which are explicit about their captures which gives both linearity and encapsulation of mutable state. but sometimes the other two alternatives make sense also.
I love inline lambdas! Or, sometimes what’s even better is inline block scopes.
If a bit of code could reasonably be extracted to a function, but is only used in one place, and is not tested independently, then I almost always prefer to write it like I have above. Being able to read code in a straight line is enormously beneficial.
Still runs into the issue that both inlined lambdas and block scopes will capture variables from the outer function, while another function will not
Yeah, but whether that’s good or bad is contextual, I think.
Forth code is even more readable ;-)
Until you just…
Excellent example of why I sometimes hate auto-formatters. When I call a function with a gazillion arguments (anti-pattern I know, can’t be helped sometimes), I like to choose how many I put in each line. Here for instance:
Still have no idea how it works, but I’m guessing this layout makes it easier to spot the patterns.
Edit: perhaps this one instead (just noticed the difference between >r and r>)? Or that one (counting eqs shouldn’t be too hard)?
I have to be honest with you guys. In my source it’s one line:
: compare ( c#4 c-addr -- c-addr f) eq >r eq >r eq >r eq r> r> r> and and and ;
:^) It just does a bunch of comparisons as part of checking if two nibbles are the same. Forth really is write-only; I’m sure I had a good reason for doing it this way but I cannot for the life of me be bothered to figure out why.
I was into the right side for quite some time, but in the last couple of years I try really hard to delay splitting the function until there is another caller. Usually it provides better insights into what should be factored out and what should be kept.
That is, only split the different oven drivers when you get the second one.
The other extreme is writing a unified oven driver that decides on minor differences using a bunch of well-placed ifs. This ends up being a nightmare, with no way to extend support for cool new features of the newer model without making a total mess of things.
So in conclusion, it’s probably not the function count that hinders understanding. It’s the total distance jumped, where calls to familiar functions don’t count.
At least for me.
It’s not either/or.
If you structure your code linearly, you can still extract it into sub functions with some thought. You then put the “main” at the top and you can still read it top to bottom because it will just call them in order.
That said, putting one line comments throughout that let you understand the intent of a function just by reading the comments is also underrated.
I wrote this almost a decade ago, about what looks like this exact subject. I claimed that the inline form suffered from “interpretation effort”, while the externalized form suffered from “lookup effort”, and tried to describe the trade-offs. I’ve been through a phase of my career when I used small functions, and I’m probably biased in that direction today (when I’m reading code, I like to be sure of what part of what state is being used where, which I think is easier to see in smaller functions), but I’ve also seen that others respond better to the style this author advocates.
I’ve been struggling with how to respond to this article. I tend to prefer the code on the right, but there has been a lot of support for the code on the left. Ultimately, I came to the conclusion that both code samples suck.
Some operations should just be added to the objects. Why doesn’t the oven have a preheat method? Why doesn’t the pizza take toppings in the constructor or have a separate method that adds toppings?
The whole section about boxing the pizza makes zero fucking sense. Why does the box have a method for slicing pizza? Shouldn’t that be on the pizza? Why is there a boxed property on the pizza? Why is the pizza ready if the box has been closed? It might make more sense to have a separate object that handles the scheduling/status of the food and let the pizza object just be a pizza object. (Or if you want to make things more complicated use a builder or type state pattern.)
If I saw this code in a code review I would be shaking my head the whole way through. I know that the code is meant to be illustrative and its function isn’t exactly the point, but it’s really hard to talk about the structure of the code without considering how you would model the domain.
If you added some of the reusable parts to the different objects and handled the responsibilities better, then the code on the left would look a lot cleaner and make a lot more sense.
Well, yeah.
The code on the right is more rigid, and will be harder to change.
(Specifically, we’re writing a guarantee that baking wholly precedes boxing, with no overlap. Are we really sure, though, that it’ll always be like that?)
If it’s implementing some business logic (liable to change), I’m hesitant to prefer that. What if boxes need to be preheated 5 minutes before end-of-bake? I’ve got to modify bake to return a start-of-bake-time, or something (plus maybe temperature), then pass that along to my boxing subroutine, &c.
It’s definitely nice when you can boil some seemingly complicated algorithm down to 3 function calls. But the likelihood of every piece of code having some neat form like that just seems … low.
It’s certainly easier to test the code on the right, I’ll give them that.
For years I preferred right side style, until I read Ousterhout’s A Philosophy of Software Design. That books gives you tools to remove part of the subjectivity from the problem, like looking at function “depth” instead of its length, measuring complexity in terms of the interfaces “surface”, etc.
Now I default to longer functions with sections separated by comments (as in the OPs last example) until I smell the function is getting complicated or I find legitimate reasons to extract portions to their own function.
The biggest rule of thumb for me is to ask myself whether a helper function can be used or even understood outside the context of the higher level function. If it can’t, that’s a sign that it shouldn’t be separated.
[Comment removed by author]
The code on the right reminds me of novice Java code “public static void FactoryFactoryManagerRunner”
I have sometimes found myself gluing functions back together in my own code, because I couldn’t understand it split up.