1. 57
  1.  

  2. 24

    MISRA (the automotive applications standard) specifically requires single-exit point functions. While refactoring some code to satisfy this requirement, I found a couple of bugs related to releasing resources before returning in some rarely taken code paths. With a single return point, we moved the resource release to just before the return. https://spin.atomicobject.com/2011/07/26/in-defence-of-misra/ provides another counterpoint though it wasn’t convincing when I read it the first time.

    1. 8

      This is probably more relevant for non-GC languages. Otherwise, using labels and goto would work even better!
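
      For illustration (a made-up sketch, not from this thread; the function and file handling are hypothetical), the classic label-and-goto cleanup pattern keeps a single literal exit while still bailing out early:

      #include <cstdio>

      // C-style goto cleanup, still valid C++: early bail-outs, one exit.
      int copy_first_line(const char *src, const char *dst) {
          int rc = -1;
          std::FILE *in = std::fopen(src, "r");
          std::FILE *out = nullptr;
          char line[256];

          if (!in)
              goto done;
          out = std::fopen(dst, "w");
          if (!out)
              goto done;
          if (!std::fgets(line, sizeof line, in))
              goto done;
          if (std::fputs(line, out) < 0)
              goto done;
          rc = 0;                       // success; all paths funnel through one exit

      done:
          if (out) std::fclose(out);
          if (in)  std::fclose(in);
          return rc;
      }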

      1. 2

        Maybe even for assembly, where before returning you must manually ensure the stack pointer is in the right place and the registers are restored. In this case, there are more chances to introduce bugs if there are multiple returns (and it might make the disassembly harder to follow when debugging embedded code).

        1. 1

          In some sense this is really just playing games with semantics. You still have multiple points of return in your function… just not multiple literal RET instructions. Semantically the upshot is that you have multiple points of return but also a convention for a user-defined function postamble. Which makes sense, of course.

        2. 2

          Sure, but we do still see labels and gotos working quite well under certain circumstances. :)

          For me, I like single-exit-point functions because they’re a bit easier to instrument for debugging, and because I’ve had many times where a missing return caused some other code to execute that wasn’t expected; with this style, you’re already in a tracing mindset.

          Maybe the biggest complaint I have is that if you properly factor these then you tend towards a bunch of nested functions checking conditions.

          1. 2

            Remember the big picture when focusing on a small, specific issue. The use of labels and goto might help for this problem, but it also might throw off automated analysis tools looking for other problems. These mismatches between what humans and machines understand are why I wanted real, analyzable macros for systems languages. I had one for error handling a long time ago that looked clean in code but generated the tedious, boring form that machines handle well.

            I’m sure there’s more to be gleaned using that method. Even the formal methodists are trying it now with “natural” theorem provers that hide the mechanical stuff a bit.

            1. 2

              Yes, definitely – I think in general if we were able to create abstractions from within the language directly to denote these specific patterns (in that case, early exits), we gain on all levels: clarity, efficiency and the ability to update the tools to support it. Macros and meta-programming are definitely much better options – or maybe something like an ability to easily script compiler passes and include the scripts as part of the build process, which would push the idea of meta-programming one step further.

          2. 5

            I have mixed feelings about this. I think in an embedded environment it makes sense because cleaning up resources is so important. But the example presented in that article is awful. The “simpler” example isn’t actually simpler (and it’s actually different).

            Overall, I’ve found that forcing a single return in a function often makes the code harder to read: you end up setting and checking state all of the time. Those who say (and I don’t think you’re doing this here) that you should use a single return because MISRA C does it seem to ignore the fact that there are specific restrictions in the world MISRA is targeting.
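
            As a made-up illustration of that “setting and checking state” shape (not from the article under discussion; the parsing function and magic byte are invented), compare a forced single exit with the early-return version:

            #include <cstddef>
            #include <cstdint>

            // Forced single exit: a status flag has to be threaded through every branch.
            int parse_header_single_exit(const std::uint8_t *buf, std::size_t len) {
                int status = 0;
                if (buf == nullptr || len < 4) {
                    status = -1;
                } else if (buf[0] != 0x7F) {        // hypothetical magic byte
                    status = -2;
                } else {
                    // ... the real work only happens on this innermost path ...
                }
                return status;
            }

            // Early returns: each precondition is handled once and then forgotten.
            int parse_header_early_return(const std::uint8_t *buf, std::size_t len) {
                if (buf == nullptr || len < 4) return -1;
                if (buf[0] != 0x7F) return -2;
                // ... the real work, with no pending state to keep track of ...
                return 0;
            }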

            1. 4

              Golang gets around this with defer, though that can incur some overhead.

              1. 8

                C++, Rust, etc. have destructors, which do the work for you automatically (the destructor/drop gets called when a value goes out of scope).
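
                A minimal sketch of that (with a made-up wrapper type, not a standard library class): the destructor runs on every exit path, early returns included.

                #include <cstdio>

                // Hypothetical RAII wrapper: the destructor releases the handle no matter
                // which return statement ends the enclosing function.
                class FileHandle {
                public:
                    explicit FileHandle(const char *path) : f_(std::fopen(path, "r")) {}
                    ~FileHandle() { if (f_) std::fclose(f_); }
                    FileHandle(const FileHandle &) = delete;
                    FileHandle &operator=(const FileHandle &) = delete;
                    std::FILE *get() const { return f_; }
                private:
                    std::FILE *f_;
                };

                int first_byte(const char *path) {
                    FileHandle fh(path);           // acquired here
                    if (!fh.get()) return -1;      // early return: destructor still runs
                    return std::fgetc(fh.get());   // normal return: destructor still runs
                }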

                1. 1

                  Destructors tie you to using objects, instead of just calling a function. They also make cleanup implicit, versus defer, which is more explicit.

                  The golang authors could have implemented constructors and destructors, but generally the philosophy is to make the zero value useful and not to add to the runtime where you could just call a function.

                2. 4

                  defer can be accidentally forgotten, while working around RAII / scoped resource usage in Rust or C++ is harder.

                3. 2

                  Firstly, he doesn’t address early return from an error condition at all.

                  And secondly, his example of a single return…

                  int singleRet(int a, int b, int c){
                      int rt=0;
                      if(a){
                          if(b && c){
                              rt=2;
                          }else{
                              rt=1;
                          }
                      }
                      return rt;
                  }
                  

                  Should be simplified to…

                  a ? (b && c ? 2 : 1) : 0
                  
                  1. 1

                    Are you sure that wasn’t a result of having closely examined the control flow while refactoring, rather than a positive of the specific form you normalised the control flow into? Plausibly you might have spotted the same bugs if you’d been changing it all into any other specific control-flow format which involved not-quite-local changes?

                  2. 18

                    I believe that the “single entry, single exit” coding style has not been helpful for a very long time, because other factors in how we design programs have changed.

                    Unless it happens that you still draw flowcharts, anyway. The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming, where he makes it clear that the reason for requiring that is so that the flowchart description of a subroutine has a single entry point and a single exit point, so its effect on the whole program state can be understood by understanding that subroutine’s single collection of preconditions and postconditions. Page 19 of the PDF linked above:

                    These flowcharts share the property that they have a single entry at the top and a single exit at the bottom: as indicated by the dotted block they can again be interpreted (by disregarding what is inside the dotted lines) as a single action in a sequential computation. To be a little bit more precise: we are dealing with a great number of possible computations, primarily decomposed into the same time-succession of subactions, and it is only on closer inspection–i.e., by looking inside the dotted block–that it is revealed that over the collection of possible computations such a subaction may take one of an enumerated set of distinguished forms.

                    These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

                    In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

                    1. 7

                      An interesting variant history is described in https://softwareengineering.stackexchange.com/a/118793/4025

                      The description there is that instead of “single exit” talking about where the function exits from, it actually talks about a function only having a single point it returns to, namely the place where it was called from. This makes a lot more sense, and is clearly a good practice. I’ve heard this description from other places too, but unfortunately I don’t have any better references.

                      1. 3

                        Yes, very interesting. It makes sense that “single exit” means “don’t jump to surprising places when you’re done” rather than “don’t leave the subroutine from arbitrary places in its flow”. From the Structured Programming perspective, both support the main goal: you can treat the subroutine as a box that behaves in a single, well-defined way, and a program behaves as a sequence of such boxes, each behaving in a single, well-defined way.

                      2. 4

                        The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming

                        Also Tom Duff’s “Reading Code From Top to Bottom” says this:

                        During the Structured Programming Era, programmers were often taught a style very much like the old version. The language they were trained in was normally Pascal, which only allowed a single return from a procedure, more or less mandating that the return value appear in a variable. Furthermore, teachers, influenced by Bohm and Jacopini (Flow Diagrams, Turing Machines and Languages with only two formation rules, Comm. ACM 9#5, 1966, pp 366-371), often advocated reifying control structure into Boolean variables as a way of assuring that programs had reducible flowgraphs.

                        1. 1

                          These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

                          Maybe I’m missing something, but it seems to me that Dijkstra’s methodology supports analyzing one program component at a time, treating all others as black boxes, so long as:

                          • Program components are hierarchically organized, with lower-level components never calling into higher-level ones.
                          • Relevant properties of lower-level components (not necessarily the whole analysis) are available when analyzing higher-level components.

                          In particular, although Dijkstra’s predicate transformer semantics only supports sequencing, selection and repetition, it can be used to analyze programs with non-recursive subroutines. However, it cannot handle first-class subroutines, because such subroutines are always potentially recursive.
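
                          For anyone who hasn’t seen them, the transformers for sequencing and selection look roughly like this (the standard weakest-precondition definitions, written in LaTeX-style notation rather than quoted from Dijkstra):

                          wp(S_1; S_2,\ R) = wp(S_1,\ wp(S_2, R))
                          wp(\mathbf{if}\ B\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\ R) = (B \Rightarrow wp(S_1, R)) \land (\lnot B \Rightarrow wp(S_2, R))

                          Repetition is handled with a loop invariant plus a termination argument rather than a closed-form rule.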

                          In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

                          Dijkstra’s ultimate goal was to prove things about programs, which few of us do. So it is not clear to me that we are “getting the benefits he wanted”. That being said, having a single entry point and a single exit point merely happens to be a requirement for using the mathematical tools he preferred. In particular, there is nothing intrinsically wrong about loops with two or more exit points, but they are awkward to express using ALGOL-style while loops.

                        2. 8

                          Expression-oriented languages make single-entry, single-exit much easier. I often use this style in Rust, for instance, because I know that the code is much easier to understand if you can build up the flow chart in your head. It’s not a hard-and-fast rule though - sometimes it is clearer to return early. And the ? (try) operator does early returns under the hood, and it definitely helps readability.

                          1. 7

                            Erlang making this hard is the biggest reason I didn’t continue to learn Erlang. I don’t understand how the “just crash” philosophy and actually returning useful error messages are supposed to intersect. I have just assumed that since Erlang was made for telecom infrastructure, where it’s presumably reasonable to drop bad input rather than return errors, it simply doesn’t have an ergonomic way to return errors. At least that matches my experience with switches and routers, which tend to drop bad packets with only a few exceptions.

                            I’d be happy to learn I’m wrong though.

                            On the flip side, Swift guard statements make this pattern delightfully easy. In particular, I like that you can do a guard let and the assignment will occur in the outer scope, as opposed to if let, which makes the binding local to the body of the if.

                            For example:

                            guard let value = possiblyReturnsNil() else {
                                // handle error
                                return
                            }
                            
                            doSomething(value) // valid
                            

                            Notice the else after the guarded expression; that hints at how the scoping works. So guard let isn’t quite the same as the unless let / if not let you might expect. Aces.

                            1. 5

                              The way you tackle these things in Erlang is by tagging the return values. If you really, really need to return early, you can always erlang:throw/1, which does pretty much what return does in JavaScript.

                              Admittedly, you can end up with multiply-indented case statements in Erlang, too, but there are ways of dealing with that (depending on your style) that don’t involve non-local returns.

                              1. 2

                                I’m familiar with return tagging. The nested case statements are what bother me. I appreciate the link but that strategy is so over the top and verbose that I absolutely don’t want to do anything like it for generalized input guarding. Throw might be what I want but it seems wrong, maybe because of how I’m used to using exceptions in other languages, maybe not. Like what if your caller doesn’t expect you to throw, and it isn’t your code? You’d have to wrap everything in a catching function. It feels like way more work than it should be just to have access to early returns.

                                1. 5

                                  I use the following construct in my code:

                                  -spec fold(t(A,B), [fun((A) -> t(A,B))]) -> t(A,B).
                                  fold(R, []) when ?is_result(R) -> R;
                                  fold({error,E}, _) -> {error, E};
                                  fold({ok, V0}, [Fun|Rest]) -> fold(Fun(V0), Rest);
                                  fold(Other, []) -> error({badarg, Other});
                                  fold(_, Other) -> error({badarg, Other}).
                                  

                                  If you squint, it’s an early-exit chain of >>= (monadic binds) on the squintly-typed Either type.

                                  Then, you can use the following pattern:

                                  squinty_either:fold({ok, InitVal},
                                    [fun(V) -> {ok, x(V)} end,   %% x is total and pure
                                     fun(V) ->
                                       case some_pred(V) of      %% case statement inlined, should be sep. fun.
                                         true  -> {ok, z(V)};
                                         false -> {error, {V, not_valid}}
                                       end
                                     end,
                                     fun(V) -> ... ]).
                                  

                                  Any fun that returns {error, E} will cause the entire chain to exit with that tuple. OTOH, returns of {ok, V} will pass on the bare V to the next element in the chain. This means you only code for the relevant path. You can then deconstruct the values in function heads to further reduce case-yness.

                                  Regarding the ‘unknown code crashing your process’ problem: yes, I hear you. There is no foolproof solution to this: sometimes you do have to explicitly try-catch, sometimes your running process doesn’t care and can just crash, because the supervision tree is built in such a way that it doesn’t matter.

                                  1. 2

                                    I like this pattern overall, but I still wish it was more ergonomic. I believe the |> operator in Elixir exists to essentially do this?

                                    1. 2

                                      Yeah, ergonomics isn’t Erlang’s strong suit. It does help if all your functions have a uniform return type, so you can get by with just referencing them:

                                      fold({ok, N}, [ fun add_one/1, fun add_two/1, fun reject_odd_numbers/1 ]).
                                      

                                      where all the above functions are :: number() -> {ok, number()} | {error, any()}

                                      In Elixir, the |> operator solves half of the issue (chaining), and the with syntax solves the other part of the issue (logic/dispatch based on previous return value). Yet, there is no succinct way of combining them, i.e. implementing something like Haskell’s >>=.

                                      *edited example function name

                                      1. 2

                                        The fold strategy reminds me of pipeline from Joyent’s vasync library. Node has a callback-based runtime that totally eradicates anything resembling a call stack whenever you have to do IO, so using a library that attaches context to function calls makes sense. Dynamically executing functions just to get a syntax you like seems silly though. What about generating macro code into a header?

                                        1. 1

                                          It’s been done: https://github.com/rabbitmq/erlando The issue with macro- and parse-transform-based Erlang libraries is that they don’t ‘stick’ in the ecosystem. My guess is that developers are too used to a 1:1 mapping of code to bytecode (for reasons of debuggability). Also, parse transforms can be brittle/untestable. Basho’s lager is pretty much the only parse_transform that I’ve seen embraced extensively in the wild.

                                          From a different angle, there’s nothing dramatically silly about dynamically executing functions in Erlang. There are tons of places all around OTP that do this: see the {Mod,Fun,Args} interfaces to Supervisors and gen_servers, dynamic callbacks in gen_event, mnesia transactions, etc. fun(Blah) -> expressions even get compiled to ‘named’ functions during lambda lifting, and they are very efficient, unless of course huge environment captures are involved. Erlang is in fact very introspective and dynamic – I’d go as far as to say that it is almost a lisp, at heart. I’m certain Robert Virding played no small part in making it so.

                                          1. 1

                                            From a different angle, there’s nothing dramatically silly about dynamically executing functions in Erlang.

                                            Yes that’s true. I was thinking about it purely from force of habit—I normally use C++. I just traced a performance issue in some code using std::function down to an allocation in libstdc++ that only happens if the function is a lambda with captures. 8 bytes made all the difference. So you can see what I’m used to thinking about. ¯\_(ツ)_/¯

                                  2. 4

                                    The nested case statements are what bother me.

                                  I’ve generally addressed this with two solutions: 1) toss the nested handling into a function call; 2) case on a tuple that has everything that can be evaluated in it, i.e. case on the Cartesian product of the options. 2 obviously only works if your nested cases don’t depend on each other.

                                    1. 3

                                      Elixir addresses nested cases with the with macro.

                                      1. 2

                                        I should give Elixir a spin, given how much I love Ruby.

                                      2. 1

                                        I’ve done the nested handling as function calls. It annoys me because it splits up all the code in a Node.js kind of way. But it’s decently ergonomic as the function names end up as comments for the early exit condition, and I think early exit conditions should be commented unless they’re really truly obvious.

                                        I haven’t heard of doing a single case over all the inputs before. That’s a fantastic idea! My first thought is I usually order my checks and returns to avoid writing out the Cartesian product of their conditions in if/else statements, so this strategy could be cumbersome. But pattern matching, wildcard _ in particular, should provide opportunities to merge cases. I’ll definitely have to try it out!

                                        Thinking about it also makes me curious how aggressively Erlang can optimize pattern matching. For example, suppose you have 4 independent options, one of which is expensive to compute, and there is only one valid case. If you just case over all 4 options anyway, will the expensive one only be computed in some cases? Or will all options be fully evaluated and bound before the pattern matching starts? Can Erlang detect some functions have no side effect? Of course you can manually optimize this pretty easily, but I’m still interested in seeing what Erlang can do there.

                                2. 4

                                The PHP community has morphed “Avoid else” into “Never use else”. The idea was originally taken from Code calisthenics for Java, adapted for PHP and popularized by various conference talks.

                                  The benefits of early return are obvious, but to avoid using “else” altogether is just silly and can lead to some convoluted code.

                                  1. 8

                                  The Ruby community is also obsessed with enforcing small-scale coding rules (this is the aspect I dislike most about Ruby’s culture). For example, the default settings of rubocop, the most popular Ruby linter, forbid something like this:

                                    def check_thermal_safety
                                      if widgets_temperature > 100
                                        stop_warp_engine
                                      end
                                    end
                                    

                                  It forces the user to invert the logic like this:

                                    def check_thermal_safety
                                      return unless widgets_temperature > 100
                                      stop_warp_engine
                                    end
                                    

                                  I find this less readable, and moreover, it’s defined in a style guide that has almost become Ruby’s analogue of Python’s PEP-8, and it’s enforced by the “canonical” Ruby linter with its default settings.

                                    However, avoiding long nesting, like in this article, is ok, I think.

                                    1. 3

                                      The thing I like, but many ignore, about PEP-8 is that it encourages breaking the rules when it makes sense to do so: A Foolish Consistency is the Hobgoblin of Little Minds.

                                      And Beyond PEP-8 is a good talk about how people tend to miss the point of style guides.

                                      1. 2

                                      The primary reason for applying guides without questioning might be that lots of projects run linters in CI. Your code will not pass if you don’t obey the style guide.

                                      Disabling checks for blocks of code is not very convenient in most linters. For example, rubocop allows you to disable a check before a block and enable it again after, but I’m not even sure that rubocop:enable all won’t turn on every check after the block, even if only a few checks are enabled in the config. And nowadays any comment is a red rag for code reviewers, because “comments are a code smell”.

                                      2. 2

                                        I’ve also seen this style, which in certain cases hides the conditional:

                                        def check_thermal_safety
                                          stop_warp_engine if widgets_temperature > 100
                                        end
                                        
                                        1. 2

                                          I think this style is acceptable for things like raise where it’s obvious at a glance that you would never do it unconditionally, but for anything more subtle it’s a readability disaster.

                                          1. 2

                                          This style is not that bad, but only if the text editor is configured to show keywords such as if in a color/font that stands out. It’s almost unreadable without syntax highlighting.

                                          Rubocop does not issue a warning for this style. But this formatting option is available only if the condition expression and the body are small, which means the style guide requires you to rewrite the code (negate the logic, add a return) if the expression becomes larger (for example, if a method is renamed).

                                            1. 1

                                              For what it’s worth, as a full-time ruby dev for 6 or 7 years now, this is my take on the “ruby idiomatic” way of writing this method.

                                        2. 3

                                        LLVM style is similar: don’t use else after a return.
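
                                        A tiny made-up example of that rule (not taken from the LLVM coding standards document itself):

                                        // Discouraged: the else is redundant, and the main logic gets indented.
                                        int sign_with_else(int x) {
                                            if (x < 0) {
                                                return -1;
                                            } else {
                                                return 1;
                                            }
                                        }

                                        // Preferred: return early, then carry on at the outer level.
                                        int sign_early_return(int x) {
                                            if (x < 0)
                                                return -1;
                                            return 1;
                                        }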

                                          1. 1

                                          When I submitted my first patch for a large C++ project at my first job, the reviewer returned it with a demoralizingly large list of style nitpicks, to help get me up to speed with the expected code style of the codebase. One of them was exactly this point about preferring early returns over nesting the main logic in an if-else block.

                                            1. 1

                                              With respect to resource leaks and single exit point functions….

                                            I would argue that if you….

                                            • reduce the scope of the variable referencing the resource to a minimum, and
                                            • use a “const”, static-single-assignment naming pattern (ie. think of variables as labels you affix to things, rather than pigeonholes you can stuff things into),

                                            …the point at which you must release resources tends to become a lot more obvious. Furthermore, you are provided with an excellent starting point for separating resource allocation from logic.

                                            ie. If you have a complex tangle of resource allocation (which may fail due to lack of resources) and logic (which may have multiple failure exits), separate them into two functions….

                                              1. One function just does resource allocation and release, and when and only when it has all resources needed to do the job successfully, invokes…
                                              2. The complex tangle of logic with multiple failure exits.

                                              The result is simple, clean and refactorable.

                                            In fact, that’s a fairly general principle that makes your code cleaner and more testable…. Move resource allocation up the call graph and logic down. (By up and down I mean that if A calls B, then B is below A.)
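
                                            A small sketch of that split (with invented names): the outer function only acquires and releases, and the inner function is free to fail early.

                                            #include <cstdio>

                                            // Logic only: multiple failure exits, but nothing to leak.
                                            static bool sum_positive(std::FILE *in, long &total) {
                                                total = 0;
                                                long value = 0;
                                                while (std::fscanf(in, "%ld", &value) == 1) {
                                                    if (value < 0) return false;   // failure exit: bad input
                                                    total += value;
                                                }
                                                return true;
                                            }

                                            // Resource handling only: acquire, delegate, release.
                                            bool sum_positive_in_file(const char *path, long &total) {
                                                std::FILE *in = std::fopen(path, "r");
                                                if (!in) return false;             // nothing acquired, nothing to release
                                                const bool ok = sum_positive(in, total);
                                                std::fclose(in);
                                                return ok;
                                            }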

                                              1. -1

                                                Hmm. I like early return code…. but I will note a larger and worse anti-pattern.

                                                Someone writes a function as described in TFA…. and then someone else (sometimes the same person) invokes that function….

                                                …but doesn’t know what to do if it returns an error code.

                                                Often the error code really means “You failed to meet my preconditions, therefore whoever invoked me has a bug, and the only fix is to fix the code that called me”.

                                              Of course, the client code, believing that it is Doing Things Right, casts the error code to (void).

                                                Then everybody wonders why the code is flaky and sporadically does The Wrong Thing.

                                              Of course, if the function concerned had said “You have failed to meet my preconditions, fix yourself now, here is a handy stack trace of how we got here, call me again when you aren’t so full of shit.” ie. Invoked abort() instead of return, we wouldn’t be shipping bugs.

                                                To me, error codes are a code smell. They have a narrow range of applicability, (eg. where races may occur between query and use) but usually they are just stink.

                                              Usually they are A Good Idea only if the Mad Irrational Lazy Idiot Who Asked me to Do X turns into a conscientious hyperrational being who checks these return codes and magically starts writing correct error handling code instead of his usual crap.

                                              Usually they turn Do_X() from a verb, where I know X has been done when Do_X() completes, into “X might have been done (unless anyone anywhere has been an idiot at any time).”

                                                Guess which code is easier to analyze, debug and maintain?

                                                ie. I usually refactor to early return.

                                                And then analyze whether any clients are doing anything sane with the results.

                                                Usually no.

                                                Then I replace the checks with asserts that abort() on failure.

                                                Guess what?

                                                I find and fix a whole bunch of bugs in the process.
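
                                              A toy version of that refactor (names invented), going from a routinely ignored error code to an assert that fails loudly at the caller’s bug:

                                              #include <cassert>

                                              // Before: a precondition violation escapes as an error code that
                                              // callers routinely cast to (void).
                                              int set_speed_checked(int percent) {
                                                  if (percent < 0 || percent > 100)
                                                      return -1;
                                                  // ... apply the setting ...
                                                  return 0;
                                              }

                                              // After: a violated precondition is the caller's bug; fail loudly,
                                              // right here (assert prints file/line and calls abort() when it fires).
                                              void set_speed(int percent) {
                                                  assert(percent >= 0 && percent <= 100);
                                                  // ... apply the setting ...
                                              }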

                                                1. 2

                                                  Yes to asserting preconditions. But error codes are a code smell? That’s outrageous. Plenty of things legitimately fail in ways other than invalid preconditions.

                                                  1. 1

                                                    Plenty of things legitimately fail in ways other than invalid preconditions.

                                                    True.

                                                  Especially with something like the POSIX open() call: there is fundamentally a race between any other process doing things to files and directories and your open(), and only open() can tell you, via an error code, whether you won.

                                                    So, no, an error code is not a code smell in such a case.

                                                    However, they are a lot less common than people think.

                                                    However, if I review your code and find…

                                                    • The functions you write return error codes.
                                                    • And nothing anywhere handles them properly.
                                                    • And all the error handling code is untested and untestable.
                                                    • Or worse just casts away the error code.

                                                    I bet you I will find bugs in your code.

                                                    I will re-write to not return error codes but just die messily as soon as the problem is found.

                                                    I will massively reduce your line count and improve your readability, and ship something that is less flaky.

                                                    And I will find more bugs in the code that calls your routines.

                                                    How do I know?

                                                    Decades of doing just this.

                                                  Error codes are typically a sign of a programmer abdicating responsibility: it’s too hard to think about how this is being used, so I will throw the problem upward and outward and let that guy handle it.

                                                    Except “that guy” is seldom any better than you, and often has less that he can do about it, and is busy thinking about other things.

                                                    http://www.monkeyuser.com/2017/future-self/