1. 2

      As a Portuguese speaker, that was a delight to read. Grazas!

      1. 6

        Funny story: my mother tongue is Spanish, but I lived in Brazil for a long time (7 years). One time I was visiting my family back in my home country, and over dinner the TV was on in the background. At some point, I made a comment about the topic the show was covering and asked my dad what he thought about it, and he gave me this blank stare and said: ‘huh? What are you talking about?’.

        Turns out the TV was set on the TV Galicia channel and nobody was paying attention to it because they didn’t understand the language. Gallego being kind of a mix of Spanish and Portuguese, I had been following the content without issue, assuming it was Spanish.

        It kind of blew my mind that my brain would just put it together without me ever noticing.

    1. 2

      First of all, I am from a country where you cannot use the title of engineer unless you have an academic degree in that area. Also, I am an engineer according to those standards, so I am surely running the risk of being a little bit biased.

      I agree with the article’s take on how it ultimately boils down to a language game. But once this argument is on the table, it is hard to argue against it.

      My own mental model for assessing the level of maturity of an engineering enterprise is sort of a “Kardashevian” scale where the main variables are component reuse and impact on other engineering disciplines. An engineering enterprise can:

      • Eventually reuse self-made components;
      • Create new designs mostly from self-made reusable components;
      • Export self-made reusable components to other engineering fields.

      If we agree on this half-assed, completely arbitrary and non-comprehensive scale, there is no doubt that the software industry has achieved that highest level of development. Embedded OSs and CAD tools are examples of software enterprises that are vital to other engineering fields as we know them today. In other engineering disciplines that have successfully achieved that level of development, the lower levels are usually relegated to technicians. They are the ones that usually get their certification from a six-month or full-year training program. And there is absolutely nothing wrong with that.

      But, of course, not every mom-and-pop software house will achieve this level of development; in most of them, their loosely defined team of engineers will have a hard time adapting their “generic” MVC implementation to support a new CRUD UI request.

      1. 5

        It is far from being a reliable metric, but comment density should be inversely proportional to the team’s overall expertise in a given language.

        A team of Fortran programmers that is absolutely clueless about C could benefit from an “increment i by 1” comment, pretty much in the same way a team with no bash experts could benefit from the “copy array” comment. A team of C experts would not benefit from the former, and a team of bash experts would not benefit from the latter.

        In cases where the cardinality of the team is one, the idea of adding comments becomes a simple matter of confidence. If you think you don’t have enough expertise in that language, feel free to add comments to the source code as you like.

        1. 1

          But… the expertise of the team is not fixed?

          1. 1

            Neither is the code. Newly added code will have fewer and fewer comments and, over time, files that are constantly maintained should see a drop in comment density as well.

            (Or so it goes in my head.)

            1. 1

              If code drops comments over time and we don’t get an associated rise in readability (that is, presuming the comments would still have been useful), isn’t that a recipe for an unmaintainable legacy codebase? The team’s experience level is not necessarily trending up over time: churn will bring domain experience down while changing language experience more or less randomly, as hires of wildly differing experience levels onboard.

              1. 1

                Oh, I believe I might have misinterpreted your original question. As a matter of fact I assume the expertise of the team is (ideally) not fixed.

                I can see the scenario you’ve described happening to projects that use languages with low overall adoption and poor developer retention, but I would not expect a reduction in comment density in this scenario—at least not one that is correlated with the mentioned factors.

        1. 1

          The author claims this is the shortest:

          b=( "${a[@]}" ) # copy array
          

          But this is 27 characters (including the comment).

          The following is quite a bit shorter, and clearer:

          copy_of_a=( "${a[@]}" )
          

          If you want it clearer still, define a function. Then you don’t need “# copy array” every time you want to copy an array. DRY.

          1. 7

            Then you don’t need “# copy array” every time you want to copy an array. DRY.

            I highly recommend you read the post more carefully. It really seems like you have missed the point here.

            1. 5

              But this is kind of low-hanging fruit, isn’t it? Now assume that a and b bear some semantic context, say, about the problem domain. So you are basically replacing b with a potentially confusing variable name just for the sake of removing the comments.

              I agree that this might be a better option in some cases—maybe most of them—but it is not a silver bullet.

            1. 1

              Some combination of this with point-free programming would be beautiful.

              1. 5

                There are two possible reasons why people may insist on non-contingent advice: they may be parroting platitudes or, more interestingly, they may be treating it as a precondition for providing contingent advice later.

                The latter can be also interpreted as (mostly healthy) insecurity and frustration. An expert in, say, unit testing, may not know how to do their best job in environments where it is not widely adopted.

                It is also a sign that their expertise might not have been properly considered at the moment they joined the project, which is absolutely not their fault.* In that scenario, incentivizing them to provide contingent advice under different circumstances will be pretty hard.

                So I would take this advice with a grain of salt. Although it might look great in the short term, and eventually lead to a working product, it will only unnecessarily increase stress on (well-intentioned) hedgehogs.

                * Also, it is usually recruitment stakeholders who look for professionals in a very trendy field of expertise and then have them join projects that cannot incorporate their knowledge anytime soon.

                1. 7

                  I just add comments, it’s simpler and it seems like there’s fewer things that can go wrong.

                  1. 2

                    I beg to differ. One question would be, what is substantially different between bash and other scripting languages that makes it more prone to a single-file script where different sections of the code are only indicated by comments?

                    1. 6

                      I like my configuration to be “easily portable”; that is, copying a single file to a new machine is a lot less work than copying six files. And sure, there are a myriad ways to deal with this; I even wrote an entire script for this myself. But sometimes it’s just convenient to be able to copy/scp (or even manually copy/paste) a single file and just be done with it.

                      I used to use the multi-file approach, but went back to the single-file one largely because of this.

                      I also somewhat prefer larger files in general (with both config files and programming), rather than the “one small thing per file”-approach. Both schools of thought are perfectly valid, just a matter of personal preference. I can open my zshrc or vimrc and search and I don’t have to think about which file I have to open. I never cared much for the Linux “/etc/foo.d”-approach either, and prefer a single “/etc/foo.conf”.

                      1. 1

                        How I personally use it is that the non-portable snippets go to ${BASHRC_D} instead. Having worked as a developer in projects with very heterogeneous stacks, I got fed up of the constant changes to ~/.bashrc that would have to be cleaned up sooner or later.

                        My usual workflow when I am working on a new machine temporarily is to copy only ~/.bashrc. Any additional config is added to ${BASHRC_D} as needed.

                        1. 1

                          copying a single file to a new machine is a lot less work than copying six files

                          Is it? I have all of my configs in a git repo, so it’s a single command for me to git clone to a new machine. Copying a single file is maybe simpler if that’s the only operation that you do, but copying and versioning a single file is no easier than copying and versioning a directory. The bash config part of my config repo has a separate file or directory per hostname, so I can have things in there that only make sense for a specific machine, but everything is versioned as a whole.

                          I never cared much for the Linux “/etc/foo.d”-approach either, and prefer a single “/etc/foo.conf”.

                          For files that are edited by hand, this is very much a matter of personal preference. The big difference is for files that need to be updated by tools. It’s fairly trivial to machine-edit FreeBSD’s rc.conf in theory, because it’s intended to be a simple key-value store, but it’s actually a shell script, so correctly editing it with a tool has a bunch of corner cases and isn’t really safe unless you have a full shell script parser and symbolic execution environment (even for a simple case such as putting the line that enables the service in an if block: how should a tool that uninstalls that service and cleans up the config handle it?). Editing rc.conf.d by hand is a lot more faff (especially since most files there contain only one line), but dropping a new file in there or deleting it is a trivial operation for a package installer to do.

                        2. 2

                          Same thing I’d say about Python: it’s an interpreted scripting language where multiple files are only loosely linked together and there’s no compilation or verification step. At least you usually have the source files right next to each other, but in this case they’re associated using environment variables. Just feels like overengineering.

                          1. 1

                            (…) there’s no compilation or verification step

                            Still no difference from a single-file approach. So I’m afraid I fail to see how this is a relevant aspect in making such a choice.

                            At least usually you have source files right next to each other but in this case they’re associated using environment variables.

                            Environment variables like ${BASHRC_D} are nothing but a convenience. It could be replaced by local variables or sheer repetition with no downside. It is a matter of personal preference.

                            Just feels like overengineering.

                            There is no engineering involved in that at all, so calling it “overengineering” feels like overestimation :)

                      1. 17

                        The pipe is one-character glue: “|”.

                        I think that this is mistaken. Pipe makes it easy to connect the output of one program to the input of another, but that is not the same as “gluing” them - you have to do text processing to actually extract the fields of data out of the output from one command and convert it into the right format for input to another.

                        This is why you see tr, cut, sed, awk, perl, grep, and more throughout any non-trivial shell script (and even in many trivial ones) - because very, very few programs actually speak “plain text” (that is, fully unstructured text), and instead speak an ad-hoc, poorly-documented, implementation-specific, brittle semi-structured-text language which is usually different than the other programs you want them to talk to. Those text-processing programs are the Unix glue code.

                        The explicit naming of “glue code” is brilliant and important - but the Unix model, if anything, increases the amount of glue code, not decreases it. (and not only because of the foolish design decision to make everything “plain” text - the “do one thing” philosophy means that you have to use more commands/programs (which are equivalent to functions in a program) to implement a given set of features, which increases the amount of glue code you have to use - gives you a larger n, which is really undesirable if your glue code scales as n^2)

                        1. 2

                          Not sure if this is a valid criticism of your comment, but I think there is a difference between the initial idea of program composition by people like Doug McIlroy, the creator of Unix pipes, and the way the auxiliary tools you mentioned were reified. So part of the accidental complexity brought by the mentioned ad-hoc formats is not very different from ordinary glue as described in the original post.

                          So I might be missing something but the power is not so much in the single |—it is only an elegant token—but in | p_1 | … | p_n |. It can be converted to a mnemonic | q | and be reused, say, in p_0 | q | p_n+1. But I will come back to it later.

                          This is something that could not be done in traditional languages used for programming-in-the-large until very recently. OOP tried to address that in a way but ultimately failed. The very same logic applies: .f_1().f_2(). … .f_n(). can be converted to a mnemonic .g(). and reused in f_0().g().f_nPlus1().

                          Trying to come back to the original subject, I think one of the reasons OOP failed is that, in order to make code reusable in the real world, programmers would have to account for all the relevant permutations of the intermediate parameters, which is impractical. It is easier (in the short term at least) to write glue code in the usual form. Maybe f_i()’s third parameter cannot be a FooController if f_j()’s first parameter is a BarView, or you should not call f_k() in between both if you have BazView and BazController instead, because it has some side effects you want to avoid. So you go ahead and write exactly the code you need. (Edit: or you write even more code, and create beautiful class diagrams to leverage this compositional approach. That will most likely amount to the same net amount of code when compared to the “glue” approach, not to say anything about the extra amount of work.)

                          Now, this is not to say that it doesn’t happen in the wonderful Unix universe where everything is a file, it does, but in this case, program composition is first-class, maybe due to evolutionary pressure: after all, the usual scenario in a Unix system has always been that you need to get two separate programs with not much shared logic other than OS primitives to talk to each other. That same kind of pressure did not apply to mainstream OO languages/environments. They were born in the middle of the personal computer revolution, and some even predate public access to the Internet. Monoliths were the norm and, in this case, shared logic is a mere problem of code encapsulation.

                          Well, I still need to sleep over this :)

                          1. 2

                            So I might be missing something but the power is not so much in the single |—it is only an elegant token—but in | p_1 | … | p_n |. It can be converted to a mnemonic | q | and be reused, say, in p_0 | q | p_n+1.

                            Back in my university days, one of the class projects was a Unix shell, and for extra credit one could add conditionals and loops [1]. I already had a language I had written (for fun) and it took just a few hours to add Unix commands as a first class type to the language. So one could do:

                            p1 | p2 | p3
                            

                            and not only execute it, but save the entire command in a variable and compose it:

                            complex1 = p1 | p2 | p3
                            complex2 = p4 | p5 > "filename"
                            c = complex1 | complex2 
                            exec (c)
                            

                            (That’s not the exact syntax [2] but it gets the point across—it also avoided the batshit crazy syntax modern shells use to redirect stderr to a pipe, but I digress). The issue is that I found it to be of limited use overall. Yes, it was enough to get me an “A+, and stop with the overkill” on the project, but that was about it. Once a pipeline is composed, then what? If it does a bit too much, it’s hard to shave off what’s not needed. If you need to do something in the middle, it’s hard to modify.

                            Another example—I use LPEG [3] and it composes beautifully, but there are downsides. I deal with URIs at work, and yes, I have plenty of LPEG code to parse the various ones we deal with (sip: and tel:, which have their own structure). I was recently given a new URI type to parse, partly based on one we already deal with. But I couldn’t reuse that particular LPEG because the draft specification we have to use lied [4], so I couldn’t use composition, only “cut-and-paste”. Grrr.

                            I also try to apply such stuff to work. Yes, the project I’m involved with has a bunch of incidental complexity, but that’s because creating a thread to handle a transaction was deemed “too expensive” so there’s all this crap involving select() [5] and a baroque state machine abstraction in C++ to avoid “expensive threads” because we have to call multiple different databases (at the same time, because we have real tight deadlines) to deal with the request, never mind the increasing insanity of the “business logic.” What it does is easy to describe—given a phone number, look up a name and a reputation (requiring two different servers) based on the phone number. How it works is not easy to describe. Is the entire app glue? It could be argued either way really.

                            [1] A simple “execute this command with redirection or pipes” was an individual project; the conditionals and loops bumped it up to a group project—I was the only one who did the “group project.”

                            [2] My language was inspired by Forth.

                            [3] http://www.inf.puc-rio.br/~roberto/lpeg/ It stands for “Lua Parsing Expression Grammars”

                            [4] It specified a particular rule from RFC-3966, but that rule included additional data that’s not part of the draft specification. It’s like someone was only partially aware of RFC-3966.

                            [5] The concept, not the actual system call.

                            1. 1

                              Once a pipeline is composed, then what? If it does a bit too much, it’s hard to shave off what’s not needed. If you need to do something in the middle, it’s hard to modify.

                              This is similar to my comparison to OOP. The only difference, I think, is that the mismatch between object interfaces usually introduces higher “impedance”. Again, it is not that the problem does not exist in Unix; it is only exacerbated in OO environments. And this is probably because of the evolutionary pressure I described in my original reply, etc, etc.

                          2. 2

                            It’s true that you sometimes have to massage the output of one program to match the input of another. So the glue isn’t “free”.

                            However, I’d claim that the glue is often less laborious than you’d see in other non-Unix/non-shell contexts. A little bit of sed or awk can go a long way.

                            In Unix, the glue seems linear; in some OOP codebases, the glue seems quadratic. I can’t find it now, but Steve Yegge has a memorable phrase that Java is like Legos where every piece is a different shape … while in Unix the Legos actually do fit.

                            It does have disadvantages (which can be mitigated), but O(n) glue is a big difference from O(n^2).

                          1. 2

                            I started with something like this but it grew when I needed to understand what was taking so much time when I started a new shell. That turned into this benchmarking code path.

                            1. 1

                              In my case, it is more an exercise in proper separation than scale. I don’t usually keep more than half a dozen of very short files in the configuration dir.

                              More than enabling the maintenance of several config files, as I mentioned in the title, it helps me ensure that my ~/.bashrc doesn’t become a mess. It also facilitates reuse across different machines.

                            1. 3

                              Or, apply minimalism. My bashrc looks roughly like:

                              PS1='\$ '
                              
                              1. 4

                                Minimalism is a valid option. But, in this case, I can’t help but think about how to enable or disable a behavior through an environment variable, as presented in the article. Should it be configured manually before execution? If so, how do you keep track of these variables so you don’t forget them—or don’t execute them more than once when that is critical?

                              1. 22

                                I too do something similar:

                                for i in ~/.bashrc.d/[0-9]*; do
                                    . "$i"
                                done
                                

                                You can control the order of sourcing this way.

                                1. 6

                                  Your glob is nice because it limits valid names to a set which is very distinguishable from ordinary helper scripts (in order to be sourced, the filename must start with a digit).

                                  In my particular experience, I have never hit a case where I had to bother with the sourcing order but, since bash globs are sorted by default, I could use this very same approach without changing the bootstrap snippet.

                                1. 14

                                  Apparently the “Submit Story” form has eaten the .md extension from the link and I didn’t notice it. It should be readable in pretty-printed Markdown as well by re-adding it. Would some moderator edit the link, please? :)

                                  Edit: link to the pretty-printed version: https://write.as/bpsylevc6lliaspe.md

                                  1. 3

                                    That’s neat—didn’t know write.as had that feature.

                                  1. 3

                                    In the linked examples, what is the distinction between this technique and IoC/dependency injection? (is there an intended distinction?)

                                    The test implementation also seems like it would fail if you appended more than once.

                                    1. 1

                                      I have the same question. Not only are the examples similar to what I would expect from somebody explaining the dependency inversion principle, but the title of the posted article is also reminiscent of the good old “rely on abstractions, not on concretions” concept.

                                      Most likely I am missing something, but other than as a “koan” that works as a tool for thought even if it is not immediately comprehensible, the article leaves room for confusion by linking the jargon in such an odd manner.

                                    1. 5

                                      Many objects with getters/setters are just structs with extra steps. As much as Meyer gets cited, most people seem to have skipped over the Uniform Access Principle when bringing OOP to languages.

                                      Getters and setters are a violation of UAP and a major failing of most languages which allow OOP, with the notable exceptions of Ruby, Objective-C and C#.

                                      1. 2

                                        Can you expand on what you mean here? I thought the whole argument for getters and setters was to make field access by clients be method calls, to follow UAP?

                                        The idea being that by making clients do ‘x.getFoo()’ instead of ‘x.foo’ you leave space to refactor without breaking clients later on.

                                        i.e. in what way are getters/setters a violation of UAP?

                                        In my mind the thing I disagree with is not getters and setters; it’s UAP that’s the problem.

                                        1. 3

                                          If I understand correctly, “planning ahead” and using getter and setter methods are a workaround for the lack of UAP; it’s being treated as a property of a language, not of a program. Ruby and Objective-C don’t allow you to access members without going through a method (they force UAP), while C# lets you replace member access with methods since it supports member access syntax for methods.

                                          1. 1

                                            C# lets you replace member access with methods since it supports member access syntax for methods.

                                            Python has the same feature (in fact, I’m pretty sure it had it first). You can start with a normal member, then replace it with a method via the property decorator and, if you want to, implement the full set of operations (get, set, delete) for it without breaking any previous consumers of the class.

                                            Of course, Python also doesn’t actually have the concept of public versus private members, aside from socially-enforced conventions, but your concern seems to be less about whether someone can find the private-by-convention attributes backing the method, and more about whether x.foo continues to work before and after refactoring into a method (which it does, if the new foo() method is decorated with property).
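
                                            To illustrate, here is a minimal sketch of that kind of refactor (the class and attribute names are made up): a plain attribute in the first version becomes a property in the second, and client code keeps using attribute syntax throughout.

                                            class Account:
                                                """First version: balance is a plain attribute."""
                                                def __init__(self, balance: float) -> None:
                                                    self.balance = balance


                                            class AccountV2:
                                                """Later version: balance becomes a property; callers are unchanged."""
                                                def __init__(self, balance: float) -> None:
                                                    self.balance = balance  # goes through the setter below

                                                @property
                                                def balance(self) -> float:
                                                    return self._balance

                                                @balance.setter
                                                def balance(self, value: float) -> None:
                                                    if value < 0:
                                                        raise ValueError('balance cannot be negative')
                                                    self._balance = value  # private-by-convention backing attribute


                                            a = AccountV2(10.0)
                                            a.balance += 5.0  # same syntax as the plain-attribute version
                                            print(a.balance)  # 15.0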

                                            1. 1

                                              Of course, Python also doesn’t actually have the concept of public versus private members, aside from socially-enforced conventions

                                              I’m genuinely curious, as at first glance you seem to be an advocate of this, what’s the benefit of socially-enforced conventions over compiler-enforced privacy? Also, another thing I’ve been curious about not having programmed much in a language with these semantics, how does a language like Python handle something like the following:

                                              There’s a class I want to extend, so I inherit it. I implement a “private member” __name. The base class also implemented __name, does my definition override it?

                                              I’ve been wondering about that because if that’s the case, it seems like it would require people to know a lot of implementation details about the code they’re using. But for all I know, that’s not the case at all, so I’d be happy to hear someone’s perspective on that.

                                              1. 3

                                                There’s a class I want to extend, so I inherit it. I implement a “private member” __name. The base class also implemented __name, does my definition override it?

                                                It doesn’t. Double underscores mangle the attribute name in this case by prepending _<class name> to the original attribute name, which obfuscates external access. The attribute is only accessible by its original name inside the class where it is declared.

                                                Python docs: https://docs.python.org/3/tutorial/classes.html#private-variables
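
                                                A tiny sketch of what that mangling looks like in practice (class names made up):

                                                class Base:
                                                    def __init__(self):
                                                        self.__name = 'base'      # stored as _Base__name

                                                    def base_name(self):
                                                        return self.__name        # mangled to self._Base__name


                                                class Derived(Base):
                                                    def __init__(self):
                                                        super().__init__()
                                                        self.__name = 'derived'   # stored as _Derived__name, no clash

                                                    def derived_name(self):
                                                        return self.__name        # mangled to self._Derived__name


                                                d = Derived()
                                                print(d.base_name(), d.derived_name())  # base derived
                                                print(sorted(vars(d)))  # ['_Base__name', '_Derived__name']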

                                                1. 2

                                                  I’m genuinely curious, as at first glance you seem to be an advocate of this, what’s the benefit of socially-enforced conventions over compiler-enforced privacy?

                                                  It was my first day at the new job. I’d been shown around the office, and now I was at my new desk, laptop turned on and corporate email set up. I’d been told my first ticket, to help me get to know the process, would be changing the user-account registration flow slightly to set a particular flag on certain accounts. Easy enough, so I grabbed a checkout of the codebase and started looking around. And… immediately asked my “onboarding buddy”, Jim, what was going on.

                                                  “Well, that’s the source code”, he said. “Yeah, but is it supposed to look like that?” “Of course it is, it’s encrypted, silly. I thought you were supposed to be coming in with years of experience in software development!” Well, I said I’d seen some products that shipped obfuscated or encrypted code to customers, but never one that stored its own source that way. “But this way you have proper access control! When you’re authorized to work on a particular component, you reach out to Kevin, who’s the senior engineer on our team, and he’ll decrypt the appropriate sections for you, then re-encrypt when you’re done. That way you never see or use any code you’re not supposed to know about. It’s called Data Hiding, and it’s a fundamental part of object-oriented programming. Are you sure you’ve done this before?”

                                                  I sighed. And then recognition dawned. “Hey, wait”, I said, “this isn’t really encrypted at all! It’s just ROT13! Look, here this qrs ertvfgre_nppbhag is actually just def register_account…”

                                                  THWACK!

                                                  I’d been pointing at the code on my screen excitedly, and didn’t notice someone sneaking up behind me. Until he whacked me across the fingers, hard, with a steel ruler. “YOU ARE NOT AUTHORIZED TO READ THAT CODE!” he yelled, and then walked away.

                                                  “Who was that?”

                                                  “That was Kevin, the senior engineer I told you about. You really should be more careful, you’ll get written up to HR for an access violation. And if you accumulate three violations you get fired. Maybe they’ll let this one slide since you’re new and obviously inexperienced.”

                                                  “But how does anyone get anything done here?”, I asked.

                                                  “I told you – you ask Kevin to decrypt the code you’re supposed to work on.”

                                                  “But what if I need to use code from some other part of the codebase?”

                                                  “Then Kevin will liaise with senior engineers on other teams to determine whether you’re allowed to see their code. It’s all very correct according to object-oriented design principles!”

                                                  I goggled a bit. Jim finally said, “Look, it’s obvious to me now that you’ve never worked somewhere that followed good practices. I’m not going to tell on you to HR for whatever lies you must have put on your résumé to get hired here, but maybe you could tell me what you used to do so I can help you get up to speed on the way professionals work.”

                                                  So I explained that at previous jobs, you could actually see all the code when you checked it out, and there was documentation explaining what it all did, how to perform common tasks, what APIs each component provided, and so on, and you’d look things up and write the code you needed to write for your tasks and file a pull request that eventually got checked in after review.

                                                  Now Jim was goggling at me. “But… what if someone used the code in a way the original team didn’t want it to be used? How would you protect against that?”

                                                  “Well, there were conventions for indicating and documenting which APIs you were committing to support and maintain, and the policy was anyone could use those APIs any time. But if you needed something that wasn’t provided by any supported API, you’d talk to the team that wrote the component and work something out. Maybe they would say it was OK to use a non-supported API as long as you took responsibility to watch for changes, maybe they’d work with you to develop a new supported API for it, or come up with a better solution.”

                                                  Jim couldn’t believe what I was telling him. “But… just knowing which team wrote some other code is a violation of the Principle of Least Knowledge! That’s a very important object-oriented principle! That’s why everything that crosses boundaries has to go through Kevin. Why, if you could just go talk to other teams like that you might end up deciding to write bad code that doesn’t follow proper object-oriented principles!”

                                                  I tried my best to explain that at my previous jobs people trusted and respected each other enough that there wasn’t a need for fanatically-enforced controls on knowledge of the code. That we did just fine with a social-convention-based system where everybody knew which APIs were supported and which ones were “use at your own risk”. That there certainly weren’t senior engineers wandering among the desks with steel rulers – that senior engineers had seen it as their job to make their colleagues more productive, by providing tools to help people write better code more quickly, rather than being informational bottlenecks who blocked all tasks.

                                                  After I finished explaining, Jim shook his head. “Wow, that sounds awful and I bet the code they produced was pretty bad too. I bet you’re glad to be out of those old jobs and finally working somewhere that does things right!”

                                                  1. 1

                                                    So just to make sure I’m following, your argument is that if you need to use something that’s not included in the public API, compiler-enforced privacy requires you to talk to the team that developed the code if you need an extension to the API, while convention-enforced privacy requires that in order to make sure you don’t break anything you… talk to the team that developed the code so that you can work out an extension to the API?

                                                    1. 1

                                                      My argument is that in languages with enforced member-access/data-hiding, I can’t even think about using a bit of API that hasn’t been explicitly marked as available to me. If I try it, the compiler will thwack me across the hand with a steel ruler and tell me that code is off-limits. My only options are to implement the same thing myself, with appropriate access modifiers to let me use it, or somehow convince the maintainer to provide public API for my use case, but even that won’t happen until their next release.

                                                      In Python, the maintainers can say “go ahead and use that, just do it at your own risk because we haven’t finalized/committed to an API we’re willing to support for that”. Which really is what they’re saying when they underscore-prefix something in their modules. And Python will let me use it, and trust that I know what I’m doing and that I take on the responsibility. No steel rulers in sight.

                                                      A lot of this really comes down to Python being a language where the philosophy is “you can do that, but it’s on you to deal with the consequences”. And that’s a philosophy I’m OK with.

                                              2. 1

                                                I have mixed feelings about UAP, because I want to know when accessing a field is a trivial operation, and when it can run some expensive code.

                                                If a field is just a field, then I know for sure. If it could be a setter/getter, then I have to trust that author of the class isn’t doing something surprising, and will not make it do something surprising in the future.

                                                1. 1

                                                    You can’t know that in languages like Java, which default to getX() and setX() methods for even the simplest field access, if only to not break their interface when they need to add a simple range guard.

                                                  Languages without UAP will go to such lengths to emulate it using methods that you will seldom see a bare field exposed, at which point you can no longer know if a simple getter will return a local field or fire nuclear missiles.

                                                  1. 1

                                                    Yeah, IMO it’s a terrible idea for multiple reasons.

                                                    One, like you’re saying, it gives up a super powerful tool for improving readability in client code. If you’re using a language that has “property accessor” nonsense, every access to fields provided by a library may - for all you know - throw IO exceptions or have any other arbitrary behavior. With method calls being explicitly separate, you add a huge aid in optimizing for readers by reducing the number of possible things that can happen.

                                                    Two, it makes library authors think they can swap client field access for computation without breaking backwards compatibility, which is some sort of post-modernist academic fiction.

                                            1. 9

                                              Implementing accessors like these in C++ is a huge code smell IMO, especially if you need a setter. In these cases, most of the logic in the setter method should be in the constructor of a specialized type pretty much like Latitude or SphericalCoordinate, but with copy/move constructors and overloaded assignment operators—instead of operator() overloads—just like the rule of three/five tells us to. And of course the member in question should be of that type instead.

                                              1. 4

                                                Some of the examples provided in the text look like watered-down versions of real use cases which happen to benefit from the proposed refactorings. But, for instance, if one has to check more than one error condition, that will lead to either a bunch of cascading ifs or an unreasonable number of auxiliary functions. At that point, the readability benefits of keeping separate branches of the same if clause vanish quickly.

                                                1. 1

                                                  Since the point of an example is to get a point across, this is probably true, but regarding the case with the multiple errors, I would be pretty happy with

                                                  if (error1) {
                                                     throw exception1
                                                  } else if (error2) {
                                                     throw exception2
                                                  } else {
                                                     doStuff()
                                                  }
                                                  

                                                  (The rule for not conflating is only valid for “multiple conditions that are dependent on one another” )

                                                  Otherwise, I would be happy to see some examples where the rules don’t work. After all, we know that every rule has exceptions.

                                                  1. 1

                                                    So, I was thinking of scenarios such as this one, which is a very common pattern in C. Every single function call can raise an error, and its result in case of success is fed into the next call.

                                                    I tried to follow your recommendations and it resulted in extremely unidiomatic C.

                                                    In these cases, early returns—or even gotos jumping to the cleanup section—are much more readable.

                                                    fd = open(path, O_RDONLY);
                                                    if (fd < 0) {
                                                      status = -1;
                                                    } else if ((buf = malloc(sz)) == NULL) {
                                                      status = -2;
                                                    } else if ((n = read(fd, buf, sz)) < 0) {
                                                      status = -3;
                                                    } else {
                                                      /* TODO consume `buf` and `n` */
                                                      status = 0;
                                                    }
                                                    /* TODO cleanup */
                                                    return status;
                                                    
                                                    1. 2

                                                      I am not a C programmer, but for me this looks pretty readable.

                                                1. 1

                                                  Very nice article :) One thing I couldn’t help but look sideways at is the use of NoReturn as a function argument type to assert_never—I know it is not a big deal, but I went to check mypy source code for its internal representation of the bottom type.

                                                  NoReturn is a possible representation of the bottom type (yes, that’s weird). The bottom type itself is named UninhabitedType. I will spare you the details, but that type is equivalent to an empty union, Union[()], which kind of makes sense. I was surprised Guido himself did not point it out in the respective GitHub issue thread, maybe because that’s a minor detail and he didn’t bother to.

                                                  So, to please my inner armchair type theorist, I would probably replace that signature with

                                                  def assert_never(value: 'Union[()]') -> NoReturn: ...
                                                  

                                                  Edit: the original snippet was not enough to get it working at runtime; the quotes around the argument type annotation are required. Please check the full explanation in the replies below.

                                                  The emitted error message also gets slightly more intuitive, since NoReturn is replaced by <nothing>:

                                                  ~$ mypy src/exhaustiveness_check.py
                                                  src/exhaustiveness_check.py:18: error: Argument 1 to "assert_never" has incompatible type "Literal[OrderStatus.Scheduled]"; expected <nothing>
                                                  Found 1 error in 1 file (checked 1 source file)
                                                  
                                                  1. 2

                                                    That’s a great idea! I updated the article with your suggestion

                                                    https://hakibenita.com/python-mypy-exhaustive-checking#updates

                                                    1. 2

                                                      Hi, just an update because I noticed things get more complex at runtime.

                                                      If we use the following annotation format:

                                                      def assert_never(value: Union[()]) -> NoReturn: ...
                                                      

                                                      We get the following error at runtime:

                                                      $ python src/exhaustiveness_check.py
                                                      Traceback (most recent call last):
                                                        File "src/exhaustiveness_check.py", line 9, in <module>
                                                          def assert_never(value: Union[()]) -> NoReturn:
                                                        File "/usr/lib/python3.7/typing.py", line 251, in inner
                                                          return func(*args, **kwds)
                                                        File "/usr/lib/python3.7/typing.py", line 344, in __getitem__
                                                          raise TypeError("Cannot take a Union of no types.")
                                                      TypeError: Cannot take a Union of no types.
                                                      

                                                      This is because Python actually attempts to construct a typing.Union object to compose the annotation. That can be avoided by having the argument type annotation as a string:

                                                      from typing import NoReturn, Union
                                                      
                                                      def assert_never(value: 'Union[()]') -> NoReturn:
                                                          raise AssertionError(f'Unhandled value: {value} ({type(value).__name__})')
                                                      

                                                      One other option is the snippet below, which is certainly much more verbose than the original solution; besides that, it introduces different function types at compile and runtime.

                                                      from typing import NoReturn, Union, TYPE_CHECKING
                                                      
                                                      if TYPE_CHECKING:
                                                          def assert_never(value: Union[()]) -> NoReturn: ...
                                                      else:
                                                          def assert_never(value) -> NoReturn:
                                                              raise AssertionError(f'Unhandled value: {value} ({type(value).__name__})')
                                                      

                                                      The advantage of these approaches is that at least they do not contradict PEP 484, as NoReturn is only used as a return annotation.

                                                  1. 1

                                                    Very nice! MyPy’s flow sensitive type checking is indeed powerful.

                                                    I also like it for the nullable checks, which is related to this argument [1]. If the nullable type is flow sensitive, that’s basically what I want, and it’s useful in real code.

                                                    [1] https://lobste.rs/s/hek0ym/why_nullable_types

                                                    1. 1

                                                      It actually is. The only gotcha is that sometimes you must rewrite your code in order to allow the type checker to figure things out. For instance, consider the two functions in the following snippet:

                                                      from typing import Dict, Optional
                                                      
                                                      def dynamic_option_check(d: Dict[str, Optional[str]]) -> str:
                                                          return d['key'] if d.get('key', None) is not None else ''
                                                      
                                                      def static_option_check(d: Dict[str, Optional[str]]) -> str:
                                                          return d['key'] if 'key' in d and d['key'] is not None else ''
                                                      

                                                      … And the following type checker output:

                                                      $ mypy src/optional_test.py
                                                      src/optional_test.py:4: error: Incompatible return value type (got "Optional[str]", expected "str")
                                                      Found 1 error in 1 file (checked 1 source file)
                                                      

                                                      Type checking fails for dynamic_option_check() because mypy cannot narrow d’s type from the d.get() call. static_option_check() works fine, though, as we are explicitly testing whether d['key'] is not None.
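
                                                      For completeness, the kind of rewrite I had in mind: binding the lookup to a local variable lets mypy narrow the Optional away (the function name is made up):

                                                      from typing import Dict, Optional

                                                      def narrowed_option_check(d: Dict[str, Optional[str]]) -> str:
                                                          value = d.get('key')  # inferred as Optional[str]
                                                          return value if value is not None else ''  # narrowed to str in the true branch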

                                                    1. 6

                                                      I like it.

                                                      This general space (maybe “reheating cold context”?) has been interesting to me (read: my Achilles’ heel?) for a while.

                                                      Things I already do or have done:

                                                      • open a terminal tab for a distinct project (sometimes more than one, for distinct sub-tasks/sub-projects)
                                                      • keep tabs around for ongoing but inactive projects, so that I can review what I was doing
                                                      • working on a project-oriented shell-history module to help sand down some rough edges for the above
                                                      • working on an ST3 extension to sand down some sharp edges around ST3 projects (group multiple to open/close them together; have a persistent list of these meta projects that won’t get scrolled off of the recent-project list each time I open a bunch of related work projects…)

                                                      I’ve also daydreamed about:

                                                      • some sort of editor plugin/extension that keeps a contextual log of what you’ve been touching
                                                      • some affordance for annotating shell history, and probably for grouping+annotating common command sequences (probably eventually part of or paired with the shell-history module) that (ideally) does things like:
                                                        • passively notice common patterns and prompt me to annotate them (I notice you run these 4 commands in this order a lot; would you like me to help you annotate it, create a shortcut (script? function? alias?) and remind you about it when I see you manually run the commands?)
                                                        • make it easy to see annotations + context (frequency, location, project, etc) by command/directory/project/etc.
                                                        • maybe notice when I’m fumbling around with a command (you probably don’t need two guesses :)
                                                        • maybe append/prepend/wrap/annotate the syntax help or manpage with my own invocations
                                                      1. 12

                                                        I am a bash history junkie somehow; when I notice a command is long but simple to come up with (e.g. a find invocation with several options), I’d rather keep it as a one-liner in my history. That means I don’t need to pollute $PATH with writable directories in order to reach these commands from the current working directory.

                                                        So, far from being an automated process, when I notice I will need to run my-lengthy-one-liner more than once over the next couple of hours, I annotate them like this:

                                                        : mnemonic ; my-lengthy-one-liner
                                                        

                                                        Then I can search for mnemonic on my shell history anytime I want to use that command.

                                                        1. 2

                                                          Oh, wow, that’s brilliant, thanks for sharing!

                                                        2. 2

                                                          Some useful tips there. I also keep tabs around (browser and terminal).

                                                          Terminal: it certainly helps being able to rename the title so context shows up in the tab. There is also a way to add colour to iTerm2 tabs for almost a tag system.

                                                          Browser-wise I use Tree-Style Tabs, which allows me to set a project “parent” tab, say the git repo, and then collapse its children when I’m not working on it.

                                                          As for shell history, I often find myself doing things along the lines of

                                                          % command -with weird -flags i -wont remember # some additional context here about what I’m doing so all my notes get written to my eternal shell history (which is also in git)
                                                          
                                                          1. 2

                                                            maybe notice when I’m fumbling around with a command (you probably don’t need two guesses :)

                                                            perhaps you want one or both of tldr and thefuck?

                                                            brew install tldr

                                                            https://github.com/nvbn/thefuck

                                                          1. 16

                                                            People like me have been saying this for quite some time. You could use traditional non-linear optimization techniques here to do even better than what the author’s simple random search does, for example gradient descent.

                                                            My old boss at uni used to point out that neural networks are just another form of interpolation, but far harder to reason about. People get wowed by metaphors like “neural networks” and “genetic algorithms” and waste lots of time on methods that are often outperformed by polynomial regression.
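
                                                            To make the comparison concrete, here is a toy sketch of plain gradient descent with numerical gradients on a made-up two-parameter objective (the real objective in the article comes from running the simulation, so this is only illustrative):

                                                            import numpy as np

                                                            def loss(params):
                                                                # Made-up stand-in for "how badly the car does" as a function of two
                                                                # controller parameters; the real thing would be a simulation run.
                                                                a, b = params
                                                                return (a - 1.3) ** 2 + 2.0 * (b + 0.4) ** 2

                                                            def numerical_gradient(f, x, eps=1e-6):
                                                                grad = np.zeros_like(x)
                                                                for i in range(len(x)):
                                                                    step = np.zeros_like(x)
                                                                    step[i] = eps
                                                                    grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
                                                                return grad

                                                            x = np.array([0.0, 0.0])
                                                            for _ in range(200):  # fixed step size, no line search
                                                                x = x - 0.1 * numerical_gradient(loss, x)

                                                            print(x)  # ends up near the minimum at (1.3, -0.4)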

                                                            1. 12

                                                              Most ML techniques boil down to gradient descent at some point, even neural networks.

                                                              Youtuber 3blue1brown has an excellent video on that: https://www.youtube.com/watch?v=IHZwWFHWa-w .

                                                              1. 3

                                                                Yep, any decent NN training algorithm will seek a minimum. And GAs are just very terrible optimization algorithms.

                                                                1. 1

                                                                  I’d say that only a few ML algorithms ultimately pan out as something like gradient descent. Scalable gradient descent is a new thing thanks to the advent of differentiable programming. Previously, you’d have to hand-write the gradients which often would involve investment into alternative methods of optimization. Cheap, fast, scalable gradients are often “good enough” to curtail some of the other effort.

                                                                  An additional issue is that often times the gradients just aren’t available, even with autodiff. In this circumstance, you have to do something else more creative and end up with other kinds of iterative algorithms.

                                                                  It’s all optimization somehow or another under the hood, but gradients are a real special case that just happens to have discovered a big boost in scalability lately.

                                                                2. 6

                                                                  A large part of ML engineering is about evaluating model fit. Given that linear models and generalized linear models can be constructed in a few lines of code using most popular statistical frameworks [1], I see no reason for ML engineers not to reach for a few lines of a GLM, evaluate fit, and conclude that the fit is fine and move on. In practice for more complicated situations, decision trees and random forests are also quite popular. DL methods also take quite a bit of compute and engineer time to train, so in reality most folks I know reach for DL methods only after exhausting other options.

                                                                  [1]: https://www.statsmodels.org/stable/examples/index.html#generalized-linear-models is one I tend to reach for when I’m not in the mood for a Bayesian model.
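
                                                                  For example, a logistic regression fit as a GLM with statsmodels really is just a couple of lines (synthetic data here, just to show the shape of the API):

                                                                  import numpy as np
                                                                  import statsmodels.api as sm

                                                                  # Synthetic binary outcome driven by two features.
                                                                  rng = np.random.default_rng(0)
                                                                  X = rng.normal(size=(500, 2))
                                                                  p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
                                                                  y = rng.binomial(1, p)

                                                                  model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
                                                                  result = model.fit()
                                                                  print(result.summary())  # coefficients, deviance, etc. for a quick fit check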

                                                                  1. 1

                                                                    Didn’t know about generalized linear models, thanks for the tip

                                                                  2. 5

                                                                    For a two-parameter model being optimized over a pretty nonlinear space like a hand-drawn track, I think random search is a great choice. It’s probably close to optimal and very trivial to implement, whereas gradient descent would require at least a few more steps.

                                                                    1. 3

                                                                      Hill climbing with random restart would likely outperform it. But not a bad method for this problem, no.
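
                                                                      Something along these lines, say (the objective is a made-up stand-in for the track score, and the constants are arbitrary):

                                                                      import random

                                                                      def hill_climb(loss, dim, restarts=20, steps=500, sigma=0.1):
                                                                          # Keep a candidate, try a small Gaussian perturbation, accept it only if
                                                                          # it improves the loss, and restart from a fresh random point a few times
                                                                          # to escape local minima.
                                                                          best_x, best_loss = None, float('inf')
                                                                          for _ in range(restarts):
                                                                              x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
                                                                              current = loss(x)
                                                                              for _ in range(steps):
                                                                                  candidate = [xi + random.gauss(0.0, sigma) for xi in x]
                                                                                  value = loss(candidate)
                                                                                  if value < current:
                                                                                      x, current = candidate, value
                                                                              if current < best_loss:
                                                                                  best_x, best_loss = x, current
                                                                          return best_x, best_loss

                                                                      print(hill_climb(lambda p: (p[0] - 0.5) ** 2 + abs(p[1]), dim=2))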

                                                                    2. 1

                                                                      I suppose people typically use neural networks for their huge model capacity, instead of for the efficiency of the optimization method (i.e. backward propagation). While neural networks are just another form of interpolation, they allow us to express much more detailed structures than (low-order) polynomials.

                                                                      1. 4

                                                                        There is some evidence that this overparameterisation in neural network models actually gets you something that looks like fancier optimisation methods[1], as well as acting as a form of regularisation[2].

                                                                        1. http://www.offconvex.org/2018/03/02/acceleration-overparameterization/
                                                                        2. http://www.offconvex.org/2019/10/03/NTK/
                                                                        1. 2

                                                                          The linked works are really interesting. Here is a previous article with a similar view: https://lobste.rs/s/qzbfzc/why_deep_learning_works_even_though_it

                                                                        2. 1

                                                                          neural networks […] allow us to express much more detailed structures than (low-order) polynomials

                                                                          Not really. A neural network and a polynomial regression using the same number of parameters should perform roughly as well. There is some “wiggle room” for NNs to be better or PR to be better depending on the problem domain. Signal compression has notably used sinusoidal regression since forever.

                                                                          1. 2

                                                                            A neural network and a polynomial regression using the same number of parameters should perform roughly as well.

                                                                            That’s interesting. I have rarely seen polynomial models with more than 5 parameters in the wild, but neural networks easily contain millions of parameters. Do you have any reading material and/or war stories about such high-order polynomial regressions to share?

                                                                            1. 3

                                                                              This post and the associated paper made the rounds a while ago. For a linear model of a system with 1,000 variables, you’re looking at 1,002,001 parameters. Most of these can likely be zero while still providing a decent fit. NNs can’t really do that sort of stuff.