Threads for effdee

  1. 5

    Alright, I’ve gotten “optimal” solutions for all of them except 6, 10, 13, and 14. Ended up coming up with the same solution for 6 and 14, so clearly I need to golf those more…

    • 6: 32 characters (22 is current best)
    • 10: 30 characters (27 is current best)
    • 13: 13 characters (11 is current best)
    • 14: 32 characters (18 is current best)

    Did any lobsters manage to optimize all of them, or beat the “absolute best”? :-)

    1. 2

      A quick Google search brought up All answers to with explanations.

    1. 1

      As predicted in the movie “Blade Runner” (which is actually set in 2019). Fascinating! :)

      1. 1

        Envy Code R seems to be missing.

        1. 3

          Possibly because it is not a free font and the author specifically disallows redistribution (as would be required for a web font):

          These files are free to download and use from but CAN NOT be redistributed either by other web sites or be included in your package, download, product or source repository

          This also means that if this site ever goes down and the author goes missing, the font is lost forever.

          1. 2

            Aww, a pity. That’s of course a good reason to omit it. Thanks for pointing it out.

          1. 1

            Thank you for your comment, I’ve merged these article submissions.

          1. 4

            Take a branch with only 3 small changes and it will get a whole lot of comments and suggestions. Take one with +100 changed files and it will get none.

            That’s a great example of bikeshedding.

            1. 2

              But it’s a real problem. Nobody reads long diffs. You want your code reviewed, don’t you? Then make it shorter. If it must be longer, then turn it into a lot of small diffs, each requiring no further or minimal context.

              1. 3

                Thankfully some people don’t buy into this self-defeating rhetoric and can and do read longer diffs when the changes required are longer. Constraining the length of a change works for some problems and under some conditions, but it’s not a fundamental good or even universally achievable.

                1. 5

                  It’s not self-defeating rhetoric. It’s just hard to pay attention when the work is longer. That’s not because people are lazy or stupid; it’s because we’re human, and we are simply not good at reading long, rambling stretches of someone else’s code all at once. When every line of a giant hairball diff involves a context switch, nobody is going to read it, not even the original author.

                  I’m not saying you can’t make long changes. I am saying that you should split up your long changes. Split them as much as possible so each change has the least context possible to be understood. Books have sentences, paragraphs, chapters. Code has functions, modules, source files, repositories. Code review should use commits as the demarcation.

                  Move the effort of making the diff understandable to the writer, not the reader.

                  1. 1

                    Obviously if a change is rambling, it could probably stand to be improved – but something can be long without being rambling.

                    When every line of a giant hairball diff involves a context switch, nobody is going to read that, not even the original author.

                    I think you may be projecting a little. To stack my anecdote alongside yours, I have both read and written longer changes that required a lot of context to understand. I agree that it takes longer, which perhaps means I won’t get to do it all in one sitting – but I can take notes about my thoughts, as I would encourage all engineers to do, and I can pick up where I left off.

                    Split them as much as possible so each change has the least context possible to be understood.

                    I agree this can be beneficial when it’s possible, I just don’t think it always is. I’ve definitely seen people err too far on the side of microscopic changes. While the tiny change at issue may seem correct in isolation, the broader context is often actually very important and by avoiding understanding it you’re not going to give or get a very thorough review.

                    Code review, like designing and writing the code in the first place, and like testing it, takes time and energy. There’s just no magic bullet when the goal is thoughtful, consistent, and rigorous change to the software.

                    1. 2

                      The data is on the side of shorter reviews.

                      Our results suggest that review effectiveness decreases with the number of files in the change set. Therefore, we recommend that developers submit smaller and incremental changes whenever possible, in contrast to waiting for a large feature to be completed.


                      Reviews should be of changes that are small, independent, and complete.

                      (based on data from )

                      There is no large code change that cannot be split up into incremental, meaningful changes. It takes training to recognise those atomic boundaries, but they exist, and using them is helpful for reviewers.

                  2. 1

                    This is one of the things I really like about Go: the language designers explicitly design features to enable easier incremental changes.

              1. 3

                Refuse to work on systems that […]


                1. 1

                  Sacrilege! That’s not an Amiga 1000 keyboard in the first picture! ;)

                  1. 1

                    #!/usr/bin/env bash

                    1. 14

                      I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files. And if the “file” is a directory, what do the filenames you read and write from/to it mean?

                      So is there really any difference between open(read("/net/clone")) and net_clone();? The author seems to say the former is more loosely coupled than the latter because the only methods are open and read on the noun that is the file…. but really, you are stating exactly the same thing as the “verb” approach (if anything, I’d argue it is more loosely typed than loosely coupled). If a new version wants to add a new operation, what’s the difference between making it a new file that returns some random data you must write code to interpret, and a new method that returns some data you must write code to use?

                      1. 24

                        So is there really any difference between open(read(“/net/clone”)) and net_clone();?

                        Yes: The fact that you can write tools that know nothing about the /net protocol, and still do useful things. And the fact that these files live in a uniform, customizable namespace. You can use “/net/tcp/clone”, but you can also use “/net.home/tcp/clone”, which may very well be a completely different machine’s network stack. You can bind your own virtual network stack over /net, and have your tests run against it without sending any real network traffic. Or you can write your own network stack that handles roaming and reconnecting transparently, mount it over /net, and leave your programs none the wiser. This can be done without any special support in the kernel, because it’s all just files behind a file server.

                        The difference is that there are a huge number of tools you can write that do useful things with /net/clone that know nothing about what gets written to the /net/tcp/* files. And tools that weren’t intended to manipulate /net can still be used with it.

                        The way that rcpu (essentially, the Plan 9 equivalent of VNC/remote desktop/ssh) works is built around this. It is implemented as a 90-line shell script. It exports devices from your local machine, mounts them remotely, juggles the namespace around a bit, and suddenly all the programs that speak the devdraw protocol are drawing to your local screen instead of the remote machine’s devices.
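To make the clone idiom concrete, here is a rough C sketch. Everything in it is illustrative: `netroot` stands in for a directory like /net/tcp, and the file layout is an assumption, not taken from a real Plan 9 system. The payoff shows up in the signature: because the interface is just files, pointing netroot at a scratch directory of ordinary files swaps in a fake network stack with no code changes.

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the Plan 9 "clone" idiom: opening and reading the clone
 * file allocates a new connection and yields its number; the per-
 * connection ctl/data files then live under netroot/N. Illustrative
 * only, not a faithful Plan 9 implementation. */
int clone_conn(const char *netroot, char *dir, size_t dirlen)
{
    char path[512], line[32];
    snprintf(path, sizeof path, "%s/clone", netroot);
    FILE *f = fopen(path, "r");           /* open = allocate a connection */
    if (f == NULL)
        return -1;
    if (fgets(line, sizeof line, f) == NULL) {
        fclose(f);
        return -1;
    }
    fclose(f);
    int n = atoi(line);                   /* read = learn its number */
    snprintf(dir, dirlen, "%s/%d", netroot, n);   /* e.g. /net/tcp/7 */
    return n;
}
```

Nothing above knows about real networking, so a test can point netroot at a throwaway directory containing an ordinary clone file: the file-system analogue of dependency injection.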

                        1. 5

                          You argue better than I can, but I’ll add that the shell is a human-interactive environment; C APIs are not. Having a layer that is human-interactive is neat for debugging and system inspection, though this argument gets somewhat weaker once you have Python bindings or some equivalent.

                          1. 1

                            I was reminded of this equivalent.

                          2. 1

                            But in OOP you can provide a “FileReader” or “DataProvider”, or just a “FilePath” that abstracts where the file is or what you are reading from. The simplest version would be the net_clone function above taking a char* file_path, but in an OOP language the char* itself, or how we read from whatever the char* names, can be abstracted too.

                            1. 2

                              Yes, but how do you swap it out from outside your code? The file system interface allows you to effectively do (to use some OOP jargon) dependency injection from outside of your program, without teaching any of your tools about what you’re injecting or how you need to wire it up. It’s all just names in a namespace.

                              1. 0

                                without teaching any of your tools about what you’re injecting or how you need to wire it up

                                LD_PRELOAD, JVM ClassPath…

                          3. 6

                            So is there really any difference between open(read(“/net/clone”)) and net_clone();?

                            Yes, there is. “/net/clone” is data, while net_clone() is code.

                            1. 4

                              I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files

                              Yes - but the read()/write() layer allows you to do useful things without understanding that higher-level protocol.

                              It’s a similar situation to text-versus-binary file formats. Take some golang code for example. A file ‘foo.go’ has meaning at different levels of abstraction:

                              1. golang code requiring a 1.10 compiler or higher (uses a shifted index expression)
                              2. golang code
                              3. utf-8 encoded file
                              4. file

                              You can interact with ‘foo.go’ at any of these levels of abstraction. To compile it, you need to understand (1). To syntax-highlight it you only need (2). To do unicode-aware search and replace, you need only (3). To count the bytes, or move/delete/rename the file you only need (4).

                              The simpler interfaces don’t allow you to do all the things that the richer interfaces do, but having them there is really useful. A user doesn’t need to learn a new tool to rename the file, for example.
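The layering can be made concrete in code. This sketch (function names are mine) answers a question at level (4), counting bytes, and one at level (3), counting UTF-8 code points, against the same buffer; neither needs to know anything about Go syntax.

```c
#include <stddef.h>

/* Level (4): a plain file of bytes; the length is all there is to know. */
size_t count_bytes(const unsigned char *buf, size_t len)
{
    (void)buf;   /* contents are irrelevant at this level */
    return len;
}

/* Level (3): a UTF-8-encoded file; count code points by skipping
 * continuation bytes, which have the bit pattern 10xxxxxx. */
size_t count_codepoints(const unsigned char *buf, size_t len)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if ((buf[i] & 0xC0) != 0x80)   /* not a continuation byte */
            n++;
    return n;
}
```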

                              If you compare that to an IDE, it could perhaps store all the code in a database and expose operations on the code as high-level operations in the UI. This would allow various clever optimisations (e.g. all caller/callee relationships could be maintained and refactoring could be enhanced).

                              However, if the IDE developer failed to support regular expressions in the search and replace, you’re sunk. And if the IDE developer didn’t like command line tools, you’re sunk.

                              (Edit: this isn’t just one example. Similar affordances exist elsewhere. Text-based internet protocols can be debugged with ‘nc’ or ‘telnet’ in a pinch. HTTP proxies can assume that GET is idempotent and that various caching headers have their standard meanings, without understanding your JSON or XML payload at all.)

                            1. 4

                              Too bad this article is totally Linux/mdadm-centric and does not mention other software RAID solutions, like the “rampant layering violation” called ZFS, FreeBSD’s graid, OpenBSD’s softraid, etc.

                              1. 7

                                I found that it caused me to check my phone/inbox/Slack more, not less.

                                This behavior is called an extinction burst.

                                1. 6

                                  The idea that a stable API surface area for kernel modules is “nonsense” is definitely one reason I’m keen to avoid seriously working on or even with Linux for the rest of my career. The article itself doesn’t appear to have actually used the word “nonsense”, but it definitely offers some false dichotomies; e.g., the idea that innovation and improvement can’t be had if you also want to provide stability, or that API/ABI stability is somehow a burden for those closed source operating systems to bear.

                                  In illumos, we have inherited a rich, stable interface for various kinds of kernel modules from our Solaris heritage. Though Solaris was closed source, illumos is not, and yet we continue to support the same kind of stability. We remove ancient things from time to time, but if you’re working on a device driver for a PCI network card or certain other classes of kernel module, and you stick to the stable interfaces, you should be well-served. Most recently we extended our USB framework to support XHCI controllers and USB 3.0, without needing to ditch support for existing USB device drivers.

                                  API/ABI stability is a critical variety of engineering empathy for the people who build software on top of the platform you provide.

                                  1. 2

                                    The article itself doesn’t appear to have actually used the word “nonsense”

                                    The HTML file itself is named stable-api-nonsense.html.

                                      1. 1

                                        Ah, so it is!

                                        Also, I had somehow missed this gem from the executive summary:

                                        You think you want a stable kernel interface, but you really do not, and you don’t even know it.

                                        The author must be great fun at parties.

                                    1. 5

                                      If you like this talk you might also be interested in his previous ones:

                                      1. 11

                                        The same guy also wrote an article about laptops:

                                        But the real future of the laptop computer will remain in the specialized niche markets. Because no matter how inexpensive the machines become, and no matter how sophisticated their software, I still can’t imagine the average user taking one along when going fishing.

                                        1. 1

                                          He’s right, but of course we’re all carrying computers vastly more powerful than anything available in 1985 around with us every day. Everything is impossible until it is obvious.

                                          1. 3

                                            != is probably the wrong operator in bcmp. Most implementations use xor.

                                            1. 1

                                              Using xor is just a tiny optimization (makes it 3 bytes smaller and, in theory, saves a clock cycle per iteration):

                                              < cmp    %al,%dl
                                              < setne  %al
                                              > xor    %edx,%eax
                                              1. 5

                                                Until you use a different compiler/CPU and end up with the equivalent of:

                                                if (a[i] != b[i])
                                                     rv |= 1;

                                                And suddenly you have data dependent branches again. That’s very unlikely, but since the point of the exercise is to avoid branching instructions, best to avoid potentially branching operators entirely.
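The point can be sketched in C (my own illustration, not anyone’s actual bcmp): accumulate differences with pure bitwise data flow, so the loop body contains no boolean condition a compiler could lower to a data-dependent branch.

```c
#include <stddef.h>

/* Branch-free comparison sketch: XOR each byte pair and OR the result
 * into an accumulator. Returns 0 when the buffers are equal, nonzero
 * otherwise; the loop does identical work in either case. */
int ct_diff(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char acc = 0;
    for (size_t i = 0; i < n; i++)
        acc |= a[i] ^ b[i];   /* nonzero iff the bytes differ */
    return acc;
}
```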