1. 1

    Wow, 28 people run BSD.

    1. 3

      Last time I posted a link to my blog on Lobsters, 1 out of 39 visitors came from OpenBSD!

      If the link was not about bash, I might have gotten even more visitors using OpenBSD.

    1. 4

      I absolutely love that the BSDs are switching to llvm. This makes me giddy like a school child.

      By switching to a full llvm toolchain, the BSDs will be able to do some really nifty things that simply cannot be done in Linux. HardenedBSD, for example, is working on integrating Cross-DSO CFI in base (and, later, ports). NetBSD is looking at deeper sanitizer integration in base. From an outsider’s perspective, it seems OpenBSD is playing catch up right now, but they’ve got the talent and the manpower to do so within a reasonable period of time.
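
      To make the CFI part concrete, here’s a tiny sketch of the class of bug it stops. This is my own illustration, not HardenedBSD’s actual integration; the flags in the comment are the generic ones from the Clang CFI docs, and a Cross-DSO build layers more machinery on top of them.

          /* Built with something like
           *   clang -flto -fvisibility=hidden -fsanitize=cfi demo.c
           * the mismatched indirect call below traps at runtime instead of
           * executing.  Plain C, with no CFI, happily makes the call. */
          #include <stdio.h>

          static int add_one(int x) { return x + 1; }

          int main(void)
          {
              /* Fine: the pointer type matches the function's type. */
              int (*ok)(int) = add_one;
              printf("%d\n", ok(41));

              /* Type confusion: reinterpret the same function as one taking a
               * string.  CFI's indirect-call check sees the mismatched
               * signature and aborts the process. */
              void (*bad)(const char *) = (void (*)(const char *))add_one;
              bad("oops");

              return 0;
          }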

      It’s my dream that all the BSDs switch fully to llvm as the compiler toolchain, including llvm-ar, llvm-nm, llvm-objdump, llvm-as, etc. Doing so will allow the BSDs to add some really nifty security enhancements. Want an enterprise OS that secures the entire ecosystem? Choose BSD.

      Linux simply cannot compete here. A userland that innovates in lockstep with the kernel is absolutely required to do these kinds of things. Go BSD!

      1. 3

        You’re overstating it. Most of the mitigation development of the past decade or two was for Linux. Most of the high-security solutions integrated with Linux, often virtualizing it. The most-secure systems you can get right now are separation kernels running Linux alongside critical apps. Two examples. Some of the mitigation work is also done for FreeBSD. Of that, some is done openly for wide benefit and some uses the BSD license specifically to lock down/patent/sue when commercialized. Quick pause to say thanks for your own work on the open side. :)

        So, what’s best for people depends on a lot of factors: what apps they want, what they’re trying to mitigate, stance on licensing, whether they have money for proprietary solutions or custom work, time for custom work or debugging if FOSS, and so on. One is not superior to the other. That could change if any company builds a mobile/desktop/server-class processor with memory safety or CFI built in, checking every sensitive operation. Stuff like that exists in CompSci for both OSes. Hardware-level security could be an instant win. Past that, all I can say is it depends.

        On embedded side, Microsemi says CodeSEAL works with Linux and Windows. CoreGuard, based on SAFE architecture, currently runs FreeRTOS. The next solution needs to be at least as strong at addressing root causes.

        1. 4

          Thanks for making me think a bit deeper on this subject. And thanks for the kind words on my own work. :)

          With a historical perspective, I agree with you. grsecurity has done a lot with regards to Linux security (and security in general). I think the entire computing industry owes a lot to grsecurity, especially those of us in infosec.

          With the BSDs (except FreeBSD) having the core exploit mitigations in place (ASLR, W^X), it’s time to move on to other, more advanced mitigations. There’s only so much the kernel can do to harden userland and keep performance in check. Thus, these more advanced exploit mitigations must be implemented in the compiler. The BSDs are positioning themselves to be able to adopt and tightly integrate compiler-based exploit mitigations like CFI. Due to Linux’s fragmentation, it’s simply not possible for Linux to position itself in the same way. HardenedBSD has already surpassed Linux as far as userland exploit mitigations are concerned. This is due in part to using llvm as the compiler toolchain.

          Microsoft is making huge strides as well. However, the PE file format, which allows individual PE objects to opt-in or opt-out of the various exploit mitigations, is a glaring weakness commonly abused by attackers. All it takes is for one DLL to not be compiled with /DYNAMICBASE, and boom goes the dynamite. Recently, VLC on Windows was found not to have ASLR enabled, even though it was compiled with /DYNAMICBASE and OS-enforced ASLR enabled, due to the .reloc section being stripped. Certain design decisions made decades ago by Microsoft are still biting them in the butt.
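
          For the curious, here’s a rough sketch (mine, not something from the VLC report) of how you might check those two things on a PE file: whether the /DYNAMICBASE bit is set, and whether a .reloc section survived. It leans on <windows.h> and assumes the PE’s bitness matches the tool’s own build; real tools like dumpbin do this far more carefully.

              #include <stdio.h>
              #include <string.h>
              #include <windows.h>

              int main(int argc, char **argv)
              {
                  if (argc != 2) {
                      fprintf(stderr, "usage: %s file.exe|file.dll\n", argv[0]);
                      return 1;
                  }

                  FILE *f = fopen(argv[1], "rb");
                  if (!f) { perror("fopen"); return 1; }

                  IMAGE_DOS_HEADER dos;
                  IMAGE_NT_HEADERS nt;
                  if (fread(&dos, sizeof dos, 1, f) != 1 || dos.e_magic != IMAGE_DOS_SIGNATURE ||
                      fseek(f, dos.e_lfanew, SEEK_SET) != 0 ||
                      fread(&nt, sizeof nt, 1, f) != 1 || nt.Signature != IMAGE_NT_SIGNATURE) {
                      fprintf(stderr, "not a PE file?\n");
                      fclose(f);
                      return 1;
                  }

                  /* /DYNAMICBASE sets this bit in the optional header. */
                  int dynbase = (nt.OptionalHeader.DllCharacteristics &
                                 IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE) != 0;

                  /* The section table follows the optional header; look for .reloc.
                   * Without relocations the image cannot actually be rebased, so the
                   * opt-in bit alone does not buy you ASLR. */
                  int has_reloc = 0;
                  fseek(f, dos.e_lfanew + sizeof(DWORD) + sizeof(IMAGE_FILE_HEADER) +
                           nt.FileHeader.SizeOfOptionalHeader, SEEK_SET);
                  for (WORD i = 0; i < nt.FileHeader.NumberOfSections; i++) {
                      IMAGE_SECTION_HEADER sec;
                      if (fread(&sec, sizeof sec, 1, f) != 1)
                          break;
                      if (memcmp(sec.Name, ".reloc", 7) == 0)
                          has_reloc = 1;
                  }
                  fclose(f);

                  printf("%s: DYNAMICBASE=%s, .reloc present=%s\n", argv[1],
                         dynbase ? "yes" : "no", has_reloc ? "yes" : "no");
                  return 0;
              }

          A build like the VLC one described above would answer “yes” to the first question and “no” to the second, which is exactly the combination that quietly loses ASLR.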

          I completely agree with you about hardware-based exploit mitigations. The CHERI project from the University of Cambridge in England is doing just that: hardware-enforced capabilities and bounds enforcement. However, it’ll probably take another 20+ years for their work to be available in silicon and an additional 20+ years for their work to be used broadly (and thus, actually usable/used). In the meantime, we need these software-based exploit mitigations.

        2. 3

          I absolutely love that the BSDs are switching to llvm.

          What does this news story have to do with LLVM?

          1. 1

            UBSan (and NetBSD’s new micro-UBSan) is a sanitizer found in llvm.
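
            For anyone who hasn’t played with it, here’s a minimal illustration of what UBSan reports at runtime. It’s my own example; the flag spelling is the documented one for clang and gcc.

                /* Build with: cc -fsanitize=undefined demo.c
                 * Running it prints a runtime error pointing at the bad addition,
                 * because signed overflow is undefined behaviour in C. */
                #include <limits.h>
                #include <stdio.h>

                int main(int argc, char **argv)
                {
                    (void)argv;
                    int x = INT_MAX;
                    int y = x + argc;   /* argc >= 1, so this overflows int */
                    printf("%d\n", y);
                    return 0;
                }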

            1. 5

              And gcc.

              1. 3

                Yes. So the tirade about LLVM could have been about GCC and it would make as much sense here.

                1. 1

                  I guess I view it differently, due to newer versions of gcc being GPLv3, which limits who can adopt it. With llvm being permissively licensed, it can be adopted by a much wider audience. The GPL is driving FreeBSD to replace all GPL code in the base operating system with more permissively-licensed options.

                  For the base OS, gcc is dead to me.

                2. 2

                  (Speaking from the perspective of a FreeBSD/HardenedBSD user): gcc has no real future in the BSDs. Because of licensing concerns (GPLv3), the BSDs are moving towards a permissively-licensed compiler toolchain. While newer versions of gcc do contain sanitizer frameworks, they’re not usable in the BSD base operating system.

                  1. 2

                    NetBSD base uses GPLv3 GCC and builds it with sanitizer libraries, etc.

                    1. 1

                      Good to know! Thanks! Perhaps my perception is a bit skewed towards FreeBSD lines of thinking.

                      I know NetBSD is working on incorporating llvm. I wonder why, if they already use newer versions of gcc.

          1. 1

            Would you put TenDRA on this list?

            1. 8

              a.out binaries are smaller than elf binaries, so let’s statically link everything into one big executable ala busybox.

              Similarly, a modular kernel with all its modules takes up more total space than a single kernel with everything built in. So don’t even bother implementing modules. Linux 1.2 was the last great Linux before they ruined everything with modules.

              64-bit code takes up more space than 32-bit, so let’s build for 32-bit instruction sets. Who has more than 4GB of addressable memory anyway?

              Optimized code usually takes up more space, often a lot more when inlining is enabled. Let’s build everything with -Os so we can fit more binaries on our floppies.

              Icons are really superfluous anyway, but maybe we’ll want X11R5 or some other GUI on a second floppy. (I’d say X11R6 but all those extensions take up too much space). Make sure to use an 8-bit storage format with a common palette – 24-bit or 32-bit formats are wasteful.

              (I lament the bloated nature of the modern OS as much as the next Oregon Trail Generation hacker, but really – is “fits on a 1.7MB floppy” really the right metric? Surely we can at least afford to buy 2.88MB drives now?)

              1. 5

                64-bit code takes up more space than 32-bit, so let’s build for 32-bit instruction sets. Who has more than 4GB of addressable memory anyway?

                Most programs don’t need more than 4GB of addressable memory, and those that do, know it. Knuth flamed about this some, but while you can use X32 to get the big registers and little memory, it’s not very popular because people don’t care much about making fast things.

                I lament the bloated nature of the modern OS as much as the next Oregon Trail Generation hacker, but really – is “fits on a 1.7MB floppy” really the right metric? Surely we can at least afford to buy 2.88MB drives now?

                No, but smaller is better. If you can fit inside L1, you’ll find around 1000x speed increase simply because you’re not waiting for memory (or with clever programming: you can stream it).

                There was a time when people did gui workstations in 128kb. How fast would that be today?

                1. 2

                  Most programs don’t need more than 4GB of addressable memory, and those that do, know it.

                  All the integer overflows with values under 64 bits suggest otherwise. I know most programmers aren’t doing checks on every operation either. I prefer 64-bit partly to cut down on them. Ideally, I’d have an automated tool to convert programs to use it by default where performance or whatever allowed.

                  “No, but smaller is better. If you can fit inside L1, you’ll find around 1000x speed increase simply because you’re not waiting for memory”

                  Dave Long and I agreed on 32-64KB in a bootstrapping discussion for that very reason. Making that the maximum on apps kept them in the fastest cache even on lots of older hardware. Another was targeting initial loaders to tiny, cheap ROM (esp to save space for updates). Those were the only memory metrics we could find that really mattered in the general case. The rest were highly situation-specific.

                  1. 1

                    Most programs don’t need more than 4GB of addressable memory, and those that do, know it.

                    All the integer overflows with values under 64 bits suggest otherwise.

                    How?

                    I know most programmers aren’t doing checks on every operation either. I prefer 64-bit partly to cut down on them. Ideally, I’d have an automated tool to convert programs to use it by default where performance or whatever allowed.

                    What does that mean?

                    1. 0

                      My point isn’t about the addressable memory: it’s about even being able to represent a number. Programs are usually designed with the assumption that the arithmetic they do will work like real-world arithmetic on integers. In machine arithmetic, incrementing a number past a certain value will lead to an overflow. That can cause apps to misbehave. Another variation is a number coming from storage with many bits going into one with fewer bits, which the caller didn’t know had fewer bits. That caused the Ariane 5 explosion.

                      Overflows happen more often with 8-16-bit fields since their range is so small. They can happen to 32-bit values in long-running systems or those with math pushing numbers up fast. They either won’t happen or will take a lot longer with 64-bit values. I doubt most programmers are looking for overflows throughout their 32-bit applications. So, I’d rather just default to 64-bit for a bit of extra safety margin. That’s all I was saying.
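
                      A small sketch of both failure modes, in case it helps; it’s my own illustration, and __builtin_add_overflow is a GCC/Clang extension rather than standard C:

                          #include <inttypes.h>
                          #include <stdint.h>
                          #include <stdio.h>

                          int main(void)
                          {
                              /* Wrap-around: a 32-bit counter in a long-running system. */
                              uint32_t requests = UINT32_MAX;  /* ~4.29 billion, reached eventually */
                              requests = requests + 1;         /* unsigned wrap is defined: back to 0 */
                              printf("requests: %" PRIu32 "\n", requests);

                              /* Narrowing: a wide value silently squeezed into a small field,
                               * the Ariane 5 style of failure.  70000 doesn't fit in 16 bits. */
                              int64_t bias = 70000;
                              int16_t packed = (int16_t)bias;
                              printf("packed: %d\n", packed);   /* 4464 on typical machines */

                              /* The check most code skips: detect the overflow instead of wrapping. */
                              int32_t sum;
                              if (__builtin_add_overflow(INT32_MAX, 1, &sum))
                                  fprintf(stderr, "32-bit add would overflow; handle it\n");

                              return 0;
                          }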

                2. 1

                  Linux 1.2 was the last great Linux before they ruined everything with modules.

                  https://twitter.com/1990sLinuxUser :P

                  1. 2

                    Why has systemd deprecated support for /usr on a different filesystem!!

                    That issue bit me last month! I moved my /usr because it was too large, and the damned system couldn’t even boot into an init=/bin/sh shell! It dropped me into an initrd shell. I had to boot off a live CD to fix it. (If the initrd shell should have been sufficient, pardon me. I tried, but lvm wasn’t working.)

                1. 2

                  I hope there’s been enough publicity about C gotchas, at this point, that most professional programmers know to do a quadruple-take whenever handling pointer arithmetic or type-punning. And that’s hopefully only when they need to use C.

                  1. 2

                    Fun part for me is how people describe it as a “close to the metal” language that lets you “work directly with memory operations.” Then, I see an article talking about how pointers, a critical part of it, are abstract things that might not work the way you think they do. They also find the language specs’ meaning debatable in a language whose programmers often oppose using formal specs for things like languages or compilers. That part was totally predicted by prior results in formal specification.

                    1. 3

                      Fun part for me is how people describe it as a “close to the metal” language that lets you “work directly with memory operations.” Then, I see an article talking about how pointers, a critical part of it, are abstract things that might not work the way you think they do.

                      This is because the language is more abstract than people generally think it is. They often talk about the speed of C (because “it’s a compiled language”), or the stack, or the heap (or memory layout in general), or how “the size of char is 8 bits and the size of int is 32 or 64 bits, depending on how old your Intel CPU is”, or about many other implementation details, attributing them to a programming language that never strictly defines such things or sometimes doesn’t even talk about them, leaving them to the language implementors.
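
                      A quick way to see how little is actually pinned down (my own example, not the parent’s): the standard only guarantees minimum ranges, e.g. int must hold at least 16 bits’ worth and CHAR_BIT is at least 8, and leaves the rest to the implementation.

                          #include <limits.h>
                          #include <stdio.h>

                          int main(void)
                          {
                              printf("CHAR_BIT      = %d\n", CHAR_BIT);
                              printf("sizeof(short) = %zu\n", sizeof(short));
                              printf("sizeof(int)   = %zu\n", sizeof(int));
                              printf("sizeof(long)  = %zu\n", sizeof(long));
                              printf("sizeof(void*) = %zu\n", sizeof(void *));
                              /* Typical 64-bit Linux prints 8 for long and void*;
                               * 64-bit Windows prints 4 for long.  Same language,
                               * different answers. */
                              return 0;
                          }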

                  1. 8

                    Fascinating read with good detective work, Sir! :)

                    My worries kicked in when you said it was at over 200 libraries. If it sounds plausible but is horribly wrong, it could still be polluting people’s minds as they start programming. Maybe they’ll think they’re not smart enough as the examples don’t run. Maybe they’ll get past that with bad habits that lead to data corruption, crashes, and/or hacks. Maybe others will correct their bad practices.

                    I don’t know, but the book being in libraries ain’t good. Made me think about sending out emails to the ones in bigger areas to ask that they throw the book in the trash. Maybe send a recommendation for a good book for newcomers, too. What’s the best one for that, you think? In that hypothetical scenario, I’d also note that the language doesn’t change a whole lot over time, unlike most IT stuff. So, whatever they buy, they won’t have to replace it every year. Lots of small libraries avoid computer books if they go obsolete too quickly.

                    1. 10

                      Maybe they’ll think they’re not smart enough as the examples don’t run.

                      I couldn’t shake that thought the whole time I was reading it and it’s why I really detest sloppy tech books.

                      As for the books in the library, I’m thinking of finding the copies (1st and 2nd edition!) in my alma mater’s library and at least leaving a warning note in them.

                      1. 1

                        Good idea. I might at least do a few on the list. Again, what’s the current, best recommendation for a beginner book in your mind in case they ask about one?

                        1. 5

                          Truthfully, I don’t know a good C book for beginners. I have not looked at such books in a long time. I learned from K&R, but I was also learning it in university from different books at the time. None of them stand out in my mind. (I don’t think K&R is a great fit for beginners, especially with modern C.)

                          If anyone has suggestions, please feel free to comment.

                          1. 4

                            There is 21st Century C, though it assumes some general programming experience.

                            There is also Head First C, though I have no direct experience with it, as I read through the C# book in the same series, rather than the C book. I recall learning a lot from the book Head First C#, so that gives me some hope for Head First C.

                            To be fair, I don’t know that C is a very good first programming language these days, unless you’re starting in a domain that doesn’t require working with care around allocations, or unless you have someone helping you learn.

                            1. 4

                              To be fair, I don’t know that C is a very good first programming language these days

                              I hope most of us agree on that by now. :) I always encourage beginners to learn something like Python first just to get into the mental habits of breaking problems down, implementing solutions precisely, and debugging others’ code. That’s hard enough by itself to learn without the low-level stuff in there. Once comfortable with that, they can learn the low-level stuff with a book on something like C.

                              I guess I’m looking for best intro(s) to C to recommend for someone with some programming experience in a high-level language.

                              1. 3

                                21st Century C seemed like a decent introduction to C, and it takes exactly that angle. It doesn’t have the abrasiveness of Learn C the Hard Way, and it starts out explaining the C environment so that you aren’t left ignorant of how to work with makefiles and the like.

                                It does recommend autotools, however.

                                1. 4

                                  While it’s a great book, I see it more as a ‘refresher’ than an intro text for a novice. I would still recommend K&R 2nd ed. for the essentials, followed by this to get up-to-date practices.

                        2. 4

                          My worries kicked in when you said it was at over 200 libraries. If it sounds plausible but is horribly wrong, it could still be polluting people’s minds as they start programming. Maybe they’ll think they’re not smart enough as the examples don’t run. Maybe they’ll get past that with bad habits that lead to data corruption, crashes, and/or hacks. Maybe others will correct their bad practices.

                          The same is true for multiple other wrong books and texts. Best to avoid anything by Herb Schildt, Yashwant Kanetkar, Zed Shaw, Richard Reese, many tutorials like Beej’s guide to C, etc.

                          What’s the best one for that you think?

                          K&R 2nd ed. was definitely the best 20 years ago. Now I don’t know, but perhaps “C How to Program” by Harvey Deitel and Paul Deitel or “C Programming: A Modern Approach” by Kim King. And of course “C: A Reference Manual” by Harbison and Steele for reference. Also books written by Richard Stevens for UNIX programming.

                          1. 2

                            Why exactly would you say Beej’s guide to C isn’t a good tutorial?

                            1. 3

                              I’d actually like an explanation for all of those—assuming, of course, that they aren’t all making the exact same mistake, which seems highly unlikely. I was under the impression that Zed Shaw, albeit controversial, wrote good introductory material.

                              1. 2

                                I don’t have the time to review them again. Now I think that I should write reviews of them once and for all and then share them every time they come up in a discussion. Like Ed’s thoughts on “Learn C The Hard Way”.

                        1. 5

                          I must be the only person ever who would want the reverse: C for C++ users. I honestly find C a lot weirder than C++.

                          Due to a curious historical accident, in 1998 the AP Board (a US high school educational thing) decided to use C++ for a single year to teach AP Computer Science. Normally they go with Pascal, Java and I think right now it’s Python. But for one glorious year, they decided to go with C++. So that was my first “real” programming language, i.e. the first I learned in a classroom setting. This means I really did learn C++ without knowing C.

                          This makes me feel a bit different from nearly everyone else who considers C to be the foundation and is baffled by C++.

                          1. 5

                            Really to get to C from C++ you mostly just have to remove features. I’m curious what specific confusions you’ve run into that such a guide could cover?

                            1. 1

                              I don’t understand memcpy, malloc, free, strcpy, strncpy. These are not functions you use in C++. Why do I have to do struct foo x; if foo is a struct type, instead of just doing foo x;? The general syntax for structs is really weird in C. You have to typedef, you can’t just declare new structs? And I can’t declare looping variables, I have to instead declare them outside the loop. Ah, and no single-line comments, // is not a valid comment starter. What’s with all the pointers? We don’t use nearly that many pointers in C++. Arrays are confusing as shit. I always forget when do arrays become pointers and when do they not. Why is there no actual array type?

                              I think newer versions of C have made some of these things more like C++, but you still run into a lot of C that has to be written in an older way. C is just very foreign if you use C++.

                              1. [Comment removed by author]

                                1. 1

                                  I think I already understand most of these things, but what I was trying to convey was that C is foreign, not so much looking for an explanation.

                                  It is entirely possible to know C++ and find C to be weird and foreign.

                                2. 1

                                  I’ll try to summarize these.

                                  memcpy, malloc, free, strcpy – the best source for these is their manpages, which are generally well-written

                                  You do not use typedef to define structs in C (though some people confusingly choose to add a typedef as well). The name of the type is struct foo, so that’s why you say struct foo x to declare a variable.

                                  The reason C89 doesn’t allow variable declarations in a for loop is consistency (new things can only be added to the stack after an open brace) – there was no one “right way” semantic for the special case – however, C99 dispensed with this, so it’s the same as C++ now.

                                  Single line comments weren’t in C89 probably because no one thought having two comment syntaxes would be easier than just having one. C99 added the second one if you prefer it.

                                  All the C++ I’ve ever seen is still full of pointers – do you routinely pass full object instances on the stack? Anytime you say new in C++ you have created a pointer.

                                  Arrays in C are always “converted” to pointers when used in a value context. Declared arrays have their own semantics for sizeof, etc (since they are of “array type”). In arguments, int x[] is just sugar for int *x

                                  Hopefully those are helpful.
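
                                  A compact sketch of those points in one place (my example, nothing authoritative): struct tags, the optional typedef, C99 loop declarations and // comments, array-to-pointer decay, and malloc/free.

                                      #include <stdio.h>
                                      #include <stdlib.h>

                                      struct foo { int bar; };      /* the type's name is "struct foo" */
                                      typedef struct foo foo_t;     /* optional alias, purely a convenience */

                                      static void take(int *x)      /* an int x[] parameter means exactly this */
                                      {
                                          printf("sizeof in callee: %zu (just a pointer)\n", sizeof(x));
                                      }

                                      int main(void)
                                      {
                                          struct foo a = { 1 };     /* C89 spelling */
                                          foo_t b = { 2 };          /* same type, via the typedef */

                                          // C99 and later allow // comments and loop-scoped declarations:
                                          for (int i = 0; i < 2; i++)
                                              printf("%d %d\n", a.bar + i, b.bar + i);

                                          int nums[4] = { 1, 2, 3, 4 };
                                          printf("sizeof in caller: %zu (the whole array)\n", sizeof(nums));
                                          take(nums);               /* the array decays to &nums[0] here */

                                          /* malloc/free are roughly what new/delete hide from you in C++. */
                                          struct foo *p = malloc(sizeof *p);
                                          if (p) { p->bar = 3; free(p); }

                                          return 0;
                                      }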

                                  1. 1

                                    Arrays are confusing as shit. I always forget when do arrays become pointers and when do they not. Why is there no actual array type?

                                    http://www.torek.net/torek/c/pa.html

                                    1. 1

                                      That still confuses me. I always need to refer to this document which I always forget.

                                      http://c-faq.com/aryptr/index.html

                                      Btw, I wasn’t looking for an explanation. I was trying to impress upon you how weird and foreign C is for a C++ person. I can look up these explanations, but C will always be weird and foreign to me because I was raised on C++.

                                3. 2

                                  I went from Pascal straight to C++. My parents happened to buy me Stroustrup’s C++ book. I only had to learn to read C later as I started encountering C libraries.

                                1. 7

                                  I always laugh when people come up with convoluted defenses for C and the effort that goes into that (even writing papers). Their attachment to this language has caused billions if not trillions worth of damages to society.

                                  All of the defenses that I’ve seen, including this one, boil down to nonsense. Like others, the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift, and, for the things C is not needed for, yes, even JavaScript is better than C (if you’re not doing systems-programming).

                                  1. 31

                                    Their attachment to this language has caused billions if not trillions worth of damages to society.

                                    Their attachment to a language with known but manageable defects has created trillions if not more in value for society. Don’t be absurd.

                                    1. 4

                                      [citation needed] on the defects of memory unsafety being manageable. To a first approximation every large C/C++ codebase overfloweth with exploitable vulnerabilities, even after decades of attempting to resolve them (Windows, Linux, Firefox, Chrome, Edge, to take a few examples.)

                                      1. 2

                                        Compared to which widely used, large codebase, in which language, for which application, that accepts and parses external data and yet has no exploitable vulnerabilities? BTW: http://cr.yp.to/qmail/guarantee.html

                                        1. 6

                                          Your counterexample is a smaller, low-featured mail server written by a math and coding genius. I could cite Dean Karnazes doing ultramarathons on how far people can run. That doesn’t change that almost all runners would drop before 50 miles, esp before 300. Likewise with C code, citing the best of the secure coders doesn’t change what most will do or have done. I took the author’s statement “to first approximation every” to mean “almost all” but not “every one.” It’s still true.

                                          Whereas, Ada and Rust code have done a lot better on memory-safety even when non-experts are using them. Might be something to that.

                                          1. 2

                                            I’m still asking for the non-C, widely used, large-scale system with significant parsing that has no errors.

                                            1. 3

                                              That’s cheating, saying “non-C” and “widely used.” Most of the no-error parsing systems I’ve seen use a formal grammar with autogeneration. They usually extract to OCaml. Some also generate C just to plug into the ecosystem since it’s a C/C++-based ecosystem. It’s incidental in those cases: could be any language since the real programming is in the grammar and generator. An example of that is the parser in the Mongrel server, which was doing a solid job when I was following it. I’m not sure if they found vulnerabilities in it later.

                                          2. 5

                                            At the bottom of the page you linked:

                                            I’ve mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs.

                                            Not great support for your claim.

                                            1. 2

                                              There was an integer overflow reported in qmail in 2005. Bernstein does not consider this a vulnerability.

                                          3. 3

                                            That’s not what I meant by attachment. Their interest in C certainly created much value.

                                          4. 9

                                            Their attachment to this language has caused billions if not trillions worth of damages to society.

                                            Inflammatory much? I’m highly skeptical that the damages have reached trillions, especially when you consider what wouldn’t have been built without C.

                                            1. 12

                                              Tony Hoare, null’s creator, regrets its invention and says that just inserting the one idea has cost billions. He mentions it in talks. It’s interesting to think that language creators even think the mistakes they’ve made have caused billions in damages.

                                              “I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.”

                                              If the billion dollar mistake was the null pointer, the C gets function is a multi-billion dollar mistake that created the opportunity for malware and viruses to thrive.

                                              1. 2

                                                He’s deluded. You want a billion dollar mistake: try CSP/Occam plus Hoare Logic. Null is a necessary byproduct of implementing total functions that approximate partial ones. See, for example, McCarthy in 1958 defining a LISP search function with a null return on failure. http://www.softwarepreservation.org/projects/LISP/MIT/AIM-001.pdf

                                                1. 3

                                                  “ try CSP/Occam plus Hoare Logic”

                                                  I think you meant formal verification, which is arguable. They could’ve wasted a hundred million easily on the useless stuff. Two out of three are bad examples, though.

                                                  Spin has had a ton of industrial success, easily knocking out problems in protocols and hardware that are hard to find via other methods. With hardware, the defects could’ve caused recalls like the Pentium bug. Likewise, Hoare-style logic has been doing its job in Design-by-Contract, which knocks time off the debugging and maintenance phases, the most expensive ones. If anything, not using tech like this can add up to a billion dollar mistake over time.

                                                  Occam looks like it was a large waste of money, esp in the Transputer.

                                                  1. 1

                                                    No. I meant what I wrote. I like spin.

                                                2. 1

                                                Note that what he does not claim is that the net result of C’s continued existence is negative. Something can have massive defects and still be an improvement over the alternatives.

                                                3. 7

                                                  “especially when you consider what wouldn’t have been built without C.”

                                                I just countered that. The language didn’t have to be built the way it was or persist that way. We could be building new stuff in a C-compatible language with many benefits of HLLs like Smalltalk, LISP, Ada, or Rust, with the legacy C getting gradually rewritten over time. If that had started in the 90’s, we could have the equivalent of a LISP machine for C code, OS, and browser by now.

                                                  1. 1

                                                    It didn’t have to, but it was, and it was then used to create tremendous value. Although I concur with the numerous shortcomings of C, and it’s past time to move on, I also prefer the concrete over the hypothetical.

                                                    The world is a messy place, and what actually happens is more interesting (and more realistic, obviously) than what people think could have happened. There are plenty of examples of this inside and outside of engineering.

                                                    1. 3

                                                      The major problem I see with this “concrete” winners-take-all mindset is that it encourages whig history which can’t distinguish the merely victorious from the inevitable. In order to learn from the past, we need to understand what alternatives were present before we can hope to discern what may have caused some to succeed and others to fail.

                                                      1. 2

                                                        Imagine if someone created Car2 which crashed 10% of the time that Car did, but Car just happened to win. Sure, Car created tremendous value. Do you really think people you’re arguing with think that most systems software, which is written in C, is not extremely valuable?

                                                        It would be valuable even if C was twice as bad. Because no one is arguing about absolute value, that’s a silly thing to impute. This is about opportunity cost.

                                                        Now we can debate whether this opportunity cost is an issue. Whether C is really comparatively bad. But that’s a different discussion, one where it doesn’t matter that C created value absolutely.

                                                  2. 8

                                                    C is still much more widely used than those safer alternatives, I don’t see how laughing off a fact is better than researching its causes.

                                                    1. 10

                                                    Billions of lines of COBOL run mission-critical services of the top 500 companies in America. Better to research the causes of this than laughing it off. Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts with inertia taking over? I’m in the latter camp.

                                                      1. 7

                                                      Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts with inertia taking over? I’m in the latter camp.

                                                        Researching the causes of something doesn’t imply taking a stance on it, if anything, taking a stance on something should hopefully imply you’ve researched it. Even with your comment I still don’t see how laughing off a fact is better than researching its causes.

                                                        You might be interested in laughing about all the cobol still in use, or in research that looks into the causes of that. I’m in the latter camp.

                                                        1. 5

                                                          I think you might be confused at what I’m laughing at. If someone wrote up a paper about how we should continue to use COBOL for reasons X, Y, Z, I would laugh at that too.

                                                          1. 3

                                                            Cobol has some interesting features(!) that make it very “safe”. Referring to the 85 standard:

                                                            X. No runtime stack, no stack overflow vulnerabilities
                                                            Y. No dynamic memory allocation, impossible to consume heap
                                                            Z. All memory statically allocated (see Y); no buffer overflows
                                                            
                                                            1. 3

                                                              We should use COBOL with contracts for transactions on the blockchains. The reasons are:

                                                              X. It’s already got compilers big businesses are willing to bet their future on.

                                                              Y. It supports decimal math instead of floating point. No real-world to fake, computer-math conversions needed.

                                                              Z. It’s been used in transaction-processing systems that have run for decades with no major downtime or financial losses disclosed to investors.

                                                              λ. It can be mathematically verified by some people who understand the letter on the left.

                                                              You can laugh. You’d still be missing out on a potentially $25+ million opportunity for IBM. Your call.

                                                              1. 1

                                                                Your call.

                                                                I believe you just made it your call, Nick. $25+ million opportunity, according to you. What are you waiting for?

                                                                1. 4

                                                                  You’re right! I’ll pitch IBM’s senior executives on it the first chance I get. I’ll even put on a $600 suit so they know I have more business acumen than most coin pitchers. I’ll use phrases like vertical integration of the coin stack. Haha.

                                                            2. 4

                                                            That makes sense. I did do the C research. I’ll be posting about that in a reply later tonight.

                                                              1. 10

                                                              I’ll be posting about that in a reply later tonight.

                                                                Good god man, get a blog already.

                                                                Like, seriously, do we need to pass a hat around or something? :P

                                                                1. 5

                                                                Haha. Someone actually built me a prototype a while back. Makes me feel guilty that I don’t have one, instead of the usual lazy or overloaded.

                                                                    1. 2

                                                                  That’s cool. Setting one up isn’t the hard part. The hard part is doing a presentable design, organizing the complex activities I do, moving my write-ups into it, adding metadata, and so on. I’m still not sure how much I should worry about the design. One’s site can be considered a marketing tool for people that might offer jobs and such. I’d go into more detail but you’d tell me “that might be a better fit for Barnacles.” :P

                                                                      1. 3

                                                                    Skip the presentable design. Dan Luu’s blog does pretty well and it’s not working hard to be easy on the eyes. The rest of that stuff you can add as you go - remember, perfect is the enemy of good.

                                                                        1. 0

                                                                          This.

                                                                          Hell, Charles Bloom’s blog is basically an append-only textfile.

                                                                        2. 1

                                                                          ugh okay next Christmas I’ll add all the metadata, how does that sound

                                                                          1. 1

                                                                            Making me feel guilty again. Nah, I’ll build it myself likely on a VPS.

                                                                        And damn, time has been flying. Doesn’t feel like several months have passed on my end.

                                                                  1. 1

                                                                    looking forward to read it:)

                                                            3. 4

                                                              Well, we have those already, and they’re called Rust, Swift, ….

                                                              And D maybe too. D’s “better-c” is pretty interesting, in my mind.

                                                              1. 3

                                                              Last I checked, D’s “better-c” was a prototype.

                                                              2. 5

                                                                If you had actually made a serious effort at understanding the article, you might have come away with an understanding of what Rust, Swift, etc. are lacking to be a better C. By laughing at it, you learned nothing.

                                                                1. 2

                                                                  the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift

                                                                  Those (and Ada, and others) don’t translate to assembly well. And they’re harder to implement than, say, C90.

                                                                  1. 3

                                                                    Is there a reason why you believe that other languages don’t translate to assembly well?

                                                                    It’s true those other languages are harder to implement, but it seems to be a moot point to me when compilers for them already exist.

                                                                    1. 1

                                                                      Some users of C need an assembly-level understanding of what their code does. With most other languages that isn’t really achievable. It is also increasingly less possible with modern C compilers, and said users aren’t very happy about it (see various rants by Torvalds about braindamaged compilers etc.)

                                                                      1. 4

                                                                        “Some users of C need an assembly-level understanding of what their code does.”

                                                                      Which C doesn’t give them, due to compiler differences and effects of optimization. Aside from spotting errors, it’s why folks in safety-critical fields are required to check the assembly against the code. The C language is certainly closer to assembly behavior but doesn’t by itself give assembly-level understanding.

                                                                  2. 2

                                                                    So true. Every time I use the internet, the solid engineering of the Java/Jscript components just blows me away.

                                                                    1. 1

                                                                      Everyone prefers the smell of their own … software stack. I can only judge by what I can use now based on the merits I can measure. I don’t write new services in C, but the best operating systems are still written in it.

                                                                      1. 5

                                                                        “but the best operating systems are still written in it.”

                                                                        That’s an incidental part of history, though. People who are writing, say, a new x86 OS with a language balancing safety, maintenance, performance, and so on might not choose C. At least three chose Rust, one Ada, one SPARK, several Java, several C#, one LISP, one Haskell, one Go, and many C++. Plenty of choices being explored including languages C coders might say arent good for OS’s.

                                                                    Additionally, many choosing C or C++ say it’s for existing tooling, tutorials, talent, or libraries. Those are also incidental to its history rather than advantages of its language design. Definitely worthwhile reasons to choose a language for a project, but they shift the language argument itself, implying they had better things in mind that weren’t usable yet for that project.

                                                                        1. 4

                                                                          I think you misinterpreted what I meant. I don’t think the best operating systems are written in C because of C. I am just stating that the best current operating system I can run a website from is written in C, I’ll switch as soon as it is practical and beneficial to switch.

                                                                          1. 2

                                                                            Oh OK. My bad. That’s a reasonable position.

                                                                            1. 3

                                                                              I worded it poorly, I won’t edit though for context.

                                                                    1. 2

                                                                      Open positions currently lists a Linux job, should we add a linux tag?

                                                                      1. 2

                                                                              The title has Linux but the actual job description has OpenBSD listed. I guess that Linux-only jobs won’t be added to this job board.

                                                                        1. 4

                                                                          True. Thank you for the comment.

                                                                                I should clarify that for job posters on the site. BSD should be in the description and/or the title of a job post.

                                                                      1. 18

                                                                        I love postgres (I’m a postgres DBA), and really dislike mysql (due to a long story involving a patch-level release causing server crashes and data loss).

                                                                        That said, there is still a technical reason to choose mysql over postgres. Mysql’s replication story is still significantly better than postgres’. Multi-master, in particular, is something that’s relatively straightforward in mysql, but which requires third-party extensions and much more fiddling in postgres.

                                                                                  Now, postgres has been catching up on this front. Notably, the addition of logical replication over the last couple major versions really expands the options available. There’s a possibility that this feature will even be part of postgres 11, coming out this year (it’s on a roadmap). But until then, it’s a significant feature missing from postgres that other RDBMSes have.

                                                                        1. 7

                                                                          There’s a possibility that this feature will even be part of postgres 11

                                                                          PG 11 is in feature freeze since April. I don’t think there was anything significant for multi-master committed before that.

                                                                          1. 3

                                                                            Good point. I’d seen the feature freeze deadline, but wasn’t sure if it had actually happened, and what had made it in (I haven’t followed the -hackers mailing list for a while). I was mostly speculating based on the fact that they’d announced a multi-master beta for last fall.

                                                                            I’m not surprised it’s taking a long time – it’s a hard problem – but it means that “clustering” is going to be a weak point for postgres for a while longer.

                                                                          2. 3

                                                                            Once you take all the other potential issues and difficulties with MySQL into account though, surely Postgres is a better choice on balance, even with more difficult replication setup?

                                                                            1. 5

                                                                              It really depends. If you need horizontally-scalable write performance, and it’s important enough to sacrifice other features, then a mysql cluster is still going to do that better than postgres. It’s possible that a nosql solution might fit better than mysql, but overall that’s a decision that I can’t make for you.

                                                                              I’ll add that there are bits of postgres administration that aren’t intuitive. Specifically, bloat of on-disk table size (and associated slowdowns) under certain loads can really confuse people. If you can’t afford to have a DBA, or at least a dev who’s a DB expert, mysql can be very attractive. I’m not saying that’s a good reason to choose it, but I understand why some people do.

                                                                              1. 1

                                                                                What are your thoughts on MySQL vs MariaDB, especially the newer versions?

                                                                                1. 3

                                                                                  Honestly, I haven’t looked closely at MariaDB lately. The last time I did was just to compare json datatypes – at the time, both mysql and mariadb were just storing json as parsed/verified text blobs without notable additional functionality.

                                                                                  I have to assume it’s better than mysql at things like stability, data safety, and other boring-but-necessary features. That’s mostly because mysql sets such a low bar, though, that it would take effort to make it worse.

                                                                                2. 1

                                                                                          You clearly know more about databases than me, but I would question the idea that MySQL is a good choice when you lack a DB expert. If anything, that is when you shouldn’t use it. I still carry scars from issues caused by such a lack of expertise at one of my previous employers.

                                                                            1. 5

                                                                                            So, it’s working really well, but the complex stacks in Linux are breaking a lot. Especially on hardware stuff. I’m curious if anyone in the NetBSD or OpenBSD camps plans to try to get those systems working on it. That might be a better experience, especially for console users.

                                                                              1. 4

                                                                                There is a FreeBSD effort to port to POWER9, I think one developer has also received a TALOS II.

                                                                                Hopefully the market will expand and we will see lower prices. I am all for diversity, but damn $7000 for any machine is a lot of money.

                                                                                1. 4

                                                                                                That’s a good thing. A high-quality, high-performance UNIX on POWER9. Especially for high security, since a contender for most secure UNIX is CheriBSD, which is FreeBSD on a capability-secure, MIPS-based CPU. One might use a TALOS II to develop for it. More importantly, in the long term such security enhancements like CHERI could get merged into an OpenPOWER core that becomes a future TALOS workstation. A significant drop in performance from checks might not hurt if the starting point is what a POWER9 or up can do. ;)

                                                                                2. 3

                                                                                  I think the price tag might put a damper on it for both devs and users. Not much demand, so why sink the development time?

                                                                                  1. 3

                                                                                    Some people (outside the “NetBSD or OpenBSD camps”, too…) spend their time on such things because they are able to and they value diversity (POWER9 is the most interesting part in this case, at least for me).

                                                                                    1. 1

                                                                                                  I think that what was true for Apple’s PPC hardware supports your point. It cost a bit extra with small market share. A lot less software was developed for it. However, the groups that stayed on it after Apple ditched PPC made a lot of software work on it. So, it’s possible a niche effort could make these usable.

                                                                                                  I’m not sure what the overlap is between those kinds of devs and people that can drop almost ten grand on a machine.

                                                                                  1. 2

                                                                                    I see that it was fixed in 2016 to support int const *x, but static struct foo { short bar; } * footab[16]; (Debian bug #431027) still isn’t supported.

                                                                                    I’m not sure about “especially useful for people learning C” as it requires basic knowledge of the language (which is the biggest problem for people learning C as they often learn it wrong from teachers, silly tutorials and incorrect books), but it has its uses.

                                                                                    1. 7

                                                                                                      I think DHH has a lot of accurate points here, but I think he’s wrong about not needing to write SQL or have an understanding of the database technology supporting your application. With applications that store little data in their database, I agree it may be possible to have a completely opaque perception of the database technology. However, I don’t see a way to forgo knowledge of the database layer and avoid writing SQL for things like running migrations on tables with millions of rows.

                                                                                        As a simple example, creating an index on a large Postgres table without the CONCURRENTLY keyword is a surefire way to block writes to the table while the index is built and cause downtime. I don’t work with ActiveRecord, but it appears there is an abstraction for this keyword (algorithm: :concurrently). But how would you know to use this option if you don’t have an understanding of the database and its locking behavior?
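
                                                                                        For illustration (the table, column, and index names here are hypothetical), the difference is a single keyword in the DDL; a minimal libpq sketch of the non-blocking variant might look like the following. Note that CREATE INDEX CONCURRENTLY has to run outside an explicit transaction block, so it is sent as its own statement:

                                                                                            /* Hypothetical example: build an index without blocking writes.
                                                                                             * Compile with something like: cc create_index.c -lpq */
                                                                                            #include <stdio.h>
                                                                                            #include <libpq-fe.h>

                                                                                            int main(void)
                                                                                            {
                                                                                                /* Connection parameters come from the usual PG* environment variables. */
                                                                                                PGconn *conn = PQconnectdb("");
                                                                                                if (PQstatus(conn) != CONNECTION_OK) {
                                                                                                    fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                                                                                                    PQfinish(conn);
                                                                                                    return 1;
                                                                                                }

                                                                                                /* A plain CREATE INDEX would block writes for the whole build;
                                                                                                 * CONCURRENTLY builds the index while writes continue, at the
                                                                                                 * cost of more total work. */
                                                                                                PGresult *res = PQexec(conn,
                                                                                                    "CREATE INDEX CONCURRENTLY orders_user_id_idx ON orders (user_id)");
                                                                                                if (PQresultStatus(res) != PGRES_COMMAND_OK)
                                                                                                    fprintf(stderr, "CREATE INDEX failed: %s", PQerrorMessage(conn));

                                                                                                PQclear(res);
                                                                                                PQfinish(conn);
                                                                                                return 0;
                                                                                            }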

                                                                                        As another example, adding a NOT NULL constraint on a large table will also block reads and writes in Postgres while it validates that all existing rows satisfy the constraint. You’re better off creating a NOT VALID check constraint to ensure the column is non-null, and then VALIDATE-ing that constraint later to avoid downtime. These are the kinds of things where knowledge of just an abstraction layer, and not of the underlying database, will cause problems.
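
                                                                                        A sketch of that two-step pattern (again with made-up table and constraint names), assuming libpq is available:

                                                                                            /* Hypothetical example: enforce non-null values without a long blocking scan.
                                                                                             * Compile with something like: cc add_constraint.c -lpq */
                                                                                            #include <stdio.h>
                                                                                            #include <libpq-fe.h>

                                                                                            static void run(PGconn *conn, const char *sql)
                                                                                            {
                                                                                                PGresult *res = PQexec(conn, sql);
                                                                                                if (PQresultStatus(res) != PGRES_COMMAND_OK)
                                                                                                    fprintf(stderr, "failed: %s\n%s", sql, PQerrorMessage(conn));
                                                                                                PQclear(res);
                                                                                            }

                                                                                            int main(void)
                                                                                            {
                                                                                                PGconn *conn = PQconnectdb("");
                                                                                                if (PQstatus(conn) != CONNECTION_OK) {
                                                                                                    fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                                                                                                    PQfinish(conn);
                                                                                                    return 1;
                                                                                                }

                                                                                                /* Step 1: add the constraint as NOT VALID; existing rows are not scanned,
                                                                                                 * so only a brief lock is taken. New writes are checked immediately. */
                                                                                                run(conn, "ALTER TABLE orders ADD CONSTRAINT orders_user_id_not_null "
                                                                                                          "CHECK (user_id IS NOT NULL) NOT VALID");

                                                                                                /* Step 2: validate later; this scans the table but takes a weaker lock
                                                                                                 * that lets reads and writes continue. */
                                                                                                run(conn, "ALTER TABLE orders VALIDATE CONSTRAINT orders_user_id_not_null");

                                                                                                PQfinish(conn);
                                                                                                return 0;
                                                                                            }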

                                                                                      To be fair, DHH does only mention that Basecamp 3 has no raw SQL in “application logic”, and he never mentions migrations in the post, so maybe he is ignoring migration-type SQL commands in this context.

                                                                                      1. 3

                                                                                          As another example, adding a NOT NULL constraint on a large table will also block reads and writes in Postgres while it validates that all existing rows satisfy the constraint. You’re better off creating a NOT VALID check constraint to ensure the column is non-null, and then VALIDATE-ing that constraint later to avoid downtime.

                                                                                        And this (among other things) is why I just can’t believe the claim that they could move from MySQL to Postgres and “not give a damn”.

                                                                                        1. 1

                                                                                            I interpreted that as meaning he wouldn’t care what underlying technology he used, not that the migration process would be trivial.

                                                                                          1. 1

                                                                                              But I’m not talking about the migration process either. He will care about the underlying technology when, for example, his team has to tackle vacuum issues, long after the move has been done.

                                                                                            1. 1

                                                                                                But if your claim is that you do have to care about the technology, then your problem is with the entire blog post, not just with whether he would give a damn about running PostgreSQL.

                                                                                      1. 5

                                                                                          Those are some pretty flaky arguments regarding OpenBSD. What is “theoretical” SMP? I’m running this from a 4-core OpenBSD laptop. You know, non-theoretically. The same goes for the snark about vmm: they “tried” to implement a hypervisor? I’ll be sure to inform mlarkin of his failure to execute. It may not be what the author wants, but that’s a different story. Anyway, if there are good security-wise comparisons between the two systems, they look like they’re in that chart from https://hardenedbsd.org/content/easy-feature-comparison. Is it up to date with the recent anti-ROP efforts?

                                                                                        1. 2

                                                                                          It is. OpenBSD has an SROP mitigation, whereas HardenedBSD doesn’t. HardenedBSD has non-Cross-DSO CFI (Cross-DSO CFI is actively being worked on), whereas OpenBSD doesn’t. HardenedBSD also applies SafeStack to applications in base. CFI provides forward-edge safety while SafeStack provides backward-edge safety (at least, according to llvm’s own documentation.)

                                                                                            HardenedBSD inherits MAP_STACK from FreeBSD. The one thing about OpenBSD’s MAP_STACK implementation that HardenedBSD may lack (I need to verify) is that the stack registers (rsp/rbp) are checked during syscall entry to ensure they point into a valid MAP_STACK region. If FreeBSD’s syscall implementation doesn’t do this already, doing so would be a good addition to HardenedBSD.
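
                                                                                            For reference, this is roughly what such a region looks like from userland; a hedged sketch for FreeBSD (exact flag handling differs between the BSDs):

                                                                                                /* Hypothetical example: map an anonymous grow-down stack region on FreeBSD.
                                                                                                 * The kernel-side check being discussed would verify, on syscall entry,
                                                                                                 * that the stack pointer lands inside a region created like this one. */
                                                                                                #include <sys/mman.h>
                                                                                                #include <stdio.h>

                                                                                                int main(void)
                                                                                                {
                                                                                                    size_t len = 1 << 20;  /* 1 MiB */
                                                                                                    void *stk = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                                                                                                     MAP_STACK | MAP_PRIVATE | MAP_ANON, -1, 0);
                                                                                                    if (stk == MAP_FAILED) {
                                                                                                        perror("mmap(MAP_STACK)");
                                                                                                        return 1;
                                                                                                    }
                                                                                                    /* A thread's stack pointer would start near stk + len and grow downward. */
                                                                                                    printf("MAP_STACK region: %p .. %p\n", stk, (void *)((char *)stk + len));
                                                                                                    return 0;
                                                                                                }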

                                                                                          So, there’s room for improvement by both BSDs, as should be expected. It looks like OpenBSD is starting the migration towards an llvm toolchain, which would allow OpenBSD to catch up to HardenedBSD with regards to CFI and SafeStack.

                                                                                          Sorry for the excessive use of commas. I enjoy them perhaps a bit too much. ;)

                                                                                          1. 1

                                                                                            I haven’t read the whole article, because I’m not interested in HardenedBSD.

                                                                                            What is “theoretical” SMP? I’m running this from a 4-core OpenBSD laptop. You know, non-theoretically.

                                                                                            The article is indeed vague about it, but I think the author meant scalability issues. Too much time spent in the kernel space.

                                                                                            Same language snark goes with vmm: they tried to implement a hypervisor? I’ll be sure to inform mlarkin of his failure to execute.

                                                                                              I don’t have any experience with virtualization, but the point seems to be that you can only run OpenBSD and Linux guests under an OpenBSD host, which compares unfavorably with something like bhyve.

                                                                                            1. 1

                                                                                              SMP

                                                                                                From what I have read about SMP on OpenBSD, it’s not that it wouldn’t detect 4 or 64 cores; it’s that its subsystems (like FreeBSD 5.0’s, for example) were not entirely rewritten to fully utilize all cores, and in many places the so-called giant lock is still used. That may have changed recently; sorry if my information is not up to date.

                                                                                              vmm

                                                                                                Right now it’s very limited. Can you run a Windows VM on it? … or a Solaris VM? Last I read about it, only OpenBSD and Linux VMs worked.

                                                                                              Is it up to date with the recent anti-ROP efforts?

                                                                                                I am not sure. You may ask here - https://www.twitter.com/HardenedBSD - or on the HardenedBSD forums - https://groups.google.com/a/hardenedbsd.org/forum/#!forum/users

                                                                                              1. 3

                                                                                                or Solaris VM? Last I read about it only OpenBSD and Linux VMs worked.

                                                                                                  It runs Illumos derivatives (e.g. OpenIndiana). There’s a specific missing feature that FreeBSD/NetBSD need, which is being worked on. It doesn’t run Windows because Windows needs graphics.

                                                                                                1. 2

                                                                                                    Thanks for the clarification. I hope that graphics support/emulation will also come to vmm soon.

                                                                                                  I added that information to the post.

                                                                                              2. 1

                                                                                                  I’m not sure; the article seems to make an honest enough comparison between HardenedBSD and OpenBSD that I’ll make OpenBSD a priority to consider the next time I need a truly secure OS.

                                                                                                1. 3

                                                                                                  The “One may ask…” paragraph is so slanted toward HardenedBSD over OpenBSD that I’d have immediately assumed a HardenedBSD developer or fan was writing it.

                                                                                                  1. 1

                                                                                                      I tried my best. I thought it was clear enough from the article that OpenBSD is secure for sure, while HardenedBSD aspires to that target with the FreeBSD codebase as a starting point …

                                                                                                1. 7

                                                                                                  When was the last time you productively used an octal number-representation?

                                                                                                  Whenever I terminate a C string with a null character.

                                                                                                  1. 1

                                                                                                    I’m curious, why do you need octal for that? 0, 0x0, 0b0, and 00 all act the exact same, don’t they?

                                                                                                    1. 2

                                                                                                      It’s a joke, because 0 is an octal literal.
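
                                                                                                      A throwaway illustration of why that’s technically true, using nothing beyond standard C:

                                                                                                          /* In C's grammar an integer constant that starts with 0 is an octal
                                                                                                           * constant, and '\0' is an octal escape sequence, so the usual string
                                                                                                           * terminator is literally written in octal. */
                                                                                                          #include <stdio.h>

                                                                                                          int main(void)
                                                                                                          {
                                                                                                              char s[] = { 'h', 'i', '\0' };      /* octal escape ends the string */
                                                                                                              printf("%s, 010 == %d\n", s, 010);  /* prints: hi, 010 == 8 */
                                                                                                              return 0;
                                                                                                          }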

                                                                                                  1. 2

                                                                                                    It’s a shame it’s open-core.

                                                                                                    1. 7

                                                                                                        Spanner/F1 and FoundationDB were closed. CockroachDB was the first (AFAIK) of those competing with Spanner to give us anything at all. Let’s give them credit, eh? ;)

                                                                                                      1. 3

                                                                                                        FoundationDB was open (in some form) before it disappeared into Apple.

                                                                                                        1. 4

                                                                                                          I don’t believe any of the interesting tech was open source. It was sort of the opposite of open-core, with a proprietary core but some ancillary stuff like an SQL parser that was open source. That other stuff is what disappeared when Apple bought them (GitHub deleted, packages pulled from repos, etc.), which caused a bit of a stir as they disappeared with no warning and some people had been depending on the packages.

                                                                                                          1. 2

                                                                                                              I never heard that. I’ll look into it further. Thanks.

                                                                                                            1. 1

                                                                                                                Looking into it, the core DB, which was what was really valuable, was closed, with some peripheral stuff open. This write-up goes further and says it was kind of fake FOSS that lured people in. I don’t have any more data to go on since Apple pulled all the GitHub repos.

                                                                                                            2. 1

                                                                                                              It doesn’t seem to me that CockroachDB competes with Spanner. I’d thought of MongoDB before CockroachDB.

                                                                                                              1. 8

                                                                                                                  Spanner is explicitly the origin for CockroachDB: Spanner, less the cesium clocks.

                                                                                                          1. 2

                                                                                                            Great post; this is one of my pet peeves.

                                                                                                              One minor point: ‘This interactivity is usually missing in “compiled” languages’ <- I would say it’s true that this is missing from several mainstream compiled languages (to their great detriment), but the majority of compiled languages do support it.

                                                                                                            Edit: I guess the scare quotes around “compiled” might indicate it’s not meant to apply to most compiled languages, just the ones that are typically thought of as compiled?

                                                                                                            1. -1

                                                                                                              Great post; this is one of my pet peeves.

                                                                                                              Mine as well. Even the local safe languages guru tends to conflate programming languages with their implementations.

                                                                                                              1. 13

                                                                                                                That’s unlikely. What’s more likely is that humans don’t need to be 100% unambiguously precise in every form of communication, and instead can usually rely on other humans to know what they mean.

                                                                                                                1. 4

                                                                                                                  In addition to what burntsushi said, while it’s true that languages and their implementations are separable, it’s often not an accident that languages have their particular implementations. Bytecode interpreters and/or JIT compilers fit with Java, Python, etc in a way they just don’t for C. You can write them for C, but it’s typically less valuable.

                                                                                                              1. 2

                                                                                                                  Good article. I’ve been tentatively looking for a slim, performant, asynchronous web-service framework, considering various languages, and this article puts me off using Rust due to its syntax.

                                                                                                                One of the first scaling problems you’re likely to run into with Postgres is its modest limits around the maximum number of allowed simultaneous connections. Even the biggest instances on Heroku or GCP (Google Cloud Platform) max out at 500 connections, and the smaller instances have limits that are much lower (my small GCP database limits me to 25). Big applications with coarse connection management schemes (e.g., Rails, but also many others) tend to resort to solutions like PgBouncer to sidestep the problem.

                                                                                                                  PgBouncer is not really “a solution you resort to in order to sidestep the problem”; it’s something that should be built into core Postgres but isn’t.

                                                                                                                At the end of the day, your database is going to be a bottleneck for parallelism, and the synchronous actor model supports about as much parallelism as we can expect to get from it, while also supporting maximum throughput for any actions that don’t need database access.

                                                                                                                  I don’t know why, especially given all the Postgres parallelization work that has gone into PG 9.6 and 10.

                                                                                                                1. 18

                                                                                                                  I believe that the “single entry, single exit” coding style has not been helpful for a very long time, because other factors in how we design programs have changed.

                                                                                                                  Unless it happens that you still draw flowcharts, anyway. The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming, where he makes it clear that the reason for requiring that is so that the flowchart description of a subroutine has a single entry point and a single exit point, so its effect on the whole program state can be understood by understanding that subroutine’s single collection of preconditions and postconditions. Page 19 of the PDF linked above:

                                                                                                                    These flowcharts share the property that they have a single entry at the top and a single exit at the bottom: as indicated by the dotted block they can again be interpreted (by disregarding what is inside the dotted lines) as a single action in a sequential computation. To be a little bit more precise: we are dealing with a great number of possible computations, primarily decomposed into the same time-succession of subactions, and it is only on closer inspection (i.e. by looking inside the dotted block) that it is revealed that over the collection of possible computations such a subaction may take one of an enumerated set of distinguished forms.

                                                                                                                  These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

                                                                                                                  In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.
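
                                                                                                                    A small, purely illustrative sketch of the two styles (my example, not Dijkstra’s); to a caller, both functions present the same “dotted box” with one entry and one well-defined postcondition, which is the property that actually mattered:

                                                                                                                        #include <stdio.h>

                                                                                                                        /* Single-exit style: the result travels in a variable to the one return. */
                                                                                                                        static int find_single_exit(const int *a, int n, int key)
                                                                                                                        {
                                                                                                                            int idx = -1;
                                                                                                                            for (int i = 0; i < n && idx < 0; i++) {
                                                                                                                                if (a[i] == key)
                                                                                                                                    idx = i;
                                                                                                                            }
                                                                                                                            return idx;
                                                                                                                        }

                                                                                                                        /* Early-return style: several returns, still one entry and one contract
                                                                                                                         * (index of key, or -1 if it is absent). */
                                                                                                                        static int find_early_return(const int *a, int n, int key)
                                                                                                                        {
                                                                                                                            for (int i = 0; i < n; i++) {
                                                                                                                                if (a[i] == key)
                                                                                                                                    return i;
                                                                                                                            }
                                                                                                                            return -1;
                                                                                                                        }

                                                                                                                        int main(void)
                                                                                                                        {
                                                                                                                            int a[] = { 3, 1, 4 };
                                                                                                                            printf("%d %d\n", find_single_exit(a, 3, 4), find_early_return(a, 3, 9));
                                                                                                                            return 0;   /* prints: 2 -1 */
                                                                                                                        }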

                                                                                                                  1. 7

                                                                                                                    An interesting variant history is described in https://softwareengineering.stackexchange.com/a/118793/4025

                                                                                                                      The description there is that instead of “single exit” meaning where the function exits from, it actually means that a function has only a single point it returns to, namely the place it was called from. This makes a lot more sense, and is clearly a good practice. I’ve heard this description from other places too, but unfortunately I don’t have any better references.

                                                                                                                    1. 3

                                                                                                                        Yes, very interesting. It makes sense that “single exit” means “don’t jump to surprising places when you’re done” rather than “don’t leave the subroutine from arbitrary places in its flow”. From the Structured Programming perspective, both support the main goal: you can treat the subroutine as a box that behaves in a single, well-defined way, and the program as a sequence of such boxes, each behaving in a single, well-defined way.

                                                                                                                    2. 3

                                                                                                                      The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming

                                                                                                                      Also Tom Duff’s “Reading Code From Top to Bottom” says this:

                                                                                                                      During the Structured Programming Era, programmers were often taught a style very much like the old version. The language they were trained in was normally Pascal, which only allowed a single return from a procedure, more or less mandating that the return value appear in a variable. Furthermore, teachers, influenced by Bohm and Jacopini (Flow Diagrams, Turing Machines and Languages with only two formation rules, Comm. ACM 9#5, 1966, pp 366-371), often advocated reifying control structure into Boolean variables as a way of assuring that programs had reducible flowgraphs.

                                                                                                                      1. 1

                                                                                                                        These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

                                                                                                                        Maybe I’m missing something, but it seems to me that Dijkstra’s methodology supports analyzing one program component at a time, treating all others as black boxes, so long as:

                                                                                                                        • Program components are hierarchically organized, with lower-level components never calling into higher-level ones.
                                                                                                                        • Relevant properties of lower-level components (not necessarily the whole analysis) are available when analyzing higher-level components.

                                                                                                                        In particular, although Dijkstra’s predicate transformer semantics only supports sequencing, selection and repetition, it can be used to analyze programs with non-recursive subroutines. However, it cannot handle first-class subroutines, because such subroutines are always potentially recursive.

                                                                                                                        In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

                                                                                                                          Dijkstra’s ultimate goal was to prove things about programs, which few of us do. So it is not clear to me that we are “getting the benefits he wanted”. That being said, having a single entry point and a single exit point merely happens to be a requirement for using the mathematical tools he preferred. In particular, there is nothing intrinsically wrong with loops with two or more exit points, but they are awkward to express using ALGOL-style while loops.
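
                                                                                                                          As a purely illustrative example of that last point (hypothetical code of my own), here is a loop with two exit conditions written directly with break, next to the flag-variable encoding that a single-exit while loop forces:

                                                                                                                              #include <stdio.h>

                                                                                                                              int main(void)
                                                                                                                              {
                                                                                                                                  int data[] = { 5, 12, 7, -1, 9 };
                                                                                                                                  int n = 5;

                                                                                                                                  /* Two exit points, expressed directly: run off the end, or hit a sentinel. */
                                                                                                                                  int sum = 0;
                                                                                                                                  for (int i = 0; i < n; i++) {
                                                                                                                                      if (data[i] < 0)
                                                                                                                                          break;              /* second exit: sentinel found */
                                                                                                                                      sum += data[i];
                                                                                                                                  }

                                                                                                                                  /* The same loop shoehorned into a single-exit while loop needs a flag. */
                                                                                                                                  int sum2 = 0, j = 0, done = 0;
                                                                                                                                  while (!done) {
                                                                                                                                      if (j >= n || data[j] < 0)
                                                                                                                                          done = 1;
                                                                                                                                      else
                                                                                                                                          sum2 += data[j++];
                                                                                                                                  }

                                                                                                                                  printf("%d %d\n", sum, sum2);   /* both print 24 */
                                                                                                                                  return 0;
                                                                                                                              }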

                                                                                                                      1. 3

                                                                                                                        That does seem rather clever, though it raises design questions in my mind: why not define user-defined order in some other way, such as a pg_array in a different table?

                                                                                                                        1. 3

                                                                                                                          If by pg_array you mean the standard Postgres array type, then my answer would be: mainly because current Postgres lacks support for foreign keys referencing array elements. Also possibly because of the coarser lock granularity you would get with an array.