1. 1

    Question about something tangentially related: does anyone have source code to show what it takes to boot a program (no OS) and draw into the UEFI framebuffer?
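
    To show the shape of what I’m after, here’s roughly the smallest version I can imagine (a minimal, untested sketch assuming the gnu-efi toolchain and its efi_main convention; the pixel format is a guess, so treat it as illustrative):

    /* Minimal UEFI app: locate the Graphics Output Protocol and
       write pixels straight into the linear framebuffer.
       Assumes gnu-efi (efi.h/efilib.h); untested sketch. */
    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        EFI_GUID gop_guid = EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;
        EFI_GRAPHICS_OUTPUT_PROTOCOL *gop;
        EFI_STATUS status;

        InitializeLib(ImageHandle, SystemTable);

        /* The firmware usually leaves GOP in a usable mode already. */
        status = uefi_call_wrapper(BS->LocateProtocol, 3, &gop_guid, NULL, (void **)&gop);
        if (EFI_ERROR(status))
            return status;

        UINT32 *fb = (UINT32 *)gop->Mode->FrameBufferBase;
        UINT32 stride = gop->Mode->Info->PixelsPerScanLine;

        /* Fill a 100x100 square; assumes a 32-bit BGRx pixel format. */
        for (UINT32 y = 0; y < 100; y++)
            for (UINT32 x = 0; x < 100; x++)
                fb[y * stride + x] = 0x00FF8800;

        for (;;)
            ;  /* spin so the image stays on screen */
        return EFI_SUCCESS;
    }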

    1. 6

      Anyone know of cloud providers (either virtualized or real hardware) that offer OpenBSD, or allow you to install OpenBSD easily and without hacks?

      I only know of prgmr.com, RootBSD and ARP Networks. I am interested in companies offering real professional support running on server grade hardware (ECC, Xeon, etc) with proper redundant networking, etc, so amateur (but cheap) stuff like Hetzner doesn’t count.

      Somewhat tangential, but I am also interested in European companies. I only know of CloudSigma, Tilaa, Exoscale and cloudscale.ch. Are they any good?

      EDIS and ITL seem to be Russian companies or shells operating in European locations, not interested in those.

      Many thanks!

      1. 5

        https://www.vultr.com/servers/openbsd

        I wouldn’t consider Gilles’ method a hack at this point, now that online.net gives you console access. As usual, you first have to get the installer onto a disk attached to the machine. Since you can’t walk up to the machine with a USB flash drive, copying it to the root disk from recovery mode makes perfect sense.

        1. 2

          Thanks, I forgot about vultr.

          As for installing, I would vastly prefer PXE boot. It’s not just about getting it installed. It’s about having a supported configuration. I am not interested in running configurations not supported by the provider. What if next year they change the way they boot the machines and you can’t install OpenBSD using the new system anymore? A guarantee for PXE boot ensures forward compatibility.

          Or what if some provider that is using virtualization updates their hypervisor which has a new bug that only affects OpenBSD? If the provider does not explicitly support OpenBSD, it’s unlikely they will care enough to roll back the change or fix the bug.

          You’re not paying for hardware; as Hetzner showed, hardware is cheap. You’re paying for support and for the network. If they don’t support you, then why pay?

          1. 2

            Yeah I share your concerns. That’s why I’ve hesitated to pay for hosting and am still running all my stuff at home. It would suck to pay only to hear that I’m on my own if something changes and my system doesn’t work well after that change.

            Given how often OpenBSD makes it to the headlines on HN and other tech news outlets, it is really disappointing how few seem to actually care enough to run or support it. It’s also disappointing considering that the user base has a healthy disdain for twisting knobs, and the system itself doesn’t suffer much churn. It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

            1. 1

              It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

              Boom! There it is. The consumer side picks their own hardware expecting whatever they install to work on it. They pick for a lot of reasons other than compatibility, like appearance. OpenBSD supporting less hardware limits it a lot there. I’ve always thought an OpenBSD company should form that uses the Apple model of nice hardware with desktop software preloaded, aimed at some market segment that already buys Linux, terminals, or something. Maybe with some must-have software for business that provides some or most of the revenue, so there’s not much dependency on hardware sales. Any 3rd party providing dediboxes for server-side software should have it easiest, since they can just standardize on some 1U or 2U stuff they know works well with OpenBSD. In theory, at least.

        2. 4

          https://www.netcup.de/

          I run the above setup on a VPS. OpenBSD is not officially supported, but you can upload custom images. Support was very good in the last 3-4 years (didn’t need it recently).

          1. 2

            Looks nice, especially since they are locals :) Do you mind answering some questions?

            • Do they support IPv6 for VPS (/64)?
            • Have you tried to restore a snapshot from a VPS?
            • Mind sharing a dmesg?
          2. 2

            I have two OpenBSD vservers running at Hetzner (https://www.hetzner.com). They provide OpenBSD ISO images and a “virtual KVM console” via HTTP, so installing with softraid (RAID or crypto) is easily possible.

            As of a week ago, there is no official vServer product anymore. Nowadays, they call it … wait for it … a cloud server. The control panel looks different; however, I have no clue if something[tm] changed.

            Here is a dmesg from one server: http://dmesgd.nycbug.org/index.cgi?do=view&id=3441

            1. 2

              Joyent started providing a KVM OpenBSD image for Triton last May: https://docs.joyent.com/public-cloud/instances/virtual-machines/images/openbsd

              (This has been possible for some time if you had your own Triton cluster, but there was no official way until this was published.)

              1. 1

                What’s the deal with cloud providers not making OpenBSD available? Is it technically complex to offer, or do they just not have the resources for the support? Maybe a mention that it’s not supported by their customer service would already help users, no?

                1. 11

                  As far as I know, it’s a mix of things. Few people ask for OpenBSD, so there’s little incentive to offer it. Plus a lot of enterprise software tends to target RHEL and other “enterprise-y” offerings. Even in the open source landscape, things are pretty dire: Rust’s platform support, for instance, has OpenBSD/amd64 in tier 3 (“which are not built or tested automatically, and may not work”).

                  OpenBSD also seems to have pretty bad timing issues on qemu/KVM that have fairly deeply rooted causes. Who knows what other horrors lurk in OpenBSD as a guest.

                  OpenBSD doesn’t get people really excited, either. Many features are security features and that’s always a tough sell. They’d rather see things like ZFS.

                  For better or for worse, OpenBSD has a very small following. For everybody else, it just seems to be the testing lab where people do interesting things with OS development, such as OpenSSH, LibreSSL, KASLR, KARL, arc4random, pledge, doas, etc., that people then take into OSes that people actually use. Unless some kind of Red Hat of OpenBSD emerges, I don’t see that changing, either. Subjectively, it feels very UNIX-y still. You can’t just google issues and be sure people have already seen them before; you’re on your own if things break.
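
                  pledge is a good example of why those pieces travel: the whole API is small enough to show. A minimal sketch (OpenBSD-only, assuming the pledge(2) interface; illustrative, not production code):

                  /* Sketch of OpenBSD's pledge(2): after the call, the process
                     may only use stdio-related syscalls; violations kill it. */
                  #include <stdio.h>
                  #include <unistd.h>
                  #include <err.h>

                  int main(void)
                  {
                      if (pledge("stdio", NULL) == -1)
                          err(1, "pledge");

                      printf("still allowed\n");  /* covered by the "stdio" promise */
                      /* fopen("/etc/passwd", "r") would now abort the process,
                         since the "rpath" promise was not made. */
                      return 0;
                  }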

                  1. 8

                    Rust’s platform support has OpenBSD/amd64 in tier 3 (“which are not built or tested automatically, and may not work”).

                    I can talk a little about this point, as it’s a common problem: we could support OpenBSD better if we had more knowledge and more people willing to integrate it well into our CI workflow, make good patches to our libc, and so on.

                    It’s a damn hard position to be in: on the one hand, we don’t want to be the people who inflict work on OpenBSD. We are in no position to ask. On the other hand, we have only a few people with enough knowledge to make OpenBSD support good. And if we deliver half-arsed support but say we have support, we get the worst of all worlds. So, we need people to step up, and not just for a couple of patches.

                    This problem is a regular companion in the FOSS world, sadly :(.

                    Also, as noted by mulander: I forgot semarie@ again. Thanks for all the work!

                    1. 7

                      semarie@ has been working upstream with rust for ages now… It would be more accurate to say ‘we need more people to step up’.

                      1. 2

                        Right, sorry for that. I’ll change the wording.

                1. 1

                  Yikes. Meant to put that into the title, obviously, not the URL. Apparently I can’t edit the URL though.

                1. 11

                  I think I mostly agree with the premise here.. I tried FreeBSD but had a hard time being happy with it compared to simply using a systemd-less Linux like Void or Alpine.

                  OpenBSD on the other hand fascinates me, mostly because of the security focus and overall simplicity, I think part of that idea of focused goals is the same reason I’ve been starting to keep up with DragonFlyBSD development, the drive to do something different than the mainstream can be a strong motivator of interest.

                  But realistically, I don’t see something like FreeNAS dying anytime soon; some of my IT friends swear only by it.

                  1. 20

                    I love running FreeBSD. I run Void whenever I have to run Linux, but honestly running FreeBSD is so much fun. The system makes so much sense, there are so few running processes. Configs are kept in the right places, packages that are installed just work, upgrades almost never broke anything, and in general there was a lot less fiddliness. I want to run Void from time to time to get the new and shiny (without having to build it for a custom platform), but in both Debian and Void (the systems I run), packages are of varying quality, and upgrades are always stressful (though Void’s rolling release nature makes it less so). FreeBSD’s consistency also makes me feel a lot less scared about opening it up and fiddling with the insides (such as trying my hand at creating my own rc unit runner or something), whereas with Linux I often feel like I’m peering at the edge of a Rube Goldberg machine.

                    Oh and don’t get me started on the FreeBSD Handbook and manpages. Talk about documentation done right.

                    1. 6

                      “Rube Goldberg machine” is a great description for much of the Linux world. Especially Debian-style packages with their incredibly complex configuration hooks and menus and stuff.

                      My favorite feature of pkgng is that packages do not add post-install actions to other packages :)

                      1. 1

                        I still can’t get over the fact that installing a deb service on a Debian-based distribution starts the service automatically. Why was that ever considered a good design decision?

                        I personally run Gentoo and Void. I had FreeBSD running really well on an older X1 carbon about two years back, but the hardware failed on the X1. I do use FreeBSD on my VPS for my openvpn server, but it seems like FreeBSD is the only one supported on major VPSes (Digital Ocean, Vultr). I wish there was better VPS support for at least OpenBSD.

                      2. 2

                        Don’t get me wrong, I like FreeBSD, I’ve just never felt the same fascination towards it that I do with OpenBSD, DragonflyBSD, Haiku, ReactOS or Harvey. But perhaps that’s a good thing?

                        I guess the main thing is I’ve never been in a situation where I didn’t need to use Linux or Windows and couldn’t use OpenBSD.

                        1. 5

                          FreeBSD seems to do less in-house experimental stuff that gets press. Dragonfly has the single-system image clustering long-term vision, OpenBSD is much more aggressive about ripping out and/or rewriting parts of the core system, etc.

                          I do feel most comfortable with the medium-term organizational future of FreeBSD though. It seems to have the highest bus factor and strongest institutional backing. Dragonfly’s bus factor is pretty clearly 1: Matthew Dillon does the vast majority of development. OpenBSD’s is slightly higher, but I’m not entirely confident it would survive Theo leaving the project. Meanwhile, I don’t think any single person leaving FreeBSD would be fatal.

                          1. 3

                            I’m not entirely confident it would survive Theo leaving the project

                            There is no reason to worry about that: http://marc.info/?l=openbsd-misc&m=137609553004700&w=2

                            1. 2

                              FreeBSD seems to do less in-house experimental stuff that gets press

                              The problem is with the press here. CloudABI is the most amazing innovation I’ve seen in the Unix world, and everyone is sleeping on it ;(

                        2. 3

                          I tried FreeBSD but had a hard time being happy with it compared to simply using a systemd-less Linux like Void or Alpine.

                          The Linux distro that’s closest to the *BSD world is Gentoo - they even named their package management system “Portage” because it’s inspired by *BSD ports.

                          1. 2

                            As a long time OpenBSD & Gentoo user (they were my introduction to BSD & Linux respectively and I’ve run both on servers & desktops for years), I strongly disagree. If I wanted to experience BSD on Linux, Gentoo would be the last thing I’d look at.

                            1. 1

                              If I wanted to experience BSD on Linux, Gentoo would be the last thing I’d look at.

                              Then you are way off the mark, because the closest thing to *BSD ports in the Linux world is Gentoo’s Portage, and OpenRC is the natural evolution of FreeBSD’s init scripts.

                              1. 5

                                Over the past decade, I’ve used ports once or twice. Currently I don’t have a copy of the ports tree. In this day and age, ports & package management are among the least interesting properties of an operating system (if only because they all do it well enough, and they all still suck). OpenRC might be ok, but the flavor of init scripts doesn’t exactly define the system either.

                                My idea of BSD does not entail spending hours fucking with configs and compiling third party packages to make a usable system. Maybe FreeBSD is like that? If so, I’m quite disappointed.

                        1. 4

                          Well, this here is the URL I wanted to submit. Somehow “fetch title” changed it to a completely different domain that hosts an uglier version.

                          https://ccrma.stanford.edu/~jos/mdft/mdft.html

                          1. 4

                            Oh, the author was thinking of dead trees when he said ‘immutable’. I thought he was going somewhere else with this, like IPFS.

                            1. 1

                              Heh. This was my immediate thought, and I was waiting for them to bring it up basically the entire time.

                              1. 1

                                Would using IPFS remedy the issue of temporal permanence raised in the first paragraphs of the linked post?

                                I’m asking because I don’t really know enough about IPFS to compare to “normal” content storage.

                                1. 1

                                  IIUC, addressing the impermanence problem was one of the design goals of IPFS. They tout it as a solution that provides permanence. I can’t personally attest to their claim. Here’s some marketing materials: https://ipfs.io/#why

                                  1. 1

                                    Thanks for this link! For some reason I thought IPFS was just the latest iteration of the FreeNet idea. It looks as if there’s a bit more meat in it than that.

                                    1. 2

                                      I don’t see where the meat is. It always looked like a worse Freenet to me. Especially with regard to retention (which is a real issue on Freenet), I don’t see any solution. “Each network node stores only content it is interested in” sounds like a real problem. In practice, you’d have to run your own server at all times to ensure the content you want stays available.

                              1. 12

                                I’d prefer it if there was no design. Just the content.

                                1. 16

                                  Which you get with a (full) RSS feed.

                                  1. 7

                                    Yup!

                                    Take care if you use Hugo: the default RSS template does not render the full article.

                                    Here’s a modified one that renders the full article in the feed:

                                    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
                                      <channel>
                                        <title>{{ .Title}} </title>
                                        <link>{{ .Permalink }}</link>
                                        <description>Recent posts</description>
                                        <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
                                        <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
                                        <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
                                        <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
                                        <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
                                        <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
                                        {{ with .OutputFormats.Get "RSS" }}
                                            {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
                                        {{ end }}
                                        {{ range .Data.Pages }}
                                        <item>
                                          <title>{{ .Title }}</title>
                                          <link>{{ .Permalink }}</link>
                                          <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
                                          {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
                                          <guid>{{ .Permalink }}</guid>
                                          <description>{{ .Content | html }}</description>
                                        </item>
                                        {{ end }}
                                      </channel>
                                    </rss>
                                    

                                    I probably should have included that in the article… Too late now.

                                    1. 3

                                      Why is it too late now?

                                      1. 5

                                        I was about to answer laziness and realized this is not something that should be celebrated.

                                        It’s now included.

                                        1. 1

                                          Thank you for adding it, it will be handy for me when I go back to your post in a few weeks and look for how to do this. :)

                                    2. 1

                                      Blogs should just be an XSLT transform applied to RSS ;)

                                    3. 2

                                      The built-in Firefox Reader mode is a godsend. I feel much more comfortable reading long texts in the same font, page width, background color + the scrollbar on the right now gives me a pretty good estimate of reading time.

                                      1. 1

                                        Weirdly, though, that all comes down to the good design of Firefox reader mode :D.

                                      2. 1

                                        RSS, lightweight versions (light.medium.com/usual/url ?), heck even gopher, do the job perfectly! We need these things.

                                      1. 5

                                        Wild guess without reading article: the city did something and we’re going to pretend they did something else so we can be extra mad. Close?

                                        1. 2

                                          I have received a threatening letter from the city’s government in which they are warning me that I have to pay them a certain amount of money for the usage of the name ******** in the name of my Facebook page. If I don’t start paying them, they will start their Facebook teams and contact the court so that they could shut down my Facebook profile.

                                          All other Facebook pages that promote the city and have the noun ******** in its name got the identical threats.

                                          1. 6

                                            So it’s not a blanket ban on typing the name of the city, but we’re going to act like it is?

                                            Is this any different from the fact I might have trouble creating a Facebook page called Xerox, but yet the internet police aren’t going to come get me for writing this comment with the word Xerox in it?

                                        1. 2

                                          GG Flip is a Golang library which generates the Javascript sign flip library. I preferred Go because lack of generics seemed like a good design choice.

                                          satire tag?

                                          1. 2

                                            Spoils. av does not want to add that tag.

                                          1. 11

                                            So you want a starting paragraph that emphasizes:

                                            • Your ability to solve real, relevant projects with your skills.
                                            • Your ability to work independently.
                                            • Your ability to learn quickly.
                                            • Your relevant domain knowledge, if any.
                                            • Your ability to work with others.

                                            So this is how everyone comes up with the same tired list of unsubstantiated lies?

                                            1. 3

                                              My resume used the functional style, illustrating skills with a list of things I’ve done. As in, I demonstrated business value in there, showing what I can deliver for them in various ways with various methods. I used to tell people to do theirs that way. Quite a few got better jobs. The samples were too small and scattered to say that proves anything past “it can work,” though.

                                              You don’t have to lie. Although, you can leave off a detail or two to make it look better than it is in the minds of some people who assume too much. That can have negative repercussions down the road. So, I tell people just to get shit done and learn how to write that on paper in terms that show the business side as much as the tech.

                                            1. 9

                                              Counterpoints: Correctness is Easily Defined (maybe), Levels Mix Up, and People Issues Over Tech Issues

                                              (or “Brace Yourself Peer Review is Coming!”)

                                              My typical model for describing correctness is the Abstract State Machine, since ASMs are like Turing machines for structures. Really easy to learn for ordinary programmers compared to most formal models. Moore and Mealy state machines are the basics for this on the finite side. If modeling as such, then correctness is simply that your system produces the intended behavior in terms of external output and/or internal states/transitions upon given input(s). Yours are subsets of that at best. Definition 1 is the subset observing one or more state machines give incorrect output for input. Definition 2 is the subset identifying specific inputs and transitions that led to incorrect output. Definition 3 kind of jumps sideways to focus on arguments made about specifications rather than the specifications themselves. Both might be incorrect. Correctness still reduces to the same thing I could model each of your definitions with. Even modifying it to say “correct under reasonable operating assumptions” is the same, where “CPU, RAM, or HD aren’t faulty” becomes another input that must be true when CurrentState or Output are correct.
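
                                              To make that concrete, here’s a toy sketch of the state-machine view (my own illustration, not from any formal tool): correctness is just every (state, input) pair producing the intended next state and output.

                                              /* Mealy-machine view of correctness: for each (state, input) pair,
                                                 the implementation must produce the intended (next state, output).
                                                 Toy turnstile: locked/unlocked, inputs coin/push. */
                                              #include <assert.h>
                                              #include <stddef.h>
                                              #include <stdio.h>

                                              typedef enum { LOCKED, UNLOCKED } state_t;
                                              typedef enum { COIN, PUSH } input_t;
                                              typedef enum { NONE, UNLOCK, LOCK } output_t;

                                              /* The implementation under test. */
                                              static state_t step(state_t s, input_t in, output_t *out)
                                              {
                                                  if (s == LOCKED && in == COIN)   { *out = UNLOCK; return UNLOCKED; }
                                                  if (s == UNLOCKED && in == PUSH) { *out = LOCK;   return LOCKED;   }
                                                  *out = NONE;
                                                  return s;
                                              }

                                              int main(void)
                                              {
                                                  /* The spec: the intended transitions, as a table. */
                                                  struct { state_t s; input_t in; state_t next; output_t out; } spec[] = {
                                                      { LOCKED,   COIN, UNLOCKED, UNLOCK },
                                                      { LOCKED,   PUSH, LOCKED,   NONE   },
                                                      { UNLOCKED, COIN, UNLOCKED, NONE   },
                                                      { UNLOCKED, PUSH, LOCKED,   LOCK   },
                                                  };

                                                  /* Correctness here is exactly: every row of the spec holds. */
                                                  for (size_t i = 0; i < sizeof spec / sizeof spec[0]; i++) {
                                                      output_t out;
                                                      state_t next = step(spec[i].s, spec[i].in, &out);
                                                      assert(next == spec[i].next && out == spec[i].out);
                                                  }
                                                  puts("all transitions match the spec");
                                                  return 0;
                                              }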

                                              In the objective case, your argument that correctness is hard to define or has many definitions seems wrong. If applied to subjective definitions, it’s true where people come up with definitions narrow enough to miss important aspects of correctness. You do a good job illustrating several perspectives, such as runtime, code, and logical analysis, that can help or hurt depending on what aspect of correctness one wants to achieve. These levels might be too simplistic in the real world to teach correctness, though. Some examples follow.

                                              In Level 2: Code, we do consider some behaviors that cannot happen directly in code when writing reliable or secure programs. We might write code to get the algorithm correct first, then do an assessment of known issues outside the code that affect code, modify the code to include those solutions, and then we’re done. It still happens at the code level even if the knowledge originally came from a different level. A good example is software caches, which aren’t really about the semantics of the code, but we do modify the code we write to reduce misses. It’s a requirement/design detail that, when applicable to code, causes a transformation of that code. Eventually, the coder might automatically write code that way as part of coding style (esp with caches). Other examples include running out of memory at random points, bit flips, or side channels in CPUs. These have code styles to mitigate them that people can use without even understanding the logic behind them, which can be enforced by code analysis tools.

                                              In Level 3: Design/Logic, it assumes logic specs and implementation are different things. The most successful use of formal methods in imperative programs, by the numbers, is contracts and assertions, with the most practical subset of formal methods being Design-by-Contract. These tightly integrate the logic dictating correctness with the source code, where they’re both created and considered at Level 2. In some implementations, the assertions become runtime checks that are activated in Level 1. So, assertion-oriented methods operate at Levels 2-3 simultaneously when writing specs/code, with 1-3 happening together when testing specs/code.
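
                                              For illustration, a minimal sketch of contracts-as-assertions in plain C (standing in for a real DbC tool like Eiffel or SPARK; my own toy example): the Level 3 logic sits right next to the Level 2 code and becomes Level 1 runtime checks.

                                              /* Design-by-Contract sketch: pre/postconditions written with the
                                                 code (Levels 2-3 together), checked at runtime (Level 1). */
                                              #include <assert.h>
                                              #include <stddef.h>

                                              /* Contract: requires a valid array with len > 0;
                                                 ensures the result is >= every element. */
                                              static int max_elem(const int *a, int len)
                                              {
                                                  assert(a != NULL && len > 0);      /* precondition */

                                                  int m = a[0];
                                                  for (int i = 1; i < len; i++)
                                                      if (a[i] > m)
                                                          m = a[i];

                                                  for (int i = 0; i < len; i++)      /* postcondition */
                                                      assert(m >= a[i]);
                                                  return m;
                                              }

                                              int main(void)
                                              {
                                                  int v[] = { 3, 9, 2 };
                                                  return max_elem(v, 3) == 9 ? 0 : 1;
                                              }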

                                              “That can only be seen when doing formal verification, where a program’s properties and assumptions are written just as concretely as its source code.”

                                              Or Design-by-Contract with a verification condition generator. The more specs and VCs they see, the more complex the behavior might be. These things could possibly be converted to English statements about the software, too. There’s ongoing work in natural-language provers. You’re right that the common case is that the specs are implicit in the code or in developers’ heads, “scattered” about in an inconsistent way. That’s a strong argument for explicit specs.

                                              “Level 3 is all about how modular are the interactions between the components of your software”

                                              They’re going to tell you OOP or FP plus dynamic languages solve that. It’s why Smalltalk and LISP systems are so malleable. They also supported live debugging and updates. Many long-running systems achieve an acceptable level of failures with English specs plus memory-safe, dynamic, maintainable code. Not a formal spec in sight. Lots of half-assed versions of the aforementioned concept in mainstream languages with bolted-on abstractions, middleware, clusters, and so on. You might want to think on this more to determine how to describe why your solution is superior to that for practical reasons. If it even is, given that whether static, verified code or safe, easy-to-modify, dynamic code is better in the general case for “acceptable correctness” is an ongoing debate among developers in CompSci, industry, etc.

                                              “At Level 2, this was totally fine, since freed memory in DOS was valid until the next malloc, and so the program worked. At Level 3, this was a defect, “

                                              Another reason I don’t like the levels for these subjects is that we already have the SDLC models and other stuff like that, which have ways to specify requirements, design, and code with traceability. There are piles of methodologies and tools for doing this. What this section says is that, ignoring best practice, a developer only considers the code rather than the hardware or OS considerations that will be introduced in “requirements” or “design” documents saying the code has to fit within that model. Then, you illustrate the problem that ignoring everything but code creates. I know there are plenty of developers out there doing it, but it doesn’t mean we need new levels or other models for the basics. Just illustrate, with good examples like yours, why we have those basic concepts there for the software development lifecycle. Note that I’m not pushing the ordering of Waterfall so much as the categories of development.

                                              “The HTTP “referer” is forever misspelled, and that SimCity special-casing code is still there 30 years later.”

                                              You’re treating it all as one thing. The SimCity software might have been cheap to the people who built it. The company still makes games. So, I’m guessing SimCity made them a lot of money, too. That software was totally successful as a product at its goal “for that group at that point in time.” It’s true the software might become a problem for another group down the line needing it to do a different thing, due to technical concerns in the code. However, it was at acceptable quality and successful in its original context, even if it isn’t in another context later. What you’re seeing here is the effect of economic/human factors dominating technical ones when deciding if specific methods are good methods (esp good enough).

                                              We should always consider such things in our exhortations because those listening in FOSS or industry sure as hell will. Many groups’ model of operation means they don’t need to care about long-term quality. FOSS folks or CompSci students (esp w/ non-reproducible research) might not need to care about even current quality. Others don’t need high runtime quality so much as “it works often enough and is easy to fix.” Others add a fast rate of change to that. Others, esp bigger companies, will talk developer costs for specific lines of code on top of that. The goals and metrics vary across groups. If you want to convince them, you need to target your message to those goals and metrics. If you don’t want to relent, you’ll have to target groups that are using your goals and metrics, such as low defect rate, higher flexibility for change, increased predictability (aka lower liability), and so on. That’s a smaller segment.

                                              Like you, I spent a lot of time talking about objective metrics focused on the tech when the groups’ intended goals, failure tolerances, and subjective metrics are where the ability to influence is at. I wasted a lot of time. Now, I focus on teaching them how defects can be found faster with little extra work (esp at interfaces), how software can be changed faster, how it can be diagnosed faster, how it can be fixed faster, and how rollback can work well and/or fast. And with what cost, level of support, etc. This is their language.

                                              “I now have two years experience teaching engineers a better understanding of how to avoid complexity, improve encapsulation, and make code future-proof.”

                                              Glad you’ve been doing that. Keep at it. All I’m doing here is showing you how to improve the message.

                                              Part Two

                                              “The three levels deal with different views of a program: executions, code, and specifications. Each corresponds to its own kind of reasoning. Let’s bring in the math!”

                                              Now this is true, and it’s where your use of levels shines, with good examples like overflow. However, the analysis seems to miss that these are common problems at the code level where simple checks can take care of them. Even your malloc example might be handled in a library implementation where some things the spec tracks are tracked by compile-time analyses or a run-time library. Such non-formal methods are typical of memory-safe languages, embedded systems catching integer overflows in code, watchdog timers to dodge termination proofs, and so on. The overhead they have is acceptable to many or maybe most, given the dominance of such methods in industry and FOSS. Others who want less overhead or more determinism might like the techniques you bring to the table for Level 3. I already refuted separating them too much with examples such as Design-by-Contract, which can do all of it at once with lots of automation (eg property-based testing or fuzzing plus runtime checks). I agree prior work shows many tools work best focusing on just one level, though.
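
                                              As a small example of such a code-level check (a sketch assuming GCC/Clang’s __builtin_add_overflow; other compilers spell it differently):

                                              /* Code-level check for a "Level 3" hazard: integer overflow
                                                 caught where it happens, via a compiler builtin. */
                                              #include <stdio.h>
                                              #include <limits.h>

                                              int main(void)
                                              {
                                                  int a = INT_MAX, b = 1, sum;

                                                  /* __builtin_add_overflow returns true if a + b wrapped. */
                                                  if (__builtin_add_overflow(a, b, &sum))
                                                      puts("overflow caught at runtime");
                                                  else
                                                      printf("sum = %d\n", sum);
                                                  return 0;
                                              }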

                                              I think you prefer clean, isolated, and static ways of analyzing or looking at system correctness when the real world is just messier in terms of solutions available for one or multiple levels. It’s a lot messier. Embrace the mess! Like a consultant, factor it into your analyses so your recommendations show the various options at each level for different classes of problem with the trade-offs they take: time/money saved now or later via less debugging or fixes earlier in the SDLC, extra time/money for specific verifications (these two matter most where development pace matters most), compile-time vs runtime analysis, whether runtime checks are needed and at what cost, and support in their tooling of choice. It’s more work for people like us to do this. However, as some see value and more uptake happens, network effects might kick in where they do some of the evangelizing and tool work for us. That’s on top of the potential niche market for specific solutions that work well, like we see in DO-178B/C for static analyzers and so on.

                                              1. 11

                                                You should make a blog or something and keep all the longer form writing there. Then you could reference it easily on forums.

                                                1. 11

                                                  I’m actually closer to having a blog than before. It’s probably going to be up this year after a decade of procrastination on that.

                                                  1. 2

                                                    That would be a blog I would subscribe to before you even publish anything. Please do. :)

                                                    1. 1

                                                      What has been your primary hindrance, if you don’t mind my asking? There are (and have been) many free blog hosting platforms, that make it quite trivial to start writing. Also, if you desired to host your own, services like github pages and netlify offer very compelling free tiers.

                                                      1. 1

                                                        There are (and have been) many free blog hosting platforms

                                                        Is there a good one?

                                                        1. 1

                                                          I don’t mind you asking, but I’d rather not say. You could just say my background, current circumstances, and procrastination habit all added up to holding off, since I already had places to share ideas. On that latter point, I also found it easier to help others by going where the crowds already were than trying to do lots of self-promotion. I was drafting the heavy hitters instead of building a turbo-charged 18-wheeler of my own. Whether that was a good or bad idea, who knows.

                                                          I am switching gears on quite a few things this year. Gonna try a piece or just a few at a time to avoid overwhelming myself.

                                                    2. 3

                                                      Hi, OP here.

                                                      Thanks for the long response. The typography of this comments section is really not meant for text of this length. As such, I’m having a lot of trouble reading this, for multiple reasons. I can’t tell how you disagree with me. In fact, I can’t even figure out if you disagree with me.

                                                      1. 6

                                                        Okay, I’ve read it through again. Here’s my attempt at a summary of what you wrote:

                                                        • Automata are a good way of writing specifications. Levels 1 and 2 have analogues in the automata-based style of specification.
                                                        • Something about me saying that correctness has many definitions (which is only kind of true)
                                                        • Changes to code are informed by the design
                                                        • Partial specifications may be embedded in the code via assertions
                                                        • Something about OOP. Who is the “they” that’s telling me it’s going to solve modularity? Is this responding to something I said? (Sidenote: Objects are very misunderstood and underrated among the intelligentsia. I highly recommend reading William Cook.) Also, something about “my solution.” I don’t know what “my solution” is supposed to be; I’m just providing a nomenclature.
                                                        • Something about how developers hopefully don’t think about nothing but the code when programming.
                                                        • For some kinds of software, most of the value to the producer occurs in a short time horizon.
                                                        • Different kinds of software have different value/time curves.
                                                        • Bug catching tools exist.
                                                        • The real world is messy.

                                                        None of this seems at all opposed to anything I wrote. Am I missing a key point?

                                                        There is one minor point of difference though: I’m not sure that Level 3 has an analogue in some variations of automata-based specification. This is because an automata-based specification is a very whole-system view of the world. There’s not exactly a notion of “This implementation automaton is a correct refinement of that specification automaton, but for the wrong reasons” unless you have a hierarchical construction of both. Hence, if you try to think of your system as implementing some kind of transition system, you may not realize if you have poorly-defined component boundaries.

                                                        I expect this phenomenon is related to why LTL-based program synthesis performs so poorly.

                                                        1. 4

                                                          Sidenote: Objects are very misunderstood and underrated among the intelligentsia. I highly recommend reading William Cook.

                                                      Sidenote to your sidenote: while we’ve mostly accepted that there are several divergent strands of Functional Programming (ML, Haskell, Lisp, APL), we haven’t really done the same with OOP. That’s why people commonly conflate OOP with Java ObjectFactoryFactory, or say “that’s really not OOP, look at Smalltalk.” While Java is a Frankenstein of different strands, Smalltalk isn’t the only one: I’d argue Simula/Eiffel had a very different take on OOP, and to my understanding so did CLU. But a lot of CLU’s ideas have gone mainstream and Eiffel’s ideas have gone niche, so we’re left with C++ and Smalltalk as the main ‘types’ of OOP.

                                                          1. 5

                                                        There are old programming concepts called objects, but they don’t necessarily correspond to meaningful constructs. Cook and Cardelli did discover something that does capture what’s unique about several strands of OOP, and also showed why CLU’s ideas are actually something else. I consider this a strong reason to treat their work as the defining work on what an “object” is.

                                                            This is largely summarized in this essay, one of my all-time favorites: http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf
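
                                                        To sketch Cook’s notion in plain C (a toy of my own with hypothetical names): the object is nothing but a record of procedures, so callers can never inspect the representation behind the interface.

                                                        /* Objects in Cook's sense: a value used only through its
                                                           procedures, with the representation hidden behind them. */
                                                        #include <stdio.h>
                                                        #include <stdlib.h>

                                                        /* The interface: operations only, no visible state. */
                                                        typedef struct counter {
                                                            int  (*value)(struct counter *self);
                                                            void (*inc)(struct counter *self);
                                                        } counter;

                                                        /* One hidden implementation; another could represent the
                                                           state differently and satisfy the same interface. */
                                                        typedef struct { counter ops; int n; } counter_impl;

                                                        static int  impl_value(counter *self) { return ((counter_impl *)self)->n; }
                                                        static void impl_inc(counter *self)   { ((counter_impl *)self)->n++; }

                                                        static counter *make_counter(void)
                                                        {
                                                            counter_impl *c = malloc(sizeof *c);
                                                            c->ops = (counter){ impl_value, impl_inc };
                                                            c->n = 0;
                                                            return &c->ops;  /* callers only ever see the interface */
                                                        }

                                                        int main(void)
                                                        {
                                                            counter *c = make_counter();
                                                            c->inc(c);
                                                            c->inc(c);
                                                            printf("%d\n", c->value(c)); /* prints 2; n stays hidden */
                                                            free(c);
                                                            return 0;
                                                        }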

                                                            1. 1

                                                              By the numbers, I’d consider it whatever C++, Java, and C# did since they had the most OOP programmers in industry. Then, there’s the scripting languages adding their take. So, my hypothesis about mainstream definition is that what most people think OOP is will be in one of those camps or a blend of them.

                                                          2. 3

                                                            I’ll take the main comment paragraph by paragraph showing what I did for context:

                                                            1. I disagreed with your three models for correctness in favor of one that says “Do states, transitions, and outputs happen as intended on specific inputs?” All yours fit into that. There’s also formal models for it deployed in high-assurance CompSci and industry.

                                                            2. I agreed your levels represent different vantage points people might use. I also agreed some tools work best focused only on them.

                                                      3. I disagreed you need these to explain why code alone isn’t sufficient. The standard software development lifecycle (even Waterfall) shows places where problems show up. There are lots of articles online to pull examples from, with your DOS thing being one. Best to just use what’s widely used and well understood for that.

                                                      4. I disagreed that stuff in the three levels stayed separate during development. I gave specific examples of where stuff at one level was solved at another without mentally leaving that other level. I also showed methods that do multiple ones simultaneously, with a strong code focus.

                                                      5. I disagreed that strong correctness of software (a) mattered for many groups or (b) mattered in the long term. You have to examine the priorities of the people behind products or projects to know if it matters or how much. Then, tie your recommendations to their priorities instead of yours. Or ours, if it’s software quality improving. :)

                                                      6. I disagreed that strong, logical methods were needed for the long term when specific examples, from Smalltalk/LISP to industry’s knock-offs, regularly produce and maintain software that meets organizations’ goals. Since they’re the status quo, you must show both those kinds of solutions and yours side by side, arguing why your costlier or stranger (to them) methods are better.

                                                      7. I agreed some would have your view in niche segments of FOSS and industry that are willing to sacrifice some upfront cost and time-to-market to increase quality for higher levels of (insert good qualities here). You will do best marketing to them, again tying the message to their interests and metrics. I gave the DO-178B (now DO-178C) market as an example where they invest in better tooling or buy better-made software due to regulations. Ada/SPARK, Astree Analyzer, formal methods, and so on have more uptake in safety-critical embedded than anywhere else except maybe smartcards.

                                                            So there’s a summary that will hopefully be more clear by itself or if you read the original post with it in mind.

                                                            1. 5

                                                              Thanks. That I can read.

                                                              First, please tell me where I say that strong correctness is important or that we should be using heavyweight formal methods in today’s industry. I don’t believe either of those statements except in a small number of high-assurance cases, but you’re not the first to think that I do. Also, what is the “your solution” that you speak of? Points 5-7 seem to all be arguing against something I don’t believe. (As an aside, where are you getting your claims about tool adoption? My understanding from talking to their competitors is that Astree has no market share.)

                                                              Point 4: Obviously, you need to think about code (level 3), write code (level 2), and run code (level 1) concurrently with each other. Thinking about these three levels of abstraction absolutely do happen concurrently. What should set off alarm bells is writing code that special-cases a certain input (level 1 influencing level 2), and shaping a design around a poorly-thought-out implementation (level 2 influencing level 3). Obviously, your value/time curve will determine when exceptions are worth making.

                                                        Point 3: Ummm… I don’t know what to say. Obviously, there are a lot of people with no training in formal methods who are nonetheless very good software designers. I do believe that the concepts of PL and formal methods, in various incarnations, whether logic-based or automata-based, are the correct way to think about many software design concepts that were previously only vaguely defined. These all provide a notion that there is a layer of reasoning which shapes the code, but is invisible when looking only at the code. This is what I call Level 3, or “the hidden layer of logic,” closely related to what Don Batory calls “Dark Knowledge.” (Someone on HN just linked me to a Peter Naur article which might discuss the same concept.) Probably the clearest example of this concept is ghost code, which many static analysis tools rely on. I also believe that understanding the hidden layer of logic is a shortcut to the higher levels of software design skill, as I’m proving in my coaching practice.

                                                              1. 3

                                                                I could’ve misread the intent of your post. I thought you were promoting that people consider, maybe adopt, specific ideas. If you’re just brainstorming, then you could read me as liking or shooting down some of the brainstormed ideas. It all depends on what your goal is. I’ll answer some of this.

                                                          First, here’s one thing: your post looks like an evangelistic push of specific solutions, like applying your concepts of Levels 1-3 or software engineering practices in general. Here are some examples of why I thought that:

                                                                “No, our goal is to be able to continue to deliver working software far into the future.”

                                                                “People sometimes tell me how software is easy and you can just… Hogwash… This is why it’s important to get programs right at Level 3, even if it passes all your testing and meets external requirements.”

                                                                “This is why it’s important to make your APIs conform as strictly to the spec as possible”

                                                                “the most important parts of our craft deal not with code, but with the logic underneath. When you learn to see the reasoning behind your system as plainly as the code, then you have achieved software enlightenment.”

                                                                “always think of the components of your… So much becomes clearer when you do, for logic is the language of software design.”

                                                          Your replies to me talk like it’s idle categorization or brainstorming, but those quotes sound like pushing specific practices to some audience of programmers to solve their problems. I assume industrial and FOSS programmers as the default audience since they account for most working, practical software out there. If you’re (a) talking to real-world programmers and (b) pushing specific advice, then I responded to that: in terms of what their requirements are; what methods they use (eg design review, OOP, patches, runtime checks); told you to compare what you were pushing to good versions of what they did; and, if your methods still sound better for them, to modify your pushes to argue stuff more meaningful to them, esp project managers or FOSS volunteers. That’s what I meant by your “solutions” or “recommendations” with my counterpoints on suggested modifications.

                                                          I should’ve added that this quote was 100% in agreement with my recommendations:

                                                                “I always recommend trying to think in pure concepts, and then translate that into the programming language, in the same way that database designers write ER diagrams before translating them into tables. So whether you’re discussing coupling or security, always think of the components of your software in terms of the interface, its assumptions and guarantees”

                                                                Back to other points.

                                                                “Obviously, you need to think about code (level 3), write code (level 2), and run code (level 1) concurrently with each other. “

                                                          I’m glad you intended for them to know that. It would be obvious if your audience already knew the benefits of Level 1-3 activities, but then you wouldn’t have to explain the basics. The problem was that your audience (as I perceived them!) was composed of people you were convincing to use more than coding and a debugger. Given the reactions Design-by-Contract gets, I know it’s not going to be obvious to many of them that logic and code can work together in a way that’s also checked at runtime. Maybe a similar, future write-up (if you do them) should mention that the levels can interleave, or that some methods do several at once.

                                                                “These all provide a notion there is a layer of reasoning which shapes the code, but is invisible when looking only at the code.”

                                                          It’s a neat way of describing the other factors. The industrial developers do have methods for that, though, which you might leverage to help them understand. Big attention goes to modeling languages, with UML probably the most widespread. In safety-critical, Esterel SCADE is a good example. Anyway, the industrial side has modeling techniques and diagrams for various parts of the lifecycle that show relationships, constraints, and so on. Adoption varies considerably, just like with strong design in general. Many will have seen that stuff, though. Starting with something like it might increase how fast they understand what you’re saying. You can then show how formal logic has benefits of simplicity, consistency, and tooling. And I tend to especially highlight the automated analyses that can happen, since resource-constrained groups love solutions that involve a button push. :)

                                                                “I also believe that understanding the hidden layer of logic is a shortcut to the higher levels of software design skill, as I’m proving in my coaching practice.”

                                                                Definitely! In that case, you also have people who are trying to improve rather than randomly reading your blog. It’s a different audience willing to invest time into it. It’s definitely beneficial to teach them about the implicit stuff so they see just what kind of complexity they’re dealing with. I fully support that. An example I give to people learning formal verification is this report doing a landing system in Event-B. Specifically, I tell them to note how the small number of requirement and design details balloons into all the stuff in section C onward. I tell them, “Although some is due to the logic, much of that is just you explicitly seeing the huge pile of details you have to keep track of throughout your system to make it correct in all situations. It’s always there: you just don’t see it without the formal specs and explicit over implicit modeling.” That always gets a reaction of some kind. Heck, for me, it made me want to reconsider both not using formal methods (what am I ignoring?) and using formal methods (wow, that’s a lot of work!). :)

                                                                1. 3

                                                                  Oh boy, this is getting unwieldy. Replying in separate comments.

                                                                  1. 3

                                                                    I view this post as saying “Here’s a good way to think about software.” This does make it easier to come up with a number of design claims, such as “conforming strictly to an API reduces the complexity of future clients.” Obviously, everything has tradeoffs, and needs to be tailored to your situation. For the better engineers I’ve taught, the reaction to a good fraction of this material is “It clarified something I already had a vague intuition for,” and that is also the aim of this post.

                                                                    This lens does lead to a number of recommendations. In the follow-up post, currently #3 on Hacker News, I give a counterintuitive example of reducing coupling. This is, of course, only useful to those who desire to reduce coupling in their code.

                                                                    1. 3

                                                                      I’m compressing most of the second half of your post into “For people already familiar with certain design-capture methodologies, you can help explain your ideas by comparing them to what they’re familiar with.” Although I once came close to taking an internship with the SEI, I can’t say I’ve had the (mis)fortune of working anywhere that used one of these methodologies (startups gotta iterate, yo), and I’ve read 0 books on UML or any of them.

                                                                      So, if I’m going to go this route, do you have anything in mind that, say, more than 10% of Hacker News readers would likely be familiar with? Otherwise, referencing it won’t fill the desired purpose.

                                                                      Thanks for sharing the Event-B report, BTW.

                                                          1. 4

                                                            As far as I can tell, the speedup with learned indexes had little to do with the learning, and a good deal to do with the ease of implementing them in parallel. With massive parallelism comes the ability to speed up the algorithm by using GPUs.

                                                            Classical data structures with parallel lookups should work just as well as (if not far better than) learned indexes in this case.
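
                                                            For what it’s worth, here’s roughly what a learned index does at lookup time as I understand the idea: a model predicts a position in a sorted array, and a bounded local search fixes up the prediction. A minimal sketch in C, assuming a pre-trained linear model; the struct and all the names are mine, not the paper’s.

                                                                #include <stddef.h>

                                                                /* Illustrative model: position ~ slope*key + intercept, plus the
                                                                 * worst-case prediction error observed during training. */
                                                                struct learned_index {
                                                                    double slope, intercept;
                                                                    size_t max_err;
                                                                };

                                                                /* Look up 'key' in the sorted array 'keys' of length 'n'. */
                                                                ptrdiff_t li_lookup(const struct learned_index *li,
                                                                                    const long *keys, size_t n, long key)
                                                                {
                                                                    if (n == 0)
                                                                        return -1;

                                                                    double guess = li->slope * (double)key + li->intercept;
                                                                    size_t pos = guess < 0.0 ? 0 : (size_t)guess;
                                                                    if (pos >= n)
                                                                        pos = n - 1;

                                                                    /* Scan the error window around the prediction; a real version
                                                                     * would binary-search this window instead. */
                                                                    size_t lo = pos > li->max_err ? pos - li->max_err : 0;
                                                                    size_t hi = pos + li->max_err;
                                                                    if (hi >= n)
                                                                        hi = n - 1;
                                                                    for (size_t i = lo; i <= hi; i++)
                                                                        if (keys[i] == key)
                                                                            return (ptrdiff_t)i;
                                                                    return -1;  /* not found */
                                                                }

                                                            Note the last step is the same memory-bound search a classical structure does, just over a smaller window, which is why the model has to pay for itself.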

                                                            1. 6

                                                              The execution time of 50-ish nanoseconds is already in the range where memory latency may be dominant. If the mythical TPUs that execute a sufficiently large NN in a single cycle were glued on CPUs, perhaps the learned indexes could shave off a few cycles. AIUI using GPUs as they are now would make little sense because of massive latency.

                                                              Now could we have useful hash functions that execute in a single cycle?

                                                              1. 3

                                                                There are already many fast hash functions (clhash and falkhash), some of which take advantage of specialized hardware instructions.

                                                                It doesn’t seem worth it to add dedicated instructions when there are existing algorithms that hash at a fraction of a cycle per byte.
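
                                                                To make “specialized hardware instructions” concrete: x86 SSE4.2 exposes a CRC32C instruction of the kind these fast hashes exploit. A minimal sketch (this is just the instruction, not clhash or falkhash themselves); compile with -msse4.2.

                                                                    #include <nmmintrin.h>
                                                                    #include <stdint.h>
                                                                    #include <string.h>

                                                                    uint32_t crc32c_hash(const void *buf, size_t len)
                                                                    {
                                                                        const unsigned char *p = buf;
                                                                        uint64_t crc = 0xFFFFFFFFu;

                                                                        /* Each _mm_crc32_u64 consumes 8 bytes; throughput on recent
                                                                         * cores is well under a cycle per byte. */
                                                                        while (len >= 8) {
                                                                            uint64_t chunk;
                                                                            memcpy(&chunk, p, 8);
                                                                            crc = _mm_crc32_u64(crc, chunk);
                                                                            p += 8; len -= 8;
                                                                        }
                                                                        while (len--)
                                                                            crc = _mm_crc32_u8((uint32_t)crc, *p++);

                                                                        return (uint32_t)crc ^ 0xFFFFFFFFu;
                                                                    }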

                                                                1. 2

                                                                  The things I’ve seen before are usually measured in cycles per byte, with a low number of cycles. So, what do you mean by execute in a single cycle? Are you saying you want a byte per cycle, a specific number of bytes done in a cycle on a CPU, or that on a parallelized machine like a GPU? And how good does that hash function have to be in terms of collisions? Can it be like the textbook examples people use in day-to-day programming, or as unique as the cryptographic kinds provide?

                                                                  1. 2

                                                                    I wasn’t looking for an actual answer to that. The particular numbers selected for the examples are somewhat arbitrary.

                                                                    My line of thought was that if we’re doing hardware acceleration, then maybe instead of throwing a big neural net and a 120-teraflops TPU at the problem [and still being constrained by memory latency], we could use cuckoo hashes (or whatever) and accelerate some other link in the chain with far less hardware. A fast and secure hardware-accelerated hash function could actually prove useful, but that’s just one possibility.

                                                                    A fancy version could take a base address and a key and compute some number of hashes, fetch all the resulting addresses from memory, and then compare all the keys in parallel. Call it crazy for a CPU instruction. Such a massive TPU is pretty crazy too, and I’m not yet 100% convinced we’re going to get them on the CPU die :-)
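
                                                                    For reference, the software version of that fancy instruction would look something like this two-way cuckoo lookup; a minimal sketch, with made-up mixing constants and no insertion path.

                                                                        #include <stdbool.h>
                                                                        #include <stdint.h>

                                                                        #define NBUCKETS (1u << 20)   /* power of two for cheap masking */

                                                                        struct slot { uint64_t key; uint64_t val; bool used; };

                                                                        static uint64_t mix(uint64_t x, uint64_t seed)
                                                                        {
                                                                            x ^= seed;
                                                                            x *= 0xff51afd7ed558ccdULL;  /* murmur-style finalizer constant */
                                                                            x ^= x >> 33;
                                                                            return x;
                                                                        }

                                                                        bool cuckoo_lookup(const struct slot *table, uint64_t key, uint64_t *val)
                                                                        {
                                                                            uint64_t h1 = mix(key, 0x9e3779b97f4a7c15ULL) & (NBUCKETS - 1);
                                                                            uint64_t h2 = mix(key, 0xc2b2ae3d27d4eb4fULL) & (NBUCKETS - 1);

                                                                            /* The two loads are independent, which is what would let a
                                                                             * hardware version fetch and compare both in parallel. */
                                                                            if (table[h1].used && table[h1].key == key) { *val = table[h1].val; return true; }
                                                                            if (table[h2].used && table[h2].key == key) { *val = table[h2].val; return true; }
                                                                            return false;
                                                                        }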

                                                              1. 4

                                                                I understand what the author is getting at, but downhill skateboarding is a strange example because anybody can buy a real life skateboard, start practicing, and work their way up to whatever skill level they want to achieve. A person doesn’t have to fly down steep slopes at 100 km/h to have fun skateboarding.

                                                                I mountain bike and ski, and can say with certainty there’s no way to capture the experience in a video game. A video game can be enjoyable in its own way, but it’s not even close to actually participating.

                                                                IMO games like Starcraft and Diablo make better examples because there’s really no real-life equivalent to those; they’re all imagination.

                                                                1. 1

                                                                  anybody can buy a real life skateboard, start practicing, and work their way up to whatever skill level they want

                                                                  Assuming everyone has money, nobody is disabled, everybody has an appropriate venue within their reach (including acceptable weather conditions), and enough time to practice. Hmm

                                                                  There are lots of real-life sports that just aren’t very accessible for everybody. Downhill skateboarding isn’t the worst offender, but I’d like to see the disabled kid do it on ice and snow in winter on the flat-as-a-pancake public roads of his rural hometown.

                                                                1. 4

                                                                  Can we please have a tag for posts to sites that require you to have signed up before they can be read?

                                                                  1. 2

                                                                    AIUI such stories don’t belong here anyway.

                                                                    In this case, you can click “not now” or even use browser devtools to delete the annoying popup.

                                                                  1. 10

                                                                      Matt Dillon hardly needs an introduction; however, I’d just like to point out that he’s one of the few people I really do trust to have actual knowledge of these issues, as a few years ago it was he who found an obscure processor bug that resulted in an erratum from the vendor (AMD, in 2012).

                                                                    He was also involved in providing a public analysis of the Intel Core bugs back in 2007:

                                                                    1. 2

                                                                      I’d really appreciate a tl;dw.

                                                                      1. 8

                                                                        I can’t understand why tech interviews involve implementing some sort of tree, regular expression engine, sorting algorithm or parser

                                                                          I don’t have the history to back it up, but I strongly suspect tree and linked-list questions are a holdover from a time when almost everything was done in C, and you had to know how to hand-roll data structures to get anything done. If that’s the case, then it would have been a “do you know how to use the language” question, not a “can you come up with new algorithms on the spot” question.

                                                                        1. 10

                                                                            Specifically, programming in C requires understanding pointers, and the difference between an object and a pointer to an object, and a pointer to a pointer. These distinctions are essential to solving any problem in C. Basic data structures are a simple, self-contained problem that involves pointers. The linked list question is not about linked lists. It’s about understanding why your append function takes a node star star.
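
                                                                            To make that concrete, here’s the classic form of the exercise; a minimal sketch, nothing project-specific about it.

                                                                                #include <stdlib.h>

                                                                                struct node {
                                                                                    int value;
                                                                                    struct node *next;
                                                                                };

                                                                                /* 'head' is a node star star so that writing through it works
                                                                                 * even when the list is empty (i.e., *head == NULL). */
                                                                                int append(struct node **head, int value)
                                                                                {
                                                                                    struct node *n = malloc(sizeof *n);
                                                                                    if (!n)
                                                                                        return -1;
                                                                                    n->value = value;
                                                                                    n->next = NULL;

                                                                                    while (*head)              /* walk to the terminating NULL... */
                                                                                        head = &(*head)->next;
                                                                                    *head = n;                 /* ...and link the new node in place */
                                                                                    return 0;
                                                                                }

                                                                            A candidate who writes append(struct node *head, ...) and then wonders why the empty-list case never works has hit exactly the misunderstanding the question probes.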

                                                                          1. 2

                                                                              Basic data structures are a simple, self-contained problem

                                                                            (I agree with your whole assessment, but find this part particularly compelling.)

                                                                            If you interview for a position that involves programming, you are ultimately going to be forced to solve problems—sometimes brand new problems that have not been solved before. So how does one assess a person’s ability to do that? You can’t give someone a problem that’s never been solved before…

                                                                              I don’t know. The thing I find most awful about data structures questions is that the distribution of those with knowledge is normal, so solving one is either impossible, derived from scratch, or memorized/almost rehearsed because the candidate just knows it.

                                                                            The best questions I’ve had have been generally open ended. There might be a data structure that solves it well that the interviewer has in mind, but there are also 10 other ways to solve the problem, with different levels of sophistication, that would probably be good, in practice, at least until hockey stick growth hits. The best interviewers are open minded, and good enough on their feet to adapt their understanding of the problem to a successful understanding of a potential solution that the candidate is crafting.

                                                                              Maybe the fact that algorithms and data structures have but one answer is the actual drawback… hmmm.

                                                                            1. 3

                                                                                WRT the normal distribution, I think modern interviewing has forgotten that such questions aren’t supposed to be textbook tests and has drifted away from reasonable questions. Even if you’ve never heard of a binary tree, I can explain the concept in 30 seconds such that you should be able to implement search and insert. (Rebalance may be harder.) I can’t argue it won’t be easier if you’ve done it before, but it should never be impossible.
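
                                                                                For the record, here’s about how much code the 30-second explanation has to produce; a sketch with no balancing, and names of my own choosing.

                                                                                    #include <stdlib.h>

                                                                                    struct tnode {
                                                                                        int key;
                                                                                        struct tnode *left, *right;
                                                                                    };

                                                                                    struct tnode *bst_search(struct tnode *root, int key)
                                                                                    {
                                                                                        while (root && root->key != key)
                                                                                            root = key < root->key ? root->left : root->right;
                                                                                        return root;  /* NULL if absent */
                                                                                    }

                                                                                    /* Returns 1 if inserted, 0 if already present, -1 on allocation failure. */
                                                                                    int bst_insert(struct tnode **root, int key)
                                                                                    {
                                                                                        while (*root) {
                                                                                            if (key == (*root)->key)
                                                                                                return 0;
                                                                                            root = key < (*root)->key ? &(*root)->left : &(*root)->right;
                                                                                        }
                                                                                        *root = calloc(1, sizeof **root);  /* children start out NULL */
                                                                                        if (!*root)
                                                                                            return -1;
                                                                                        (*root)->key = key;
                                                                                        return 1;
                                                                                    }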

                                                                              1. 4

                                                                                That’s probably true. But the concept isn’t useful without intuition about how to apply it, and I think that’s part of the problem, too. Often, criticism of these questions is that “libraries already have a binary tree, and I’ll just use that.” I think it’s likely that these types of interview questions poorly proxy the question of: “does this person know when to use a binary tree? They must if they know how to implement one!”

                                                                            2. 1

                                                                              They are asked such questions to differentiate between a flood of new programmers, each without experience. Such aptitude-type questions can be automated as a way to cull the majority of applicants.

                                                                              Sad that they are being applied across the board regardless of experience, even though nearly all experienced programmers have never had to actually implement a CS/textbook algorithm, preferring instead to use the tried, tested, and optimized implementation in a language’s base class library, or those readily available elsewhere.

                                                                              1. 1

                                                                                But trivia about algorithms and data structures says practically nothing about experience.

                                                                                EDIT: Nevermind, I just read your post again.

                                                                                Still, I feel like focusing on more realistic problems could be a much better predictor of aptitude.

                                                                          1. 6

                                                                            So types’ complexity is related to their size. An interface using chars is simpler than one using ints. Using ints is simpler than unsigned ints because signed is the default if you do not explicitly specify signedness (except in the case of char).

                                                                            Structs with pointers to structs of the same type double the struct’s complexity. So if you have a big struct and you need to put it in a linked list, well, you’d better create another struct that points to your big struct. To avoid doubling the complexity! Alternatively, you could just sneak a pointer-to-char into your big struct, because that’s simple. Or maybe pointer-to-void?
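
                                                                            To spell out the two layouts in question (illustrative types only): by the article’s metric the self-referential version “doubles” the big struct’s complexity, while the wrapper dodges the penalty despite describing the same list.

                                                                                struct big_struct_intrusive {
                                                                                    char payload[4096];
                                                                                    struct big_struct_intrusive *next;  /* self-pointer: penalized */
                                                                                };

                                                                                struct big_struct {
                                                                                    char payload[4096];
                                                                                };

                                                                                struct list_node {
                                                                                    struct big_struct *item;  /* indirection: cheap, per the metric */
                                                                                    struct list_node *next;
                                                                                };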

                                                                            What else. Use more typedefs, because they’re good and halve complexity. Meanwhile, object-like macros are as good as typedefs, so use more macros. Function-like macros are better than functions because they have no return type.

                                                                            EDIT: Typedefs and macros are “good” because they’re abstractions. But enums are just integers so they aren’t any better than using plain ints. The logic.

                                                                            I think this is absolute bullcrap. Also, I don’t see any science (compsci or otherwise) here. These definitions are pulled out of thin air and poorly motivated.

                                                                            This complexity alone offers developers hints to which libraries should be studied to have potential efficiency gains. It can also be used to keep code complexity to a maintainable level.

                                                                            What a conclusion. Supported by no evidence.

                                                                            EDIT2: There’s hardly anything on OpenBSD here. A doubly linked list (with the LIST_* macros) is used as one example somewhere.

                                                                            1. 21

                                                                              Compiling Firefox on an 8-core Ryzen targeting the host arch takes between 10 and 15 minutes.

                                                                              1. 10

                                                                                Wow, that is fast; it takes ~2h on a build server that takes 7 hours for Chromium. All the recent Rust stuff really slowed it down.

                                                                                1. 6

                                                                                  All the recent rust stuff really slowed it down.

                                                                                  Oof, yeah, I bet. Rust is notoriously slow to compile. Luckily, incremental compilation is in the nightly compiler right now. I’ve been using it wherever I can and it really does make a difference. But I suppose it wouldn’t do much for an initial compilation of a project. :(

                                                                                  1. 4

                                                                                    In this case a large chunk of this is just bindgen; we need to generate bindings, so we throw a libclang-based bindings generator at all of the header files. Twice (complicated reasons for this).

                                                                                    It’s also pretty single-threaded (codegen units will help with this, but currently aren’t, and I have to investigate why).

                                                                                    Incremental compilation and working codegen units and cleaning up the bindgen situation will help a lot. Going to take a while, though.

                                                                                    1. 3

                                                                                      But I suppose it wouldn’t do much for an initial compilation of a project.

                                                                                      Also not going to help packagers who use only temporary compilation environments which are discarded after a package is built.

                                                                                      1. 6

                                                                                        Package managers (and regular builds also) should not be starting from scratch every time. Even if we insist on doing all source editing in ASCII, we need to be delivering modules as fully-typed, parsed ASTs.

                                                                                        This insistence on going back to plain source code every chance we get and starting over is easily wasting more energy than the bitcoin bubble.

                                                                                        1. 9

                                                                                          Package managers (and regular builds also) should not be starting from scratch every time.

                                                                                          They should if you want reproducible builds.

                                                                                          1. 3

                                                                                            These are completely orthogonal. There’s nothing stopping reproducible builds where you run the entire pipeline, if you insist on comparing the hash of the source and the hash of the output. And you would still get the same benefit by comparing source<->AST and AST<->binary.

                                                                                            1. 1

                                                                                              Yeah, ideally a compiler would take a difference in source to a difference in output. Compiler differentiation?

                                                                                            2. 2

                                                                                            I believe you can do a decent amount of caching in this space? Like, MSFT and co. have compile servers that will store incrementally compiled stuff for a lot of projects, so you’re only compiling changes.

                                                                                        2. 7

                                                                                          My non-beefy Lenovo X-series laptop takes ~45 minutes for a complete recompile, ~12min for average changes w/ ccache etc., and ~3min for JS-only changes (which you can do as artifact builds, so they’re always 3min unless you need to build C++/Rust components).

                                                                                        1. 4

                                                                                          In C you can mix an int and an int, but you can’t mix a struct serial_number and a struct port, both of which contain an int.

                                                                                          It’s easy enough to make an opaque type, and we can just as well sprinkle some asserts in its implementation, for “bug traps.”
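
                                                                                          Concretely, the single-member-struct trick plus an assert looks something like this; a minimal sketch with made-up names.

                                                                                              #include <assert.h>

                                                                                              struct serial_number { int value; };
                                                                                              struct port          { int value; };

                                                                                              static struct port make_port(int n)
                                                                                              {
                                                                                                  assert(n > 0 && n < 65536);  /* the "bug trap" */
                                                                                                  return (struct port){ n };
                                                                                              }

                                                                                              static void open_port(struct port p) { (void)p; /* ... */ }

                                                                                              int main(void)
                                                                                              {
                                                                                                  struct serial_number sn = { 12345 };
                                                                                                  struct port p = make_port(8080);

                                                                                                  open_port(p);
                                                                                                  /* open_port(sn); */  /* refuses to compile: incompatible type */
                                                                                                  (void)sn;
                                                                                                  return 0;
                                                                                              }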

                                                                                          I’d love to see more reasons for loving Ada, but I get a little tired of this “see, we got types and C doesn’t” straw man. Evidently both languages have types, and both languages provide a way to subvert the type system with unchecked conversions. Ostensibly, C programmers are not too keen on programming in a straitjacket; that’s why they keep mixing ints rather than turning every possible application of a numeric value into a new type.

                                                                                          The other thing is, how far do you go with these types? Mixing up units is bad, but so is mixing different measurements (variables!) that use the same unit. Interior pressure vs. exterior pressure? Both may be measured in bars, but get them confused, and you might just have a catastrophic accident ahead of you.

                                                                                          Based on the few articles on Ada & Spark that I’ve seen, the way I imagine this kind of code would go is: well, you have a function dealing with a variable exterior_pressure_bar. But you don’t trust yourself, so you also make it the type Exterior_Pressure_Bar. But you don’t trust your code either, so you code the function again in a different language (Spark). Very redundant. Say the same thing thrice, maybe more than thrice, and maybe it will be correct… Perhaps appropriate for certain applications, but it doesn’t seem like much joy. Types are just one (relatively shallow) dimension of correctness, so focusing too heavily on that one part and introducing tons and tons of machinery to that end might just be a lot of work for little gain.

                                                                                          I guess what I’m after is examples that are not so trivial. Yeah I know you can make types. And that you can write a function and a “proof” which says exactly the same thing the function’s body says, in slightly different words.

                                                                                          The other thing I’d like to know is: what features are there that actually save me typing (or just reduce bugs in some way that doesn’t require me to add more code or repeat myself saying I really, really, really mean to do this) and make the code more readable overall? It seems that introducing such a maze of types and conversions backfires in the readability and joy-of-writing department, so there’s got to be something to make up for it, no? And who checks that all the conversions I’m doing are the right thing to do? Maybe somewhere in that maze you’ll find code that converts interior pressure to bars to exterior pressure, mixing them up.

                                                                                          Admittedly, the fixed-point types are nice to have, and in C I’d prefer to have fixed point over the built-in complex type any day. Then again, it’s not terribly hard to implement your own fixed-point code (or use one of the three billion existing implementations). Range types don’t seem too compelling; when was the last time I couldn’t decide which integer type was appropriate for my range? Now if they do overflow checking (in a way that isn’t more verbose than doing it in C), that’s a little better.
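
                                                                                          On that last point, the least verbose overflow checking I know of in C is the GCC/Clang builtin, shown below for comparison; a range type would bundle the check into the type itself instead of requiring a call per operation. Variable names are made up for the example.

                                                                                              #include <stdio.h>

                                                                                              int main(void)
                                                                                              {
                                                                                                  int depth_m = 2000000000;
                                                                                                  int delta_m = 2000000000;
                                                                                                  int total;

                                                                                                  /* Returns true (and leaves 'total' wrapped) on overflow. */
                                                                                                  if (__builtin_add_overflow(depth_m, delta_m, &total)) {
                                                                                                      fputs("overflow trapped\n", stderr);
                                                                                                      return 1;
                                                                                                  }
                                                                                                  printf("total = %d\n", total);
                                                                                                  return 0;
                                                                                              }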