1. 55

  2. 20

    But when you try to argue with me that it is normal for my government’s online tax submission system to require Adobe Flash and the delivery of that overprivileged opaque blob to my browser, that’s when you get the “public money, public source code” response, liberally drizzled with the “holy fuck, how do you breathe” stare.

    Comedy gold (and so true)

    It does make one wonder how in this day and age there are so many companies using free software but never truly contributing back, and then daring to complain when shit like this happens.

    I’ve spoken to quite a few people working at wildly different companies recently and all of them responded the same when I asked them if they contributed to open source: Well, when there’s a nasty bug in the code we might fix it and contribute it, but as a rule, we do not.

    Most of these companies are afraid to give up their so-called “competitive edge” so they keep all their code completely closed.

    1. 9

      It does make one wonder how in this day and age there are so many companies using free software but never truly contributing back

      Basic economics and common sense: 100% selfish organizations dedicated to maximizing profits and externalizing costs will maximize their use of FOSS while minimizing what they put into its development. The licensing FOSS developers choose makes it easy; they’re literally just telling companies to screw them over. Some then try to justify the contradictory positions of “companies should give back” and “we should use licenses that encourage them not to.”

      This problem made me push licenses like Prosper. It says you can do anything with the software and share it, so long as you’re not making money off it. If you make money, you negotiate a commercial license, which might require just money, sharing code back, or whatever. These licenses can be refined to include more of the software freedoms. I’ve already nailed down most of the concepts; the only real problem left is the transition from shared source to FOSS if the company stops maintaining the software. There are tricks for them to stretch that out. Most of the freedoms are easy to simulate, though, in a non-commercial, free-to-use, shared-source license.

      1. 5

        I see two problems with Prosper:

        1. The software cannot get into Debian and other FOSS distributions, because it is not free enough.
        2. How can you handle contributions without requiring contributors to hand over their copyright? That requirement is a barrier to participation.
        1. 3
          1. That’s their choice. They’ll reject it, allow it as a proprietary inclusion like blobs/flash/whatever, or it becomes a standalone thing users install. One of my ideas was using paid, add-on software released under Prosper to fund [F]OSS dependencies. So, it not making it into Debian could ironically be a good thing for Debian. ;)

          2. I don’t know enough about legalities to answer that. Some of it would need to be fleshed out by experts. I will say I was going to set up the Prosper-like license to work the same way OSS licenses work. For non-commercial use, their contributions automatically create a derivative, it’s licensed the same way irrevocably/perpetually, and they get a copy. Like the original, the derivative is still non-commercial. Some contributors won’t participate under those terms. Some will, as we see with people building on Microsoft and IBM technologies.

      2. 8

        At some previous jobs we relied on FOSS projects up and down the stack, but there was enough red tape in place that to contribute anything back to those projects would take months, if you were allowed to do it at all. It didn’t matter whether the changes actually had anything to do with the company’s “competitive edge”—the decision makers were too conservative to let anything that “belonged to us” leak out to anyone else. The devs were mostly in favor of contributing back, but are you really going to burn time (and goodwill with the higher-ups) to submit a small new feature to some logging library?

      3. 10

        The latter half reminds me of this and this essay by Viznut, as they give more insight into why software - and technology in general - should be understandable. (I posted them once in the comment section of a submission on lobste.rs, but they’re still interesting.)

        A quote from the former essay:

        The mainstream of open source / free software, for example, is a copycat culture, despite its strong ideological dimension. It does not actively question the philosophies and methodologies of the growth-obsessed industry but actually embraces them when creating duplicate implementations of growth-obsessed software ideas.


        EDIT: a similar essay, but one that fits this context better.

        1. 3

          Damn. Reading those essays makes me want to get out of the whole software industry so I don’t contribute more to the problem. But I don’t think I’m good enough at anything else to get paid.

          1. 2

            Ah, incredible to see someone mention viznut here. He was quite involved in the first online programming community I got into back when I was but a teenager.

            1. 2

              It’s a small world :). I know him from his demoscene productions and the discovery of bytebeat (which has been featured here before).

          2. 8

            We must make software simpler. Much much simpler.

            I am not sure whether this is feasible. Abstractions exist for reasons; one of them is to forget. Realistically, if Node.js developers were required to understand V8, not many things would get done. If depending on V8 (de facto proprietary software, if there ever was any) is okay, and I think it is, I don’t see much difference in depending on other things.

            1. 6

              Abstractions exist for reasons, one of them is to forget.

              Indeed, but most of the obstacles to understandability don’t come from abstractions but from layers of indirection, which do little to provide the introspection affordances that help the programmer understand the system, while immensely increasing the surface area for bugs. As noted by another article posted under this story:

              Accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.

              We programmers are so accustomed to intermediate layers that most of us conflate the two. A good example of the complexity-reducing power of abstraction is Plan9’s commitment to “everything is a file” (for real, no ioctls).

              Realistically, if Node.js developers are required to understand V8

              The whole point of the article is that something like V8 shouldn’t exist in the first place. There are more than enough examples of smaller code bases, from VPRI’s work to the Tiny C Compiler, that show that such a world is possible today.

              1. 1

                Abstractions exist for reasons, one of them is to forget.

                But all abstractions are leaky (to varying degrees), so details from lower levels will bleed into the upper levels no matter what you do. This includes restrictions, (wanted or unwanted) features, performance limitations and security problems. If you can cut out a layer of abstraction without losing too much expressiveness, it is almost always better.

                1. 4

                  Case in point: We were hit with a problem caused by a fix for the recent Ghostscript vulnerability in ImageMagick. The fix simply disabled PDF processing in ImageMagick’s policy.xml, which caused our perfectly functioning code to stop working. The code would send out an e-mail with a PDF containing some information. After the fix, ImageMagick would silently start creating an empty PDF file (it’s PHP, did you expect it to signal an error in a decent way?).

                  This involves two or even three levels of abstraction, depending on how you look at it: our image writer happened to use ImageMagick to make the PDF, and ImageMagick happened to use Ghostscript, so that had to be disabled at the configuration level by the sysadmin. If ImageMagick didn’t use Ghostscript, this problem would not have existed. Of course, the tradeoff is that it would then need its own PDF processing, and the vulnerability could have been there too (though that’s perhaps less likely if it didn’t contain a full-fledged PostScript interpreter, which in our case was completely unnecessary, as we were only generating PDFs, not reading them).
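
                  Since the failure mode was a silent empty file rather than an error, what we really wanted was a loud wrapper around the conversion. A rough sketch of that idea (in Python rather than our PHP, with a made-up function name and paths, purely to illustrate):

                  ```python
                  import subprocess
                  from pathlib import Path

                  def render_pdf(src: str, dest: str) -> None:
                      """Run ImageMagick's convert and refuse to accept silent failures."""
                      result = subprocess.run(["convert", src, dest],
                                              capture_output=True, text=True)
                      out = Path(dest)
                      # The failure we hit was a zero-byte PDF with no error surfaced,
                      # so check the output as well as the return code.
                      if result.returncode != 0 or not out.exists() or out.stat().st_size == 0:
                          raise RuntimeError(f"PDF generation failed: rc={result.returncode}, "
                                             f"stderr={result.stderr.strip()!r}")
                  ```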

                  1. 4

                    I agree with what you are trying to say, but I see no abstractions here, just layers of indirection. I don’t think the problem is leaky abstractions, but rather that we (myself included) don’t know how to properly abstract. Because understanding the current system is infeasible, we just want to add a layer on top that treats the underlying system as a black box, do what we want, and call it a day. And so the onion keeps growing and growing.

                    1. 2

                      I highly recommend Richard Gabriel’s book Patterns of Software, the first three chapters (“Reuse versus Compression”, “Habitability and Piecemeal Growth” and “Abstraction Descant”) are about exactly this topic. Written by the author of the “Worse is Better” essay, probably the most profound thinker on software development that I know of.

                  2. 1

                    But all abstractions are leaky (to varying degrees)

                    Most are. DSL’s show us we can have non-leaky abstractions for a lot of things.

                    1. 1

                      Could you elaborate with some specific examples? Off the top of my head, I don’t see how DSLs would be inherently better than any other abstractions; to me they feel mostly the same, i.e. most of them are leaky, though from time to time one may strike a pure one (like, say, SQL? The Dockerfile DSL?). Isn’t every function call a “DSL statement”, and every “framework” a DSL, just with a convoluted syntax?

                      1. 1

                        State machine compilers are an easy example where you just list the functions, states, and transitions. They generate all the boilerplate. Then, you mode switch to fill in the blanks (rough sketch below).

                        Similar stuff can be done with GUI’s since they’re FSM’s or close. Might even count a subset of HTML with CSS 1 or 2 in there. They stay really close to the document-with-styling metaphor. It only got leaky once JavaScript was included. That makes sense, though, given that it transforms the document into an executable program. I still used to use well-designed snippets from DynamicDrive.com, like menus, before I learned JavaScript, since you just filled in the blanks. XML later fixed that abstraction by switching from a document model to a data model.

                        SQL is another one many cite. It might fit here as long as you are just doing queries.
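
                        To make the state-machine case concrete, here is a toy sketch of “declare the states and transitions, generate the boilerplate, fill in the blanks” (Python, every name invented for illustration):

                        ```python
                        # Toy "DSL": the machine is declared as data, not code.
                        SPEC = {
                            "states": ["locked", "unlocked"],
                            "transitions": {              # (state, event) -> next state
                                ("locked", "coin"): "unlocked",
                                ("locked", "push"): "locked",
                                ("unlocked", "push"): "locked",
                                ("unlocked", "coin"): "unlocked",
                            },
                        }

                        class Turnstile:
                            """Dispatch is driven by SPEC; only the hooks are hand-written."""
                            def __init__(self):
                                self.state = SPEC["states"][0]

                            def handle(self, event):
                                self.state = SPEC["transitions"][(self.state, event)]
                                hook = getattr(self, f"on_{self.state}", None)  # "fill in the blanks"
                                if hook:
                                    hook()

                            def on_unlocked(self):
                                print("click, gate open")

                        t = Turnstile()
                        t.handle("coin")   # prints "click, gate open"
                        t.handle("push")   # back to locked, no hook defined
                        ```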

                        1. 3

                          Hmm, I think I’m still not really convinced. In my opinion:

                          1. FSMs are maybe closest to what I’d call a good abstraction, in that they embrace the leakiness - by limiting themselves to what they’re good at modelling. The “blank filling” in my eyes is where they beautifully support good cooperation with other abstractions.
                          2. HTML, then with CSS 1, then 2, in my opinion is an example where the abstraction was seriously leaky and thus eventually broke apart (via JS). The CSSes already are examples of trying to patch HTML into submission. Infamously, creating non-rectangular shapes was always a problem here. Also motion. I’d say TeX is a good example of how the seemingly simple task of text/page layout is surprisingly hard, bordering on impossible.
                          3. As to SQL, I think what it benefits from is the consistent and complete (I think) mathematical abstraction of relational DBs it’s based on. That said, in my opinion it’s still leaky, by virtue of needing an interpreter/VM. Thus for any practical use, one still has to employ a lot of divination and guesswork (or often cargo culting) trying to please the optimizer spirits.

                          I think my main takeaway from the classic article about leaky abstractions is that all of them are, and thus that fact should be embraced by explicitly designing them to allow clean and painless escape valves. See the FSMs above, the unsafe keyword in Rust, or ports in Elm.

                          1. 1

                            Good points. I guess it depends on one’s definition of leaky. For me, I’m fine with an abstraction so long as I can ignore what’s in it. Just call it conforming to the contracts. Others might have something different in mind.

                          2. 1

                            You must have a curious conception of non-leaky abstractions if SQL queries are to be included. And one where all of practical computing being built on non-leaky abstractions wouldn’t be all that much of an improvement on the status quo.

                            1. 1

                              Just the parts about describing what to pull from or put in what. It looks really high-level and purpose-built compared to the platform-specific, low-level, imperative code it replaced. We even taught folks with no programming experience how to use that model with Access, 4GL’s, etc.
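
                              A tiny illustration of that “describe what to pull” flavor, next to the imperative loop it replaces (Python’s sqlite3 with throwaway data, purely for contrast):

                              ```python
                              import sqlite3

                              db = sqlite3.connect(":memory:")
                              db.execute("CREATE TABLE orders (customer TEXT, total REAL)")
                              db.executemany("INSERT INTO orders VALUES (?, ?)",
                                             [("alice", 40.0), ("bob", 15.0), ("alice", 25.0)])

                              # Declarative: say *what* to pull, not how to scan or group it.
                              rows = db.execute(
                                  "SELECT customer, SUM(total) FROM orders GROUP BY customer"
                              ).fetchall()

                              # The imperative equivalent: walk every row and group by hand.
                              totals = {}
                              for customer, total in db.execute("SELECT customer, total FROM orders"):
                                  totals[customer] = totals.get(customer, 0.0) + total

                              assert dict(rows) == totals
                              ```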

                              1. 2

                                That’s true, but when used in anger sooner or later you’ll still have to spend time with the implementation details of the database. Certainly if you want to make effective use of the hardware’s capabilities.

                                1. 1

                                  Oh yeah, definitely true!

                      2. 1

                        I don’t really think most systems warrant as much complexity as they have. Alan Kay had a talk about building the right language for each layer of an operating system. He has a demo of two languages: one is a DSL that implements a rasterizer, the other is a DSL for implementing network protocols. To be fair, I haven’t looked deeply into the code or run it, but I think the idea works.

                        For something I have looked at a bit more closely, Interim OS is an OS implemented in C99 and Lisp. I’ve taken a weekend to read through the JIT code, and with a few more days, could probably start working in the codebase (on small to medium tasks). The entire kernel is written in 12K lines of C code and a few thousand lines of Lisp. It’s capable of running on a Raspberry Pi, with a keyboard driver, basic VGA, a TCP stack, and even an IRC client.

                        Granted, I can’t say I fully understand this code either, but it’s definitely reasonable for me to grok the whole OS in a few weeks. I don’t think most people have time to vet every piece of software. That’s why we have sandboxing. However, the OS, my browser, my crypto libraries, my programming languages, my developer tools, and the base software running on my system should all be written in a way that I can understand and extend them.

                        I feel like software changes too fast, and just keeps changing for no good reason – we very rarely reach the point where we can just stop messing with it and let it do its job. Feature creep is a real problem.

                      3. 1

                        Abstractions exist for reasons

                        Some do, some don’t, some have reasons but everyone would still be better off without them. Nobody is arguing we should just drop all the abstractions.

                      4. 4

                        Here are my thoughts on concrete actions we can take.

                        To make it feasible to support the open-source projects we depend on (with money and/or code depending on circumstances), we need to have fewer dependencies. In light of this, the dependency graph of a typical Node.js project is insane. Some newer languages, like Rust, seem to have the same problem.

                        I suggest that for software that runs on GNU/Linux, we limit ourselves to what’s packaged in Debian, because Debian seems to do a very good job of vetting the legal status of the software it packages.

                        Debian should disable Internet access during package builds, if it doesn’t already do so, so we can’t cheat and pull in unvetted dependencies directly from npm or the like when building our own packages.
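
                        As a rough sketch of how the “only what’s packaged in Debian” rule could be enforced in CI (the package names below are hypothetical; it just asks apt-cache whether each declared dependency exists in the configured archive):

                        ```python
                        import subprocess
                        import sys

                        # Hypothetical dependency list a project might declare.
                        DEPENDENCIES = ["python3-flask", "python3-requests", "python3-leftpad"]

                        def in_debian(pkg: str) -> bool:
                            """True if apt-cache knows the package, i.e. it is in the archive."""
                            res = subprocess.run(["apt-cache", "show", pkg],
                                                 capture_output=True, text=True)
                            return res.returncode == 0 and res.stdout.strip() != ""

                        missing = [p for p in DEPENDENCIES if not in_debian(p)]
                        if missing:
                            sys.exit(f"Not packaged in Debian, refusing to build: {missing}")
                        ```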

                        Then we should definitely support Debian itself in any way we can.

                        For web development in particular, these restrictions eliminate many popular frameworks and libraries. What does that leave us with? Python, PHP, Java, or Go on the back end, and jQuery on the front end? Have I overlooked anything?

                        On making software simpler, there is some complexity we simply can’t eliminate without causing real harm to users. For example, when developing a GUI, we have to make it accessible to people with disabilities (e.g. blind people), or else some people can’t do their jobs. A GUI isn’t as simple as drawing pixels on screen and taking input from a keyboard and mouse (probably only in Western languages), as one did in the “good old days”. But a terminal-based UI will be rejected by practically everyone. So we’re stuck with our current complex GUI platforms. At least we can limit the extra complexity we add on top.

                        1. 3

                          Debian does build without internet access. That is why you can still build ancient Debian versions. It comes with all its dependencies.

                        2. 4

                          “but lets roll for a bit with the assumption that a small amount of extra care on Tarr’s part could have avoided this mess.”

                          We can’t. The essay comes to a full stop if the prior consensus on Lobsters is used: open-source releases are a gift, with the maintainer owing nothing. Hell, I put this one on the users, too. If it’s so important and maintainers often quit, then they should’ve asked for or offered a contingency plan way before that. They could’ve had the details worked out. Supporting point 1, the maintainer doesn’t have to go along with that. Many would, though, if it was just some quick conversations about it here and there to ensure their work didn’t fall into bad hands.

                          “Code is the only thing you can trust, and by not reading it, you’ve forfeited the most important benefit provided by this ecosystem: the choice of not having to trust the authors regarding behavior or continuity.”

                          Well-done summary of main benefit of OSS/FOSS.

                          “I’m not fundamentally opposed to closed source software, so as long as it runs on someone else’s computer.”

                          I’d love to see what machine this person is running with an open OS, CPU firmware, microcode, and peripheral firmware. I have a feeling they mean “I’ll use closed source when I want to accomplish X, Y, and Z but no FOSS exists,” like a lot of folks do for proprietary software. Still, that’s more aligned with their claimed values, since it’s not an all-or-nothing game.

                          “This paints a bleak picture of the future in the case where the growth of the number of lines of code or number of patches per time period far outpaces the growth of the number of developers and maintainers.”

                          This is why the founders of INFOSEC created security certifications in the first place. The idea is you just have to trust that the evaluators did their job. That was made a bit easier by standardizing techniques/tools that (a) made it easier to prove the code implemented the model and security policy, and (b) eliminated entire classes of problems. Those methods worked. In high-assurance security and PLT, the methods have only grown to knock out more classes of problems and preserve more properties, both with high automation. They should be mandated for any project where you want to be sure something will or won’t happen. They also dramatically reduce the review burden: you just have to follow a checklist, run some tools, and read some reports, and that covers much of the risk.

                          From there, you have the risk that the evaluator is malicious or something. That’s why I proposed multiple, mutually-suspicious evaluators who all publish the same signed hash of what they reviewed. Also, they and the spooks they review for have to use some of that software themselves for critical assets. Anything they let slip can hit them. The resulting product can be anything from paid, shared source to free, open source. As mandated in TCSEC’s highest levels, the customers must always get the source plus the analyses so they can re-check and build it themselves.

                          I will quickly note there’s another potential business model to support FOSS if we can get software quality regulated or courts to enforce liability. As soon as there’s an incentive, we’ll see use of these methods go up, along with private or official evaluations showing regulators or courts that they’re used. This already happens in safety-critical work under DO-178C, etc. In those industries, there are two things you can buy: the software itself for any use, and the evaluation package for use in a regulated market. The latter, required by evaluators, can get pricier to recoup the cost of the extra assurance activities and certification.

                          Likewise, a FOSS business model might do rigorously-developed, independently-vetted software where you get the software itself free or cheap. Then, they charge extra for assurance documents, warranties about those activities, and/or trusted delivery. That Praxis (now Altran) charged a premium (I heard 50%) for warrantied code with market success is a precedent. Some Cleanroom shops supposedly did warranties, too. A local shop down here does, but with less formality: they just fix, free or cheaply, problems in expensive work that was supposed to work.

                          1. 2

                            I recall there was some attempt to provide a set of popular FOSS Go libraries as a paid package with vetting being the added value. It was a project by some popular Go programmer, but I can’t remember enough details to be able to google it up now… :/

                            Also, I wonder if it could make sense to create an open platform à la GitHub where anyone could claim to vet some libraries, and people could select whom they trust as vetters, akin to a web of trust. Maybe linked to Keybase? It could be part of GitHub, but I don’t expect them to write this until they see a successful existing platform and can copycat the success (they were already slow even before the MS acquisition).

                            1. 2

                              Yeah, I thought about Keybase for that, too. I’m not sure how well it would work given the state of social media and the web of trust. Probably still safer to have well-known, skilled folks making lists and links of each others’ work. A web of trust of sorts might get bootstrapped from that.

                              It could also help to have a per-person tracker of bugs found, security papers published, and vetted code written. That could help people assess whether a certain person’s skills were relevant, or whether they had any.
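
                              Sketching the data side of that: vetting could be little more than attestations keyed on an artifact hash, with the track record hanging off the vetter. Everything below is invented purely to show the shape, not a real platform:

                              ```python
                              from dataclasses import dataclass, field

                              @dataclass
                              class Vetter:
                                  name: str
                                  bugs_found: int = 0          # the "track record" part
                                  papers: list = field(default_factory=list)

                              @dataclass
                              class Attestation:
                                  artifact_sha256: str         # hash of the exact tarball reviewed
                                  vetter: Vetter
                                  verdict: str                 # e.g. "no issues found"

                              def trusted(sha256, attestations, my_vetters, quorum=2):
                                  """Enough vetters I trust signed off on this exact hash?"""
                                  ok = [a for a in attestations
                                        if a.artifact_sha256 == sha256
                                        and a.vetter.name in my_vetters
                                        and a.verdict == "no issues found"]
                                  return len(ok) >= quorum
                              ```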

                          2. 3

                            I see complexity as coming from several parts of the software development ecosystem right now.

                            1. The web & the consuming need to make everything a web app. Looking at you Chrome Embedded.

                            2. Simultaneously, the enormous proliferation of languages instead of a lingua franca. The lack of standardization means that effort is amortized: instead of collaborating, there is competition, and often not even useful competition.

                            3. Software libraries are janky enough that often in-house libs are made to do the job better, for that company’s work. Then they are released, but are, often, too janky for other companies to use.

                            3.1 Corporate lawyers interested in IP hate when engineers contribute to open source. Hoops are made to inhibit this.

                            3.2 Often, corporate open source release is considered as a recruiting tool, not an actual maintained bit of software.

                            4. CADT - see jwz.

                            Gathered all together, you wind up with an enormous graph of interconnection, with most nodes in a state of not-quite-all-the-way.

                            Yeah, it’s gonna break. A lot. That it works as reliably as it does is a minor miracle.


                            Consider this: I have not had a meaningful improvement in my web browsing experience since ad-blockers were created for Firefox in the early 2000s. I have bookmarks, I have history, I have tabs, and I run without ads - on desktop. Mobile is stuffed with ads. The only reasons to upgrade are (1) a security hole, because the cost of complexity is security, and (2) JavaScript has a new syscall and new sites are using it.

                            Really sit with that. 15+ years of work, and not one change that has improved my life on the web. And people think this is innovative?


                            My perspective is that what is mostly needed is some serious research in computer systems, leading to a unified single system; a single language; a single suite of applications, all extensible and improvable - to collect effort into a single place instead of spraying it around.

                            1. 2

                              “security hole because the cost of complexity is security”

                              I counter that you mostly have to upgrade because they’re not using methods that prevent classes of errors - especially language-based security, static/dynamic analysis, and fuzzing. Most of the errors I see would’ve been blocked by going with Ada or Modula-3 instead of C/C++, then analysis and reviews on top of that. After those errors, the rest is inherent complexity and new kinds of problems. Some of those can be reduced to denial-of-service using proven methods like isolation and recovery. Then there’s the tiny bit left: damaging, unavoidable problems demanding updates.
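
                              For the fuzzing part, even the crudest harness catches whole classes of crashes before users do. A minimal sketch, where parse_header is just a stand-in for whatever code is actually under test:

                              ```python
                              import random

                              def parse_header(data: bytes):
                                  """Stand-in for the real code under test."""
                                  if len(data) < 4:
                                      raise ValueError("too short")
                                  length = data[0]
                                  return data[4 + length]     # latent bug: unchecked index

                              def fuzz(iterations=10_000, seed=0):
                                  rng = random.Random(seed)
                                  for _ in range(iterations):
                                      blob = bytes(rng.randrange(256)
                                                   for _ in range(rng.randrange(64)))
                                      try:
                                          parse_header(blob)
                                      except ValueError:
                                          pass                # expected, documented failure
                                      except Exception as e:  # anything else is a bug found
                                          print("crash input:", blob.hex(), "->", repr(e))
                                          return

                              fuzz()
                              ```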

                              I mean, Mozilla was making $300 million a year with around $7-8 million in profit last I checked. They can afford to buy best-of-class tools for checking their code, plus people to review it. They just don’t. They don’t care that much, outside a small percentage of their engineers who work on those things and whoever in management facilitates them. You do so many security updates due to apathy, not complexity. That’s the biggest problem in INFOSEC by far.

                              1. 2

                                You’re quite right in the sense of a particular program - web browser here. The browser system has such a complex set of stakeholders that it has completely stagnated for something like two decades.

                                Re security and complexity: I’m contemplating more about the system involved: this thing uses that thing uses the other thing, and suddenly there’s a side path to data/compute that isn’t cleared.

                                Fundamentally software management and leaders would rather be writing Java/Go/Python/JS than tools like Haskell or Rust, due to the perceived training overhead and perceived difficulty of hiring. The costs of insecurity and hasty work are externalized onto the customer.

                                Further remark: Even a tool like Scala is close to being a categorical improvement over Java - the ML family is a categorical improvement. If a consortium of Interested Major Players invested 100 developers at the usual 200K/year fully loaded cost for mid-career devs- that’s 20M or so- nearly pocket change vs the eye-popping profits of Major Players- then those developers would be able to deliver a consistent standard library with standard tools covering the entire stack for a specific Modern Technology. E.g., let’s say they fund Rust & Haskell development - imagine what that kind of money and narrow focus could unlock.

                                But they don’t, because the current situation is Good Enough For Massive Profit.

                                I would thus argue that software programs need to be regulated & licensed according to criticality, with failures of critical programs carrying consequences up to & including criminal liability. Professional credentials for licensed programs should be part and parcel of this. Like, IDK, professional civil engineers………… this is how we don’t have skyscrapers fall down, yannow…

                                1. 1

                                  “The browser system has such a complex set of stakeholders that it has completely stagnated for something like two decades.”

                                  Again, that’s not quite true. Google hired a team of top talent, straight-up redid a browser for speed/security, kept improving it, and pushed their innovations into that once-stagnant ecosystem. Hell, Google’s QUIC combined with HTTP is going to be the new, official standard. Mozilla could’ve done this at any time and can still do it now for other innovations. Their management was what held them back, as always.

                                  With that myth countered, we now see Mozilla could do something. They already are with QUANTUM, making gradual improvements with language-based safety. If I were them, I’d have already done something like a Chromebook built on an IBOS-like architecture using Mozilla’s components. Offer a paid service for private, encrypted backups and/or priority updates. There are some other easy wins that would just take money and labor to knock out lots of problems. I’m holding off on them since I might try to sell it to them in the future to reinvest that money into making stuff other companies can use.

                                  “Fundamentally software management and leaders would rather be writing Java/Go/Python/JS than tools like Haskell or Rust, due to the perceived training overhead and perceived difficulty of hiring. The costs of insecurity and hasty work are externalized onto the customer.”

                                  This is true. It will drive most managers’ decisions, too. I do want to highlight an alternative, though: Modula-3 and other Pascal-like languages were so much simpler that programmers could learn them in a week. That solves the problem far better than Java or C#. This was proven later by a language from that philosophy, Go (which you named), which Pike said was specifically designed to facilitate easy take-up by the average programmer.

                                  The adoption of C# and Java wasn’t for technical reasons either: big companies with enterprise influence invested massive sums of money to convince the managers that listen to them to use those languages. I remember trying Java early on and finding an unnecessarily complex language with slow startup and performance that could be 10-15x slower than alternatives. People weren’t adopting it because it was easy to learn. That marketing-based adoption, though, did create huge ecosystems around them with tons of useful tools and libraries. Now (or at some point), those factor into adoption decisions. The LISP’s, Scala, Clojure, and so on show smart developers still didn’t need to use them to solve business problems or reap the ecosystem benefits: just transpile to and integrate with them.

                                  “would be able to deliver a consistent standard library with standard tools covering the entire stack for a specific Modern Technology. E.g., let’s say they fund Rust & Haskell development - imagine what that kind of money and narrow focus could unlock. But they don’t, because the current situation is Good Enough For Massive Profit.”

                                  Totally true, if we add social inertia: people doing what they’re doing because those who taught them and/or those around them are doing it. Some combination of these is what I’m accusing Mozilla of when saying many of your security updates are unnecessary.

                                  “I would thus argue that software programs need to be regulated & licensed…”

                                  It’s already happening in some sectors. I’m all for expanding it further. Gotta be careful on the specifics. Doesn’t take a lot to vastly improve the situation, though.

                                2. 2

                                  They can afford to buy best of class tools for checking their code plus people to review it. They just don’t.

                                  But they do. I’m not aware of any better static analysis for C++ than Coverity.

                                  And obviously they are trying to migrate to a language which is memory safe.

                                  1. 1

                                    My recent favorite is RV-Match, since it has (a) low false positives and (b) is built on an open framework others can build on. Lots of open verification tools exist, too, with different strengths. Most of the best ones are for C: a good reason to replace C++ with C and/or have a transpiler for whatever they’re using. If staying in C++, it’s worth remembering that different tools often catch or miss different problems in the same code. So, if they cared a lot, they’d be running a bunch of tools like NASA does, plus reviews and tests. Also, tools diverse in how they do things: Coverity’s meta-methods focused on finding a lot really fast; PVS-Check’s sound analyzer; concurrency checkers, especially for races; several forms of automated test generation.

                                    @ All

                                    Those are just some examples. Anyone looking for a Master’s or Ph.D. project should strongly consider a Rust-to-C compiler with equivalence tests (not proofs!). Optimize for language coverage and compatibility over formality. It doesn’t have to work perfectly, given that the analyzers don’t: just let us check as much Rust, especially unsafe Rust, using C’s ecosystem as we can, as quickly as we can, to find as many bugs as we can. Use plenty of parallelism in the toolchain. This is another project Mozilla could fund. I might propose it in the future, but Mozillans, feel free, so long as you give me credit. :)
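
                                    The “equivalence tests, not proofs” part can be as unglamorous as a differential harness: run the Rust build and the generated-C build on the same random inputs and diff the outputs. A bare sketch, with placeholder binary names:

                                    ```python
                                    import random
                                    import subprocess

                                    BINARIES = ("./prog_rust", "./prog_c")   # placeholder names

                                    def run(exe, blob):
                                        """Run one build on the blob via stdin."""
                                        r = subprocess.run([exe], input=blob, capture_output=True)
                                        return (r.returncode, r.stdout)

                                    rng = random.Random(0)
                                    for _ in range(1000):
                                        blob = bytes(rng.randrange(256)
                                                     for _ in range(rng.randrange(128)))
                                        if run(BINARIES[0], blob) != run(BINARIES[1], blob):
                                            print("divergence on input:", blob.hex())
                                            break
                                    ```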

                                    Note: Shoutout to ZL, Nim, and SPARK Ada for being ahead by already transpiling to C or something close. Companies could adopt such a Brute-Force Assurance strategy more easily with languages like that.

                              2. 2

                                Can understandable software be aided by better languages with less boilerplate? Languages like Luna, where the flow of data is made explicit instead of just greppable?

                                1. 1

                                  To be clear, boilerplate to me includes the code necessary to implement a pattern. In other words, all code that can be generated from an appropriate XML specification.
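
                                  As a toy example of that “generated from an appropriate XML specification” idea: parse a small spec and emit the pattern’s boilerplate, leaving only the interesting methods to write by hand (the spec format here is invented):

                                  ```python
                                  import xml.etree.ElementTree as ET

                                  SPEC = """
                                  <observer subject="Thermostat">
                                      <event name="temperature_changed"/>
                                      <event name="fault_detected"/>
                                  </observer>
                                  """

                                  root = ET.fromstring(SPEC)
                                  subject = root.get("subject")
                                  events = [e.get("name") for e in root.findall("event")]

                                  # Emit the observer-pattern boilerplate; only the handlers
                                  # are left for the programmer to fill in.
                                  lines = [f"class {subject}Listener:"]
                                  for ev in events:
                                      lines += [f"    def on_{ev}(self, payload):",
                                                "        raise NotImplementedError", ""]
                                  print("\n".join(lines))
                                  ```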

                                2. 1

                                  Good points, and although they are more widely applicable, NPM is a special case of crazy.