1. 24

    You either die a hero, or live long enough to see yourself become the villain.

    1. 21

      This mindset of replacing a language to remove a class of errors is naive at best.

      I hate null with a passion, and I do think Rust memory safety is a valuable feature. But let’s take the biggest problem of that class as an example: the Heartbleed bug. If you look at the vulnerable code, it is a very basic mistake. If you took an introductory course in C, you would learn how not to do that.
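      For reference, the bug class being discussed can be sketched in a few lines of C. This is a simplified, hypothetical illustration of the Heartbleed pattern (the function names are invented), not OpenSSL’s actual code:

      ```c
      #include <string.h>

      /* Simplified sketch of the Heartbleed class of bug: the length field
       * arrives from the network, so it is attacker-controlled. */
      void heartbeat_reply_buggy(const unsigned char *payload,
                                 size_t claimed_len, unsigned char *reply) {
          /* BUG: claimed_len is never checked against the real payload size,
           * so the memcpy can read adjacent process memory into the reply. */
          memcpy(reply, payload, claimed_len);
      }

      /* The fix amounts to a single bounds check: */
      int heartbeat_reply_fixed(const unsigned char *payload, size_t actual_len,
                                size_t claimed_len, unsigned char *reply) {
          if (claimed_len > actual_len)
              return -1; /* discard the malformed request */
          memcpy(reply, payload, claimed_len);
          return 0;
      }
      ```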

      To argue that the solution is just a matter of using a language that doesn’t allow for that kind of error is to defend an impossible solution. Without doubting the good intentions of whoever wrote that piece of code, let us call a spade a spade: it was objectively poor code with basic flaws.

      You don’t solve bad engineering by throwing a hack at it such as changing the language. It will manifest itself in the form of other classes of bugs and there is no evidence whatsoever that the outcome isn’t actually worse than the problem one is trying to fix.

      Java doesn’t allow one to reference data by its memory address, precisely to avoid this whole class of problems; why isn’t everyone raving about how that magically solved all problems? The answer is: because it obviously didn’t.

      I love curl and use it intensively, but this post goes down that same path: running scripts to find bugs and so on.

      1. 67

        I’m not convinced by this argument. Large C and C++ projects seem to always have loads of memory vulns. Either they’re not caused by bad programming or bad programming is inevitable.

        I think the core question of whether memory unsafe languages result in more vulnerable code can probably be answered with data. The only review I’m aware of is this fairly short one by a Rust contributor, but there are probably others: https://alexgaynor.net/2020/may/27/science-on-memory-unsafety-and-security/

        1. 17

          Good article; the writer sums it up brilliantly:

          Until you have the evidence, don’t bother with hypothetical notions that someone can write 10 million lines of C without ubiquitous memory-unsafety vulnerabilities – it’s just Flat Earth Theory for software engineers.

          1. 14

            There should be a corollary: until you have the evidence, don’t bother with hypothetical notions that rewriting 10 million lines of C in another language would fix more bugs than it introduces.

            1. 9

              Agreed. But nuance is deserved on “both sides” of the argument.

              It’s fair to say that rewriting 10 million lines of C in a memory-safe language will, in fact, fix more memory bugs than it introduces (because it fixes them all and won’t introduce any).

              It’s also fair to acknowledge that memory bugs are not the only security bugs and that security bugs aren’t the only important bugs.

              It’s not fair to say that it’s literally impossible for a C program to ever be totally secure.

              My tentative conclusion is this: if your C program is not itself nominally related to security, then it very likely will become more secure by rewriting it in Rust/Zig/Go/whatever. In other words, if there are no crypto or security algorithms implemented in your project, then the only real source of security issues is from C, itself (or your dependencies, of course).

              If your C program is related to security in purpose, as in sudo, a crypto library, a password manager, etc., then the answer is a lot less clear. Many venerable C projects have the advantage of time: they’ve been around forever and have had lots of battle testing. It’s likely that if they stay stable and don’t have a lot of code churn, they won’t introduce many new security bugs over time.

              1. 1

                if there are no crypto or security algorithms implemented in your project, then the only real source of security issues is from C, itself

                I don’t think this is true. All sorts of programs accept untrusted input, not just crypto or security projects, and almost any code that handles untrusted input will have all sorts of opportunities to be unsafe, regardless of implementation language.

                1. 1

                  Theoretically, yes. But in practice, if you’re not just passing a user-provided query string into your database, it’s much, MUCH harder for bad input to pose a security threat. What’s the worst they can do: type such a long string that you OOM? I can be pretty confident that no matter what they type, it’s not going to start writing to arbitrary parts of my process’s memory.

                  1. 1

                    It’s not just databases, though; it’s any templating or code generation that uses untrusted input.

                    Do you generate printf format strings, filesystem paths, URLs, HTML, db queries, shell commands, markdown, yaml, config files, etc? If so, you can have escaping issues.
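                    To make the escaping point concrete, here is a hypothetical C sketch (the function name and command are invented for illustration). Memory safety does not help here; the same flaw can be written in any language:

                    ```c
                    #include <stdio.h>
                    #include <string.h>

                    /* Hypothetical example: splicing an untrusted filename into a
                     * shell command with no escaping. Input such as "x; rm -rf ~"
                     * injects a second command, in C or in any memory-safe language. */
                    int build_archive_cmd(const char *filename, char *cmd, size_t cmd_size) {
                        return snprintf(cmd, cmd_size, "tar -czf backup.tgz %s", filename);
                    }
                    ```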

                    And then there are problems specific to memory unsafety: buffer overruns let you write arbitrary instructions to process memory, etc.

                    1. 1

                      Did you forget that my original comment was specifically claiming that you should not use C because of buffer overruns? So that’s not a counter-point to my comment at all; it’s an argument for it.

                      My overall assertion was that if you’re writing a program in C, it will almost definitely become more secure if you rewrote it in a memory-safe language, with the exception of programs that are about security things: those programs might already have hard-won wisdom that you’d be giving up in a rewrite, so the trade-off is less clear.

                      I made a remark that if your C program doesn’t, itself, do “security stuff”, then the only security issues will be from the choice of C. That’s not really correct, as you pointed out: you can surely do something very stupid like passing a user-provided query right to your database, or connecting to a user-provided URL, or whatever.

                      But if that’s the bar we’re setting, then that program definitely has no business being written in C (really at all, but still). There’s certainly no way it’s going to become less secure with a rewrite in a memory-safe language.

        2. 63

          Your argument is essentially a form of “victim blaming”, where we slap the programmers and tell them to be better and more careful engineers next time.

          It is an escapism that stifles progress by conveniently opting to blame the person making the mistake, rather than the surrounding tools and environment that either enabled the error or failed to prevent it.

          The same reasoning can be applied to all sorts of other contexts, including things such as car safety: you could stop making cars safer and just blame the drivers for not paying more attention, going too fast, drink driving, etc…

          If we can improve our tools of the trade to reduce or - better yet - eliminate the possibility of mistakes and errors we should do it. If it takes another whole language to do it then so be it.

          That’s similar to a car manufacturer switching to a different engine or chassis because its properties happen to reduce accidents.

          The way we can make that progress is exactly by blaming our “tools” as the “mistake enablers”. Not the person using the tools. Usually they’ve done their best in good faith to avoid a mistake. If they have still made one, that’s an opportunity for improvement of our tools.

          1. 38

            Your argument is essentially “you can’t prevent bad engineering or silly programmer errors with technical means; this is a human problem that should be fixed at the human level”. I think this is the wrong way to look at it.

            I think it’s all about the programmer’s mental bandwidth; humans are wonderful, intricate, and beautiful biological machines. But in spite of this we’re also pretty flawed and error-prone. Ask people to do the exact same non-trivial thing every Wednesday afternoon for a year and chances are a large number of them will fail at least once to follow the instructions exactly. Usually this is okay because most things in life have fairly comfortable error margins and the consequences of failure are non-existent or very small, but for some things it’s a bit different.

            This is why checklists are used extensively in aviation; it’s not because the pilots are dumb or inexperienced, it’s because it’s just so damn easy to forget something when dealing with these complex systems, and the margin for error is fairly low if you’re 2km up in the sky and the consequences can be very severe.

            C imposes fairly high mental bandwidth: there are a lot of things you need to do “the right way” or you run into problems. I don’t think anyone is immune to forgetting something on occasion; who knows what happened with the Heartbleed thing; perhaps the programmer got distracted for a few seconds because the cat jumped on the desk, or maybe their spouse asked what they wanted for dinner tonight, or maybe they were in a bad mood that day, or maybe they … just forgot.

            Very few people are in top form all day, every day. And if you write code every day then sooner or later you will make a mistake. Maybe it’s only once every five years, but if you’re working on something like OpenSSL the “make a silly mistake once every five years” won’t really cut it, just as it won’t for pilots.

            The code is now finished and moves on to the reviewer(s); and the more they need to keep in mind when checking the code the more chance there is they may miss something. Reviewing code is something I already find quite hard even with “easy” languages: how can I be sure that it’s “correct”? Doing a proper review takes almost as much time as writing the code itself (or longer!) The more you need to review/check for every line of code, the bigger the chance is that you’ll miss a mistake like this.


            I don’t think that memory safety is some sort of panacea, or that it’s a fix for sloppy programming. But it frees up mental bandwidth: mistakes become harder to make and their consequences less severe. It’s just one thing you don’t have to think about, and now you have more space to think about other aspects of the program (including security problems not related to memory safety).

            @x64k mentioned PHP in another reply, and this suffers from the same problem; I’ve seen critical security fixes which consist of changing in_array($item, $list) to in_array($item, $list, true). That last parameter enables strict comparison (so that "1" does not match 1). The root cause of these issues is the same as in C: it imposes too much bandwidth to get it right, every time, all the time.

            NULLs have the same issue: you need to think “can this be NULL?” every time. It’s not a hard question, but sooner or later you’ll get it wrong, and asking it all the time takes up a lot of bandwidth probably better spent elsewhere.
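            In C terms, that cost looks like this hypothetical function: nothing in the signature says whether the pointer may be NULL, so every caller and reviewer has to re-answer the question from context:

            ```c
            #include <stddef.h>
            #include <string.h>

            /* Hypothetical example: the type system gives no hint whether
             * `name` may be NULL, so the check below is pure discipline.
             * Forget it once and some call site crashes (or worse, hits
             * undefined behavior). */
            size_t display_width(const char *name) {
                if (name == NULL)
                    return 0;
                return strlen(name);
            }
            ```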

            1. 30

              Java does magically solve all memory problems. People did rave about garbage collection: garbage collection is in fact revolutionary.

              1. 5

                That was a long time ago so lots of people don’t remember what OP is talking about anymore. The claim wasn’t that Java would magically solve all memory problems. That was back when the whole “scripting vs. systems” language dichotomy was all the rage and everyone thought everything would be written in TCL, Scheme or whatever in ten years or so. There was a more or less general expectation (read: lots of marketing material, since Java was commercially-backed, but certainly no shortage of independent tech evangelists) that, without pointers, all problems would go away – no more security issues, no more crashes and so on.

                Unsurprisingly, neither of those happened, and Java software turned out to be crash-prone in its own unpleasant ways (there was a joke about how you can close a Java program if you can’t find the quit button: wiggle the mouse around, it’ll eventually throw an unhandled exception) in addition to good ol’ language-agnostic programmer error.

              2. 29

                If you took an introductory course in C, you would learn how not to do that.

                Yet somehow the cURL person/people made the mistake. Things slip by.

                Java doesn’t allow one to reference data by its memory address, precisely to avoid this whole class of problems; why isn’t everyone raving about how that magically solved all problems? The answer is: because it obviously didn’t.

                That, actually, was one of the biggest selling points of Java to C++ devs. It’s probably the biggest reason that Java is still such a dominant language today.

                I also take issue with your whole message. You say that you can’t fix bad engineering by throwing a new language at it. But that’s an overgeneralization of the arguments being made. You literally can fix bad memory engineering by using a language that doesn’t allow it, whether that’s Java or Rust. In the meantime, you offer no solution other than “don’t do this thing that history has shown is effectively unavoidable in any sufficiently large and long-lived C program”. So what do you suggest instead? Or are we just going to wait for Heartbleed 2.0 and act surprised that it happened yet again in a C program?

                Further, you throw out a complaint that we can’t prove that rewriting in Rust (or whatever) won’t make things worse than they currently are. We live in the real world- you can’t prove lots of things, but is there any reason to actually suspect that this is realistically possible?

                1. 27

                  This mindset of replacing a language to remove a class of errors is naive at best.

                  I’d rather say that your post is, charitably, naive at best (and it continuing to dominate the conversation is an unfortunate consequence of the removal of the ability to flag down egregiously incorrect posts, sadly).

                  I hate null with a passion, and I do think Rust memory safety is a valuable feature. But let’s take the biggest problem of that class as an example: the Heartbleed bug. If you look at the vulnerable code, it is a very basic mistake. If you took an introductory course in C, you would learn how not to do that.

                  Do you really believe that the OpenSSL programmers (whatever else you can say about that project) lack an introductory knowledge of C? Do you feel the Linux kernel devs, who have made identical mistakes, similarly lack an introductory knowledge of C? Nginx devs? Apache? Etc, etc.

                  This is an extraordinary claim.

                  You don’t solve bad engineering by throwing a hack at it such as changing the language.

                  Probably one of the most successful fields in history at profoundly reducing, and keeping low, error rates has been the Aviation industry, and the lesson of their success is that you don’t solve human errors by insisting that the people who made the errors would have known not to if they’d just taken an introductory course they’d already long covered, or in general just be more perfect.

                  The Aviation industry realized that humans, no matter how well tutored and disciplined and focused, inevitably will still make mistakes, and that the only thing that reduces errors is looking at the mistakes that are made and then changing the system to account for those mistakes and reduce or eliminate their ability to recur.

                  When your dogma leads to you making extraordinary (indeed, ludicrous) claims, and the historical evidence points the polar opposite of the attitude you’re preaching being the successful approach, it’s past time to start reconsidering your premise.

                  1. 13

                    The Aviation industry realized that humans, no matter how well tutored and disciplined and focused, inevitably will still make mistakes, and that the only thing that reduces errors is looking at the mistakes that are made and then changing the system to account for those mistakes and reduce or eliminate their ability to recur.

                    I’d like to stress that this is only one part of Aviation’s approach, at least as driven by the FAA and the NTSB in the US. The FAA also strives to create a culture of safety, by mandating investigations into incidents, requiring regular medical checkups depending on your pilot rating, releasing accident findings often, incentivizing record-keeping on both aircraft (maintenance books) and pilots (logbooks), encouraging pilots to share anonymous information on incidents that occurred with minor aircraft or no passenger impact, and many more. This isn’t as simple as tweaking the system. It’s about prioritizing safety at every step of the conversation.

                    1. 7

                      A fair point. And of course all of this flies directly in the face of just yelling “be more perfect at using the deadly tools!” at people.

                      1. 4

                        Yup, I meant this more to demonstrate what it takes to increase safety in an organized endeavor.

                      2. 1

                        Dropping the discussion of “the problem is human nature” in this comment. I’m explicitly not rhetorically commenting on it or implying such.

                        These “other parts”, and culture of safety - how would we translate that across into programming? Actually, come to think of it that’s probably not the first question. The first question is, is it possible to translate that across into programming?

                        I think it’s fair to say that in e.g. webdev people flat-out just value developer velocity over aerospace levels of safety because (I presume) faster development is simply more valuable in webdev than it is in aerospace: if the thing crashes every Tuesday you’ll lose money, but you won’t lose that much money. So, maybe it’s impractical to construct such a culture. Maybe. I don’t know.

                        But, supposing it is practical, what are we talking about? Record-keeping sounds like encouraging people to blog about minor accidents, I guess? But people posting blogs is useless if you don’t have some social structure for discussing the stuff, and I’m not sure what the analogous social structure would be here.

                        “Prioritizing safety at every step of the conversation” sounds like being able to say no to your boss without worry.

                        “This isn’t as simple as tweaking the system” sounds like you’re saying “treat this seriously and stop chronically underserving it both financially and politically”, which sounds to me like “aim for the high-hanging fruit of potential problems”, which I don’t think anyone with the word “monetize” in their job description will ever remotely consider.

                        What are the low-hanging fruit options in this “stop excessively focusing on low-hanging fruit options” mindset you speak of?

                        Actually, it sounds like that sort of thing would need some sort of government intervention in IT security, or massive consumer backlash. Or more likely both, with the latter causing the former.

                        1. 1

                          The first question is, is it possible to translate that across into programming?

                          It most certainly is. The “easiest” place to see evidence of this is to look into fields of high-reliability computing. Computing for power plants, aviation, medical devices, or space are all good examples. A step down would be cloud providers that do their best to provide high availability guarantees. These providers also spend a lot of engineering effort + processes in emphasizing reliability.

                          But, supposing it is practical, what are we talking about? Record-keeping sounds like encouraging people to blog about minor accidents, I guess? But people posting blogs is useless if you don’t have some social structure for discussing the stuff, and I’m not sure what the analogous social structure would be here.

                          Think of the postmortem process posted on the blogs of the big cloud providers. This is a lot like the accident reports that the NTSB releases after an accident investigation. I think outside of the context of a single entity coding for a unified goal (whether that’s an affiliation of friends, a co-op, or a company), it’s tough to create a “culture” of any sort, because in different contexts of computing, different tradeoffs are desired. After all, I doubt you need a high-reliability process to write a simple script.

                          “This isn’t as simple as tweaking the system” sounds like you’re saying “treat this seriously and stop chronically underserving it both financially and politically”, which sounds to me like “aim for the high-hanging fruit of potential problems”, which I don’t think anyone with the word “monetize” in their job description will ever remotely consider.

                          You’d be surprised how many organizations, both monetizing and not, have this issue. Processes become ossified; change is hard. Aiming for high-hanging fruit is expensive. But a mix of long-term thinking and short-term thinking is always the key to making good decisions, and in computing it’s no different. You have to push for change if you’re pushing against a current trend of unsafety.

                          What are the low-hanging fruit options in this “stop excessively focusing on low-hanging fruit options” mindset you speak of?

                          There needs to be a feedback mechanism between failure of the system and engineers creating the system. Once that feedback is in place, safety can be prioritized over time. Or at least, this is one way I’ve seen this done. There are probably many paths out there.

                          I think it’s fair to say that in e.g. webdev people flat-out just value developer velocity over aerospace levels of safety because (I presume) faster development is simply more valuable in webdev than it is in aerospace: if the thing crashes every Tuesday you’ll lose money, but you won’t lose that much money. So, maybe it’s impractical to construct such a culture. Maybe. I don’t know.

                          This here is the core problem. Honestly, there’s no reason to hold most software to a very high standard. If you’re writing code to scrape the weather from time to time from some online API and push it to a billboard, meh. What software needs to do is get a lot better about prioritizing safety in the applications that require it (and yes, that will require some debate in the community to come up with applications that require this safety, and yes there will probably be different schools of thought as there always are). I feel that security is a minimum, but beyond that, it’s all application specific. Perhaps the thing software needs the most now is just pedagogy on operating and coding with safety in mind.

                          1. 1

                            A step down would be cloud providers that do their best to provide high availability guarantees. These providers also spend a lot of engineering effort + processes in emphasizing reliability.

                            Google’s SRE program and the SRE book being published for free are poster examples of promoting a culture of software reliability.

                    2. 18

                      You don’t solve bad engineering by throwing a hack at it such as changing the language.

                      Yes, you absolutely do. One thing you can rely on is that humans will make mistakes. Even if they are the best, even if you pay them the most, even if you ride their ass 24 hours a day. Languages that make certain kinds of common mistakes uncommon or impossible save us from ourselves. All other things being equal, you’d be a fool not to choose a safer language.

                      1. 8

                        I wrote a crypto library in C. It’s small, only 2K lines of code. I’ve been very diligent every step of the way (save one, for which I paid dearly). I reviewed the code several times over. There was even an external audit. And very recently, I fixed what was basically dead code: it copied a whopping 1KB, allocated and wiped a whole buffer, and wasted lines of code, for no benefit whatsoever. Objectively a poor piece of code with a basic flaw.

                        I’m very careful with my library, and I’m very proud of its overall quality; but sometimes I’m just tired.

                        (As for why it wasn’t noticed: as bad as it was, the old code was correct, so it didn’t trigger any error.)

                        1. 7

                          All programmers are bad programmers, then; otherwise why do we need compiler error messages?

                          Software apparently can’t just be written correctly the first time.

                          1. 15

                            I’m half joking here but, indeed, if language-level memory safety were all it takes for secure software to happen, we could have been saved ages ago. We didn’t have to wait for Go, or Rust, or Zig to pop up. A memory-safe language with no null pointers, where buffer overflow, double-frees and use-after-free bugs are impossible, has been available for more than 20 years now, and the security track record of applications written in that language is a very useful lesson. That language is PHP.

                            I’m not arguing that (re)writing curl in Go or Rust wouldn’t eventually lead to a program with fewer vulnerabilities in this particular class, I’m just arguing that “this program is written in C and therefore not trustworthy because C is an unsafe language” is, at best, silly. PHP 4 was safer than Rust and boy do I not want to go back to dealing with PHP 4 applications.

                            Now of course one may argue that, just like half of curl’s vulnerabilities are C mistakes, half of those vulnerabilities were PHP 4 mistakes. But in that case, it seems a little unwise to wager that, ten years from now, we won’t have any “half of X vulnerabilities are Rust mistakes” blog posts…

                            1. 13

                              Language-level anything isn’t all it takes, but from my experience they do help and they help much more than “a little”, and… I’ll split this in two.

                              The thing I’ve done that found the largest number of bugs ever was when I once wrote a script to look for methods (in a >100kloc code base) that had three properties: a) the method accepted at least one pointer parameter, b) its code contained null, and c) its documentation did not mention null. Did that find all null-related errors? Far from it, and there were several false positives for each bug, and many of the bugs weren’t serious, but I used the output to fix many bugs in just a couple of days.

                              Did this fix all bugs related to null pointers? No, not even nearly. Could I have found and fixed them in other ways? Yes, I could. The other ways would have been slower, though. The script (or let’s call it a query) augmented my capability, in much the same way as many modern techniques augment programmers.

                              And this brings me to the second part.

                              We have many techniques that do nothing capable programmers can’t do. (I’ve written assembly language without any written specification, other documentation, unit tests or dedicated testers, and the code ran in production and worked. It can be done.)

                              That doesn’t mean that these techniques are superfluous. Capable programmers are short of time and attention; techniques that use CPU cycles, RAM and files, and that save brain time are generally a net gain.

                              That includes safe languages, but also things like linting, code queries, unit tests, writing documentation and fuzzing (or other white-noise tests). I’d say it also includes code review, which can be described as using other team members’ attention to reduce the total attention needed to deliver features/fix bugs.

                              Saying “this program is safe because it has been fuzzed” or “because it uses unit tests” doesn’t make sense. But “this program is unsafe because it does not use anything more than programmer brains” makes sense and is at least a reasonable starting assumption.

                              (The example I used above was a code query. A customer reported a bug, I hacked together a code query to find similar possible trouble spots, and found many. select functions from code where …)

                              1. 2

                                PHP – like Go, Rust and many others out there – also doesn’t use anything more than programmer brains to avoid off-by-one errors, for example, which is one of the most common causes of bugs with or without security implications. Yet nobody rushes to claim that programs written in one of these languages are inherently unsafe because they rely on nothing but programmer brains to find such bugs.

                                As I mentioned above: I’m not saying these things don’t matter, of course they do. But conflating memory safety with software security or reliability is a bad idea. There’s tons of memory-safe code out there that has so many CVEs it’s not even funny.

                                1. 17

                                  But conflating memory safety with software security or reliability is a bad idea. There’s tons of memory-safe code out there that has so many CVEs it’s not even funny.

                                  Who is doing this? The title of the OP is explicitly not conflating memory safety with software security. Like, can you find anyone with any kind of credibility conflating these things? Are there actually credible people saying, “curl would not have any CVEs if it were written in a memory safe language”?

                                  It is absolutely amazing to me how often this straw man comes up.

                                  EDIT: I use the word “credible” because you can probably find a person somewhere on the Internet making comments that support almost any kind of position, no matter how ridiculous. So “credible” in this context might mean, “an author or maintainer of software the other people actually use.” Or similarish. But I do mean “credible” in a broad sense. It doesn’t have to be some kind of authority. Basically, someone with some kind of stake in the game.

                                  1. 7

                                    Just a few days ago there was a story on the lobster.rs front page whose author’s chief complaint about Linux was that its security was “not ideal”, the first reason for that being that “Linux is written in C, [which] makes security bugs rather common and, more importantly, means that a bug in one part of the code can impact any other part of the code. Nothing is secure unless everything is secure.” (Edit: which, to be clear, was in specific contrast to some Wayland compositor being written in Rust).

                                    Yeah, I’m tired of it, too. I like and use Rust but I really dislike the “evangelism taskforce” aspect of its community.

                                    1. 9

                                      I suppose “nothing is secure unless everything is secure” is probably conflating things. But saying that C makes security bugs more common doesn’t necessarily. In any case, is this person credible? Are they writing software that other people use?

                                      I guess I just don’t understand why people spend so much time attacking a straw man. (Do you even agree that it is a straw man?) If someone made this conflation in a Rust space, for example, folks would be very quick to correct them that Rust doesn’t solve all security problems. Rust’s thesis is that it reduces them. Sometimes people get confused either because they don’t understand or because none are so enthusiastic as the newly converted. But I can’t remember anyone with credibility making this conflation.

                                      Like, sure, if you see someone conflating memory safety with all types of security vulnerabilities, then absolutely point it out. But I don’t think it makes sense to talk about that conflation as a general phenomenon that is driving any sort of action. Instead, what’s driving that action is the thesis that many security vulnerabilities are indeed related to memory safety problems, and that using tools which reduce those problems in turn can eventually lead to more secure software. While some people disagree with that, it takes a much more nuanced argument and it sounds a lot less ridiculous than the straw man you’re tearing down.

                                      Yeah, I’m tired of it, too. I like and use Rust but I really dislike the “evangelism taskforce” aspect of its community.

                                      I’m more tired of people complaining about the “evangelism taskforce.” I see a lot more of that than I do the RESF.

                                      1. 7

                                        Sorry, I think I should have made the context more obvious. I mean, let me start with this one, because I’d also like to clarify that a) I think Rust is good and b) that, as far as this particular debate is concerned, I think writing new things in Rust rather than C or especially C++ is a good idea in almost every case:

                                        (Do you even agree that it is a straw man?)

                                        What, that experienced software developers who know and understand Rust are effectively claiming that Rust is magic security/reliability dust? Oh yeah, I absolutely agree that it’s bollocks, I’ve seen very few people who know Rust and have more than a few years of real-life development experience in a commercial setting making that claim with a straight face. There are exceptions but that’s true of every technology.

                                        But when it comes to the strike force part, here’s the thing:

                                        If someone made this conflation in a Rust space, for example, folks would be very quick to correct them that Rust doesn’t solve all security problems.

                                        …on the other hand, for a few years now it feels like outside Rust spaces, you can barely mention an OS kernel or a linker or a window manager or (just from a few days ago!) a sound daemon without someone showing up saying ugh, C, yeah, this is completely insecure, I wouldn’t touch it with a ten-foot pole. Most of the time it’s at least plausible, but sometimes it’s outright ridiculous – you see the “not written in Rust” complaint stuck on software that has to run on platforms Rust doesn’t even support, or that was started ten years ago and so on.

                                        Most of them aren’t credible by your own standards or mine, of course, but they’re part of the Rust community whether they’re representative of the “authoritative” claims made by the Rust developers or not.

                                        1. 4

                                          Fair enough. Thanks for the reply!

                                          1. 4

                                            …on the other hand, for a few years now it feels like outside Rust spaces, you can barely mention an OS kernel or a linker or a window manager or (just from a few days ago!) a sound daemon without someone showing up saying ugh, C, yeah, this is completely insecure, I wouldn’t touch it with a ten-foot pole. Most of the time it’s at least plausible, but sometimes it’s outright ridiculous – you see the “not written in Rust” complaint stuck on software that has to run on platforms Rust doesn’t even support, or that was started ten years ago and so on.

                                            As I mentioned in my comment below on this article, this is a good thing. I want people who decide to write a novel sound daemon in C to see those sorts of comments, and (ideally) rethink the decision to write a novel C program to begin with. Again, this doesn’t necessarily imply that Rust is the right choice of language for any given project, but it’s a strong contender right now.

                                            1. 4

                                              Even now though, there still is significant tension between “don’t use C” and “make it portable”. Especially if you’re targeting embedded, or unknown platforms. C is still king of the hill as far as portability goes.

                                              What we really want is to dethrone C at its own game: make something that eventually becomes even more portable. That’s possible: we could target C as a backend, and we could formally specify the language so it’s clear what’s a compiler bug (not to mention the possibility of writing formally verified compilers). Rust isn’t there yet.

                                              1. 6

                                                One of the (many) reasons I don’t use C and use Rust instead is because it’s easier to write portable programs. I believe every Rust program I’ve written also works on Windows, and that has nearly come for free. Certainly a lot cheaper than if I had written it in C. I suppose people use “portable” to mean different things, but without qualification, your dichotomy doesn’t actually seem like a dichotomy. I suppose the dichotomy is more, “don’t use C” and “make it portable to niche platforms”?

                                                1. 3

                                                  I think we (as in both me and the parent poster) were talking about different kinds of portability. One of the many reasons why most of the software I work on is (still) in C rather than Rust is that, while every Rust program I’ve written works on Windows, lots of the ones I need have to work on architectures that are, at best, Tier 2. Proposing that we ship something compiled with a toolchain that’s only “guaranteed to build” would at best get me laughed at.

                                                  1. 9

                                                    Yes. The point I’m making is that using the word “portable” unqualified is missing the fact that Rust lets you target one of the most popular platforms in the world at a considerably lower cost in lots of common cases. It makes the trade off being made more slanted than what folks probably intend by holding up “portability” as a ubiquitously good thing. Well, if we’re going to do that, we should acknowledge that there is a very large world beyond POSIX and embedded, and that world is primarily Windows.

                                                    1. 5

                                                      For the record, if I were writing desktop GUI applications or games, of course the only relevant platforms are Windows, Linux, and macOS. Or Android and iOS, if the application is meant for palmtops. From there “portability” just means I chose middleware that has backends for all the platforms I care about. Rust, with its expanded standard library, does have an edge.

                                                      If however I’m writing a widely applicable library (like a crypto library), then Rust suddenly doesn’t look so good anymore. Because I know for a fact that many people still work on platforms that Rust doesn’t support yet. Not to mention the build dependency on Rust itself. So either I still use C, and I have more reach, or I use Rust, and I have more safety (not by much if I test my C code correctly).

                                                      Well, if we’re going to do that, we should acknowledge that there is a very large world beyond POSIX and embedded, and that world is primarily Windows.

                                                      Of course, my C library is also going to target Windows. Not doing so would defeat the point.

                                                      1. 6

                                                        I don’t think I strongly disagree with anything here. It’s just when folks say things like this

                                                        C is still king of the hill as far as portability goes.

                                                        I would say, “welllll I’m not so sure about that, because I can reach a lot more people with less effort using Rust than I can with C.” Because if I use C, I now need to write my own compatibility layer between my application and the OS in order to support a particularly popular platform: Windows.

                                                        And of course, this depends on your target audience, the problem you’re solving and oodles of other things, as you point out. But there’s a bit of nuance here because of how general the word “portable” is.

                                                        1. 3

                                                          Yeah, I was really talking about I/O free libraries. I believe programs should be organised in 3 layers:

                                                          • At the bottom, you have I/O free libraries, that depend on nothing but the compiler and maybe the standard library. That lack of dependency can make them extremely portable, and easy to integrate to existing projects. The lack of I/O makes them easy to test, so they have the potential to be very reliable, even if they’re written in an unsafe language.
                                                          • In the middle, you have the I/O compatibility layer. SDL, Qt, Libuv, Rust’s stdlib, even hand written, take your pick. That one cannot possibly be portable, because it has to depend on the quirks of the underlying platform. But it can have several backends, which make the users of this compatibility layer quite portable.
                                                          • At the top, you have the application, that depends on the I/O free library and the compatibility layer. It cannot target platforms the compatibility layer doesn’t target, but at least it should be fairly small (maybe 10 times smaller than the I/O free libraries?), so if a rewrite is needed it shouldn’t be that daunting.

                                                          I believe C is still a strong contender for the bottom layer. There specifically, it is still the king of portability. For the middleware and the top layer however, the portability of the language means almost nothing, so it’s much harder to defend using C there.

                                                          Also note that the I/O free libraries can easily be application specific, and not intended for wider distribution. In that case, C also loses its edge, as (i) portability matters much less, and (ii) it’s still easier to use a safe language than write a properly paranoid test suite.

                                                      2. 0

                                                        So does C#. Targeting popularity does not make you portable.

                                                    2. 3

                                                      I was talking about “runs on a 16-bit micro controller as well as a 64-bit monster”. The kind where you might not have any I/O, or even a heap allocator. The kind where you avoid undefined behaviour and unspecified behaviour and implementation-defined behaviour.

                                                      Hard, but possible for some programs. Crypto libraries (without the RNG) for instance are pure computation, and can conform to that highly restricted setting. I’ll even go a bit further: I think over 95% of programs can be pure computation, and be fully separated from the rest (I/O, system calls, networking and all that).

                                                      If you want to print stuff on a terminal, portability drops. If you want to talk to the network or draw pixels on the screen, portability in C is flat out impossible, because the required capabilities aren’t in the standard library. I hear Rust fares far better in that department.

                                                      I suppose the dichotomy is more, “don’t use C” and “make it portable to niche platforms”?

                                                      OpenBSD has tier 3 support for Rust, which basically means no support. You assess how “niche” OpenBSD really is, especially in a security context.

                                                      1. 8

                                                        Yes, I would absolutely say OpenBSD is a niche platform. I generally don’t care if the stuff I write works on OpenBSD. I just don’t. I care a lot more that it runs on Windows though. If more people used OpenBSD, then I’d care more. That’s the only reason I care about Windows. It’s where the people are.

                                                        Niche doesn’t mean unimportant.

                                                        To pop up a level, I was making a very narrow point on a particular choice of wording. Namely, that “Rust isn’t as portable as C” is glossing over some really significant nuance depending on what you’re trying to do. If all you’re trying to do is distribute a CLI application, then Rust might not let you target as many platforms as easily as C, but it might let you reach more people with a lot less effort.

                                                  2. 4

                                                    I want people who decide to write a novel sound daemon in C to see those sorts of comments, and (ideally) rethink the decision to write a novel C program to begin with.

                                                    What in the world makes you think they haven’t considered that question and concluded that C, for all its shortcomings, was nonetheless their best option?

                                                    1. 1

                                                      If the conclusion is C, their thinking is wrong.

                                                  3. 1

                                                    stuck on software that has to run on platforms Rust doesn’t even support

                                                    Porting Rust to a platform sounds more achievable than writing correct software in C, so the only thing ridiculous is that people think “I haven’t ported it” is a valid excuse.

                                              2. 2

                                                Who is doing this?

                                                Lots of people. Search for “heartbleed C” or “heartbleed memory safety” or “heartbleed rust”.

                                                Are there actually credible people saying, “curl would not have any CVEs if it were written in a memory safe language”?

                                                They are not credible to me if they make such absurd claims, but they exist in very large numbers. They won’t claim that all problems would go away, but they all point out that heartbleed wouldn’t happen if openssl was written in Rust (for example). Yes, there are hundreds of such claims on the web. Thousands probably. As if a basic error like the one that led to heartbleed could only take the form of a memory safety problem.

                                                As for credibility. I don’t find your definition very useful. There is a lot of software used by millions, much of it genuinely useful, that is still badly engineered. I don’t think popularity is a good indicator for credibility.

                                                1. 6

                                                  Can you actually show me someone who is claiming that all security vulnerabilities will be fixed by using Rust or some other memory safe language that would meet your standard of credibility if it weren’t for that statement itself?

                                                  I tried your search queries and I found nobody saying or implying something like, “using a memory safe language will prevent all CVEs.”

                                                  but they all point out that heartbleed wouldn’t happen if openssl was written in Rust (for example)

                                                  This is a very specific claim though. For the sake of argument, if someone were wrong about that specific case, that doesn’t mean they are conflating memory safety for all security vulnerabilities. That’s what I’m responding to.

                                                  As for credibility. I don’t find your definition very useful. There is a lot of software used by millions, much of it genuinely useful, that is still badly engineered. I don’t think popularity is a good indicator for credibility.

                                                  So propose a new one? Sheesh. Dispense with the pointless nitpicking and move the discussion forward. My definition doesn’t require something to be popular. I think I was pretty clear in my comment what I was trying to achieve by using the word “credible.” Especially after my edit. I even explicitly said that I was trying to use it in a very broad sense. So if you want to disagree, fine. Then propose something better. Unless you have no standard of credibility. In which case, I suppose we’re at an impasse.

                                                  1. 1

                                                    I tried your search queries and I found nobody saying or implying something like, “using a memory safe language will prevent all CVEs.”

                                                    I never made such a claim. You are insisting on the whole “prevent all CVEs”. That is an extreme point that I never made, nor did any other people in this thread. If you take it to that extreme, then sure you are right. I never claimed that people say that Rust will magically do their laundry either. Please let’s keep the discussion at a reasonable level so it stays fruitful.

                                                    FWIW, for “heartbleed rust”, google returns this in the first page:

                                                    • Would the Cloudbleed have been prevented if Rust was used
                                                    • How to Prevent the next Heartbleed
                                                    • Would Rust have prevented Heartbleed? Another look

                                                    All these are absurd. It is not my point to shame or blame. I have no idea who the author of the heartbleed offending code is. And in all honesty we all have made mistakes. But let’s not take relativism to the absurd. Let’s be clear, it was objectively very poorly written code with a trivial error. A seasoned engineer should look at that and immediately see the problem. If you think that level of quality is less problematic when you use a ‘safer’ language, you are in for very bad surprises. It was objectively bad engineering, nothing less. The language had nothing to do with that. A problem with the same severity would have the same probability to occur in Rust, it would just take another form. The claims in the titles I quoted from my Google search are silly. If you jump from a plane without a parachute you also prevent the whole class of accidents that happen when the chute opens up. I am sure people understand that that is a silly claim.

                                                    This is a very specific claim though. For the sake of argument, if someone were wrong about that specific case, that doesn’t mean they are conflating memory safety for all security vulnerabilities. That’s what I’m responding to.

                                                    Again, no one is claiming that the Rust community is conflating memory safety with “all safety vulns”. I have no clue where you got that from. But to the point, it is as specific as it is pointless, as is the parachute example.

                                                    1. 6

                                                      I never made such a claim.

                                                      I didn’t say you did. But that’s the claim I’m responding to.

                                                      You are insisting on the whole “prevent all CVEs”.

                                                      I didn’t, no. I was responding to it by pointing out that it’s a straw man. It sounds like you agree. Which was my central point.

                                                      nor did any other people in this thread

                                                      No, they did:

                                                      But conflating memory safety with software security or reliability is a bad idea. There’s tons of memory-safe code out there that has so many CVEs it’s not even funny.

                                                      The rest of your comment is just so far removed from any reality that I know that I don’t even know how to engage with it. It sounds like you’re in the “humans just need to be better” camp. I’m sure you’ve heard the various arguments about why that’s not a particularly productive position to take. I don’t have any new arguments to present.

                                                      1. 3

                                                        No, they did:

                                                        Specifically, I (sort of) claimed that and expanded upon it here. And I’m just going to re-emphasise what I said in that comment.

                                                        I see from your profile that you’re a member of the Rust library team – I imagine most of the interactions you have within the Rust community are with people who are actively involved in building the Rust environment. That is, people who have the expertise (both with Rust and other aspects software development), the skill, and the free time to make substantial contributions to a game-changing technology, and who are therefore extremely unlikely to claim anything of that sort.

                                                        So I understand why this looks like a straw man argument to you – but this is not the Rust-related interaction that many people have. I was gonna say “most” but who knows, maybe I just got dealt a bad hand.

                                                        Most of “us” (who don’t know/use Rust or who, like me, don’t use it that much) know it via the armchair engineering crowd that sits on the sides, sneers at software like the Linux kernel and casually dismisses it as insecure just for being written in C, with the obvious undertone that writing it in Rust would make it secure. Like this or like this or like this.

                                                        They’re not dismissing it as memory unsafe and the undertone isn’t that (re)writing it in Rust would plug the memory safety holes. When they propose that some 25 year-old piece of software be rewritten in Rust today, the idea, really, is that if you start today, you’ll have something that’s more secure, whatever the means, in like one or two years.

                                                        That’s why there are people who want RIIR flags. Not to avoid useful discussion with knowledgeable members of the Rust community like you, but to avoid dismissive comments from the well ackshually crowd who thinks about something for all of thirty seconds and then knows exactly where and why someone is wrong about a project they’ve been working on for five years.

                                                        1. 5

                                                          I imagine most of the interactions you have within the Rust community are with people who are actively involved in building the Rust environment.

                                                          Not necessarily. It depends on the day. I am also on the moderation team. So I tend to get a narrow view on library matters and a very broad view on everything else. But I also frequent r/rust (not an “official” Rust space), in addition to HN and Lobsters.

                                                          That is, people who have the expertise (both with Rust and other aspects software development), the skill, and the free time to make substantial contributions to a game-changing technology, and who are therefore extremely unlikely to claim anything of that sort.

                                                          Certainly. I am under no illusion about that. I don’t think you or anyone was saying “core Rust engineers have made ridiculous claim Foo.” That’s why I was asking for more data. I wanted to hear about any credible person who was making those claims.

                                                          FWIW, you didn’t just do this with Rust. You kinda did it with Java too in another comment:

                                                          There was a more or less general expectation (read: lots of marketing material, since Java was commercially-backed, but certainly no shortage of independent tech evangelists) that, without pointers, all problems would go away – no more security issues, no more crashes and so on.

                                                          I mean, like, really? All problems? I might give you that there were maybe some marketing materials that, by virtue of omission, gave the impression that Java solved “all problems.” But, a “general expectation”? I was around back then too, and I don’t remember anything resembling that.

                                                          But, like, Java did solve some problems. It came with some of its own, not all of which were as deeply explored as they are today.

                                                          See, the thing is, when you say hyperbolic things like this, it makes your side of the argument a lot easier to make. Because making this sort of argument paints the opposing side as patently ridiculous, and this in turn removes the need to address the nuance in these arguments.

                                                          So I understand why this looks like a straw man argument to you – but this is not the Rust-related interaction that many people have. I was gonna say “most” but who knows, maybe I just got dealt a bad hand.

                                                          Again. If you see someone with a misconception like this—they no doubt exist—then kindly point it out. But talking about it as a sort of general phenomenon just seems so misguided to me. Unless it really is a general phenomenon, in which case, I’d expect to be able to observe at least someone building software that others use on the premise that switching to Rust will fix all of their security problems. Instead, what we see are folks like the curl author making a very careful analysis of the trade offs involved here. With data.

                                                          Like this or like this or like this.

                                                          RE https://news.ycombinator.com/item?id=25921917: Yup, that’s a troll comment from my perspective. If I had seen it, I would have flagged it.

                                                          RE https://news.ycombinator.com/threads?id=xvilka: I can’t tell if they’re a troll, but they definitely post low effort comments. I’d downvote most of them if I saw them. I downvote almost any comment that is entirely, “Why didn’t you write it in X language?” Regrettably, it can be a legitimate question for beginners to ask, since a beginner’s view of the world is just so narrow, nearly by definition.

                                                          RE https://news.ycombinator.com/item?id=26398042: Kinda more of the above.

                                                          I note that the first and third links you gave were downvoted quite a bit. So that seems like the system is working. And that there aren’t hordes of people secretly in favor of comments like that and upvoting them.

                                                          FWIW, I don’t recognize any of these people as Rust community members. Or rather, I don’t recognize their handles. And as a moderator, I am at least passively aware of pretty much anyone that frequents Rust spaces. Because I have to skim a lot of content.

                                                          I’m not sure why we have descended into an RESF debate. For every RESF comment you show, I could show you another anti-RESF Lobsters’ comment.

                                                          It’s just amazing to me that folks cannot distinguish between the zeal of the newly converted and the actual substance of the idea itself. Like, look at this comment in this very thread. Calling this post “RIIR spam,” even though it’s clearly not.

                                                          They’re not dismissing it as memory unsafe and the undertone isn’t that (re)writing it in Rust would plug the memory safety holes. When they propose that some 25 year-old piece of software be rewritten in Rust today, the idea, really, is that if you start today, you’ll have something that’s more secure, whatever the means, in like one or two years.

                                                          But that’s very different than what you said before. It doesn’t necessarily conflate memory safety with security. That’s a more nuanced representation of the argument and it is much harder to easily knock down (if at all). It’s at least true enough that multiple groups of people with a financial stake have made a bet on that being true. A reasonable interpretation of “more secure” is “using Rust will fix most or nearly all of the security vulnerabilities that we have as a result of memory unsafety.” Can using Rust also introduce new security vulnerabilities unrelated to memory safety by virtue of the rewrite? Absolutely. But whether this is true or not, and to what extent, really depends on a number of nuanced factors.

                                                          That’s why there are people who want RIIR flags. Not to avoid useful discussion with knowledgeable members of the Rust community like you, but to avoid dismissive comments from the well ackshually crowd who thinks about something for all of thirty seconds and then knows exactly where and why someone is wrong about a project they’ve been working on for five years.

                                                          The “well ackshually” crowd exists pretty much everywhere. Rust perhaps has a higher concentration of them right now because it’s still new. But they’re always going to be around. I’ve been downvoting and arguing with the “well ackshually” crowd for years before I even knew what Rust was.

                                                          If you see a dismissive comment that isn’t contributing to the discussion, regardless of whether it’s about RIIR or not, flag it. I have absolutely no problem with folks pointing out low effort RESF comments that express blind enthusiasm for a technology. Trade offs should always be accounted for. My problem is that the anti-RESF crowd is not doing just that. They are also using it as a bludgeon against almost anything that involves switching to Rust. This very post, by Curl’s author, is not some low effort RESF bullshit. (I should say that not all RESF bullshit is trolling. I’ve come across a few folks that are just new to the world of programming. So they just don’t know how to see the nuance in things yet, even if it’s explicitly stated. There’s only so much novel signal a brain can take in at any point. Unfortunately, it’s difficult to differentiate between sincere but misguided beginners and trolls. Maybe there are people other than trolls and beginners posting RESF bullshit, but I don’t actually know who they are.)

                                                          Either way, if we get RIIR flags, then we should get anti-RIIR flags. See where that leads? Nowhere good. Because people can’t seem to differentiate between flippant comments and substance.

                                                          Sorry I got a bit ranty, but this whole thread is just bush league IMO.

                                                          1. 3

                                                            This very post, by Curl’s author, is not some low effort RESF bullshit.

                                                            No, I’m aware of that. But the RESF bullshit posters have a bit of a history with Curl: https://daniel.haxx.se/blog/2017/03/27/curl-is-c/.

                                                            Every once in a while someone suggests to me that curl and libcurl would do better if rewritten in a “safe language”. Rust is one such alternative language commonly suggested. This happens especially often when we publish new security vulnerabilities.

                                                            I try to keep away from these bush league threads myself but, uh, sometimes you just go with it, and this was one of those cases precisely because of that context.

                                                            I’ve been slowly trying to nudge people into using Rust on embedded systems ever since I gave up on Ada, so I’m not trying to dismiss it, I have quite an active interest in it. Yet I’ve been at the receiving end of “you should rewrite that in a safe language” many times, too, like most people writing firmware. And I don’t mean on lobste.rs (which has a level-headed audience, mostly :P), I mean IRL, too. Nine times out of ten these discussions are bullshit.

                                                            That’s because nine times out of ten they’re not carried out with people who are really knowledgeable about Rust and firmware development. E.g. I get long lectures about how it’ll vastly improve firmware reliability by eliminating double frees and dangling pointers. When I try to point out that this is true in general, and that Rust’s memory model is genuinely helpful in embedded systems (the borrow checker is great!), but that this particular problem is a non-issue because on an embedded system all allocations are static and we never get a double free because we don’t even malloc, I get long lectures about how two years from now everything will be AArch64 anyway and memory space won’t be an issue.
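
                                                            To make that concrete: the usual firmware pattern is a fixed pool sized at compile time, so there is no malloc/free lifecycle for a double free to hide in. A minimal C sketch, with purely illustrative names (not from any real firmware):

```c
/* Sketch of the static-allocation pattern common in firmware:
   every buffer lives in a fixed pool decided at build time, so
   malloc/free (and thus double-free bugs) never enter the picture.
   All names here are illustrative. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PACKETS 4
#define PACKET_SIZE 64

typedef struct {
    uint8_t data[PACKET_SIZE];
    size_t  len;
    int     in_use;
} packet_t;

/* The whole "heap": a fixed array sized at compile time. */
static packet_t packet_pool[MAX_PACKETS];

/* "Allocate" = mark a free slot as used; no heap involved. */
static packet_t *packet_acquire(void) {
    for (int i = 0; i < MAX_PACKETS; i++) {
        if (!packet_pool[i].in_use) {
            packet_pool[i].in_use = 1;
            packet_pool[i].len = 0;
            return &packet_pool[i];
        }
    }
    return NULL; /* pool exhausted: a visible, recoverable condition */
}

/* "Free" = mark the slot free again; releasing twice is a no-op. */
static void packet_release(packet_t *p) {
    if (p) p->in_use = 0;
}
```

                                                            Exhaustion shows up as an explicit, recoverable condition (packet_acquire returning NULL) rather than heap corruption.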

                                                            (Edit: to be clear – I definitely don’t support “RIIR” flags or anything of the sort, and indeed “the system” works, as in, when one of the RIIR trolls pops up, they get downvoted into oblivion, whether they’re deliberately trolling or just don’t know better. I’m just trying to explain where some of the negativity comes from, and why, in my personal experience, you often try to hold back on it even when you actually like Rust and want to use it more!)

                                                            I mean, like, really? All problems? I might give you that there were maybe some marketing materials that, by virtue of omission, gave that impression that Java solved “all problems.” But, a “general expectation”? I was around back then too, and I don’t remember anything resembling that.

                                                            Oh, yeah, that was my first exposure to hype, and it gave me a healthy dose of skepticism towards tech publications. I got to witness that as a part of the (budding, in my part of the world) tech journalism scene (and then, to some degree, through my first programming gigs). The cycle went basically as follows:

                                                            There were lots of talks and articles and books on Java between ’95 and ’97-‘98 (that was somewhat before my time but that’s the material I learned Java from later) that always opened with two things: it’s super portable (JVM!) and there are no pointers, so Java programs are less likely to crash due to bad memory accesses and are less likely to have security problems.

                                                            These were completely level-headed and obviously correct. Experienced programmers got it and even those who didn’t use Java nonetheless emulated some of the good ideas in their own environments.

                                                            Then 2-4 years later we got hit by all the interns and lovely USENET flamers who’d grown up on stories they didn’t really understand about Java and didn’t really qualify these statements.

                                                            So then I spent about two years politely fending off suggestions about articles on how net appliances and thin clients are using Java because it’s more secure and stable, on why everyone is moving to Java and C++ will only be used for legacy applications and so on – largely because I really didn’t understand these things well enough, but man am I glad I let modesty get the better of me. Most of my colleagues didn’t budge, either, but there was a period during which I read a “Java programs don’t crash” article every month because at least one – otherwise quite respectable – magazine would publish one.

                                                            1. 4

                                                              Aye. Thanks for sharing. I totally get your perspective. Talking about your projects with folks only to have to get into an exhausting debate that you’ve had umpteen times already is frustrating. Happens to me all the time too, for things outside of Rust. So I know the feeling. It happens in the form of, “why didn’t you do X instead of Y?” Depending on how it’s phrased, it can feel like a lowbrow dismissal. The problem is that that line of questioning is also a really great way of getting a better understanding of the thing you’re looking at in a way that fits into your own mental model of the world. Like for example, at work I use Elasticsearch. If I see a new project use SOLR, I’m going to be naturally curious as to why they chose it over Elasticsearch. I don’t give two poops about either one personally, but maybe they have some insight into the two that I don’t have, and updating my mental model would be nice. The problem is that asking the obvious question comes across as a dismissal. It’s unfortunate. (Of course, sometimes it is a dismissal. It’s not always asked in good faith. Sometimes it’s coupled with a healthy dose of snobbery, and those folks can just fuck right off.)

                                                              It’s harder to do IRL, but the technique I’ve adopted is that when someone asks me questions like that, I put about as much effort into the response as they did the question. If they’re earnest and just trying to understand, then my hope is that they might ask more follow up questions, and then it might become a nice teaching moment. But most of the time, it’s not.

                                                              I guess I would just re-iterate that my main issue with the anti-RIIR crowd is that it’s overbroad. If it were just some groaning about trolls, then fine. But it’s brought up pretty much any time Rust is brought up, even if bringing Rust up is appropriate.

                                                              But I suppose that’s the state of the Internet these days. Tribes are everywhere and culture wars can’t be stopped.

                                                      2. 2

                                                        And in all honesty, we all have made mistakes. But let’s not take relativism to the absurd. Let’s be clear: it was objectively very poorly written code with a trivial error. A seasoned engineer should look at that and immediately see the problem. If you think that level of quality is less problematic when you use a ‘safer’ language, you are in for very bad surprises. It was objectively bad engineering, nothing less. The language had nothing to do with that. A problem of the same severity would have the same probability of occurring in Rust; it would just take another form.

                                                        What form would it take? Would it be in the “all private keys in use on the internet can be leaked”-form? Probably not, I think?
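
                                                        For reference, the Heartbleed flaw boiled down to trusting a peer-supplied length field when echoing a buffer back. A toy reconstruction of the pattern (not OpenSSL’s actual code; the payload and the “secret” share one arena here so the demo stays well-defined C):

```c
/* Toy Heartbleed *pattern*: the responder copies claimed_len bytes
   because the peer said so, instead of clamping to the bytes actually
   received. For the demo, claimed_len must stay within the arena so
   behavior remains defined. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { ARENA = 64 };

/* 4 bytes of real payload, then data the peer should never see. */
static const char arena[ARENA] =
    "ping" "SECRET-KEY-MATERIAL....................";

/* Buggy: trusts the peer's claimed length. */
static size_t echo_buggy(char *out, size_t claimed_len) {
    memcpy(out, arena, claimed_len);   /* no check against real length */
    return claimed_len;
}

/* Fixed: clamp to the bytes we actually hold. */
static size_t echo_fixed(char *out, size_t claimed_len, size_t real_len) {
    size_t n = claimed_len < real_len ? claimed_len : real_len;
    memcpy(out, arena, n);
    return n;
}
```

                                                        The fix is one comparison; the damage from omitting it was adjacent memory (in the real bug, key material) echoed back to the attacker.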

                                                        Anyway, let’s forget about security for a minute; why wouldn’t you want the computer to automate memory management for you? Our entire job as programmers is to automate things, and the more things are automated, the better, as it’s less work for us. This is why we write programs and scripts in the first place.

                                                        Traditionally, automated memory management has come with some trade-offs (e.g. runtime performance hits due to GC), and Rust attempts to find a solution which automates things without those drawbacks. This seems like a good idea to me, because it’s just more convenient: I want the computer to do as much work for me as possible; that’s its job.

                                                        Back to security: if I have a door that can be securely locked by just pressing a button vs. a door that can be securely locked only by some complicated procedure, then the first door is more secure, as it’s easier to use. Sooner or later people will invariably make some mistake in the second door’s procedure. Does that mean the first door guarantees security? No, of course not. You might forget to close the window, or you might even forget to press that button. But it sure reduces the number of things you need to do for a secure locking, and the chances you get it right are higher.
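
                                                        In C terms, the second door’s “complicated procedure” is pairing every malloc with exactly one free on every exit path. The goto-cleanup idiom below exists precisely because hand-tracking that is error-prone (illustrative sketch; Rust’s ownership model effectively generates the cleanup for you):

```c
/* Every malloc must be matched by exactly one free on every path out
   of the function. The goto-cleanup idiom funnels all exits through a
   single release point. Illustrative helper, not from any real code. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Returns a heap copy of the first n bytes of src, or NULL on failure. */
static char *copy_prefix(const char *src, size_t n) {
    char *tmp = NULL;
    char *out = NULL;

    tmp = malloc(n + 1);
    if (!tmp) goto cleanup;          /* early exits all funnel here */

    memcpy(tmp, src, n);
    tmp[n] = '\0';

    out = malloc(n + 1);
    if (!out) goto cleanup;
    memcpy(out, tmp, n + 1);

cleanup:
    free(tmp);                       /* freed exactly once on every path */
    return out;
}
```

                                                        Forget the `goto`, add an early `return`, or free `tmp` twice, and you get exactly the class of bug under discussion; the discipline is entirely on the programmer.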

                                            2. 4

                                              Kinda sorta. PHP is/was largely written in C itself and IIRC had its share of memory related security bugs from just feeding naughty data to PHP standard library functions.

                                              I don’t know what PHP is like today from that point of view.

                                              So, I take issue when you say:

                                              I’m just arguing that “this program is written in C and therefore not trustworthy because C is an unsafe language” is, at best, silly. PHP 4 was safer than Rust and boy do I not want to go back to dealing with PHP 4 applications.

                                              I still think it’s completely justifiable to be skeptical of a program written in C. Just because another language may also be bad/insecure/whatever does not invalidate the statement or sentiment that C is a dangerous language that honestly brings almost nothing to the table in the modern era.

                                              1. 3

                                                Interestingly, there’s a different language from a very similar time as C that has more safety features, and its name is Pascal. There was a time when they were kinda competing, as far as I understood, maybe especially at the time of Turbo C and Turbo Pascal, and then also Delphi. Somehow C won, with the main argument being, I believe, “because performance” – at least that’s how I remember it. My impression is that quite often, when faced with a performance vs. security choice, “the market” chooses performance over security. I don’t have hard data as to whether code written in Pascal was more secure than that in C; I’d be curious to see some comparison like that. I have a purely anecdotal memory that when I felt some software was remarkably stable, it tended to turn out to be written in Pascal. Obviously it was still totally possible to write programs with bugs in Pascal; I think Delphi code had some characteristic kind of error messages that I saw often enough to learn to recognize them. Notably, it also still required manual memory management – but I believe it was better guarded against buffer overruns etc. than C.

                                                1. 2

                                                  I thought the reasons for C’s wide adoption were Unix and the university system, i.e., the universities were turning out grads who knew Unix and C. I’ve only heard good things about the performance of Turbo Pascal.

                                                  Pascal is safer, but it was certainly possible to write buggy Pascal. Back in the early 90s I hung out on bulletin boards and played a lot of Trade Wars 2002. That was written in Turbo Pascal, and it had a few notable and widely exploited bugs over the years. One such was a signed overflow of a 16-bit integer. I won a couple Trade Wars games by exploiting those kinds of bugs.
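
                                                  That bug class fits in a few lines. A hedged C sketch of the same arithmetic (the narrowing conversion is implementation-defined before C23, but wraps on the two’s-complement targets those DOS-era games ran on):

```c
/* Trade Wars-style bug class: a balance held in a 16-bit signed
   integer silently wraps past 32767. The addition happens in int,
   then narrows back to int16_t - implementation-defined pre-C23,
   two's-complement wraparound on all mainstream targets. */
#include <assert.h>
#include <stdint.h>

static int16_t add_credits(int16_t balance, int16_t earned) {
    return (int16_t)(balance + earned);  /* narrows: 32768 becomes -32768 */
}
```

                                                  An exploiter just needs to push the balance past 32767 and watch it land somewhere the game logic never expected.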

                                              2. 17

                                                You are arguing that eliminating one class of bugs doesn’t make sense, because there are other classes of bugs? That reminds me of the mental gymnastics of untyped language proponents.

                                                1. 7

                                                  This cartoon deserves an entire blog post! But I’ll just list out the gymnastic routines of statically-typed language proponents:

                                                  1. 2

                                                    I think no one cares about this.

                                                    1. 3

                                                      Well, not when they only spend 3 minutes per link. You might have to read and reflect.

                                                      1. 1

                                                        The thing is that you are completely missing the point (and your kind of nerd-contrarianism doesn’t make you look as smart as you think it does).

                                                        1. 5

                                                          Oh! I don’t do this to look smart. I do this because I see in you the same tribalism that I once had, and I only grew past it because of evidence like the links I shared with you. I’m not trying to say that static typing is wrong; I’m trying to expand and enrich your knowledge about type theory. Once you’ve reached a certain altitude and vantage, then you’ll see that static and dynamic typing are not tribes which live in opposition, but ways of looking at the universal behaviors of computation.

                                                          Please, read and reflect. Otherwise this entire thread was off-topic: Your first post is not a reply to its parent, but a tangent that allowed you to display your tribal affiliation. I don’t mind being off-topic as long as it provides a chance to improve discourse.

                                                          1. 2

                                                            The (popular) mistake you are making is that you pretend things are equal when they are not, just like shitting your pants (untyped) and trying to not shit your pants (typed) are not positions with similar merit.

                                                      2. 1

                                                        No, those are all serious refutations of the idea that statically-typed languages are a panacea, and a lot more insightful than a silly comic someone made to find affirmation among their Twitter followers.

                                                        1. 1

                                                          Can you show me where I claimed that “typed languages [are] a panacea”?

                                                          Anyway, have fun stomping on that strawman.

                                                  2. 6

                                                    Here are the only economically viable solutions I see to the “too much core infrastructure has recurring, exploitable memory unsafety bugs” problem (mainly because of C):

                                                    • Gradually update the code with annotations – something like Checked C. Or even some palatable subset/enhancement of C++.
                                                    • Distros evolve into sandboxing based on the principle of least privilege (DJB style, reusing Chrome/Mozilla sandboxes, etc.) – I think this is one of the most economically viable solutions. You still have memory unsafety but the resulting exploits are prevented (there has been research quantifying this)
                                                    • The product side of the industry somehow makes a drastic shift and people don’t use kernels and browsers anymore (very unlikely, as even the “mobile revolution” reused a ton of infrastructure from 10, 20, 30, 40 years ago, both on the iOS and Android side)
                                                    • (leaving out hardware-based solutions here since I think hardware changes slower than software)
                                                    • (I have looked at some of the C to Rust translators, and based on my own experience with translating code and manually rewriting it, I’m not optimistic about that approach. The target language has to be designed with translation in mind, or you get a big mess.)
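
                                                    As a tiny illustration of the sandboxing bullet: a process can irreversibly strip a capability from itself before touching untrusted input, so an exploited memory bug has less to work with. A sketch using POSIX setrlimit (real sandboxes layer seccomp, pledge, Capsicum, etc. on top):

```c
/* Least-privilege sketch: after the limit drops to zero, this process
   can never open another file descriptor - even if an attacker later
   hijacks it through a memory-unsafety bug. POSIX-only illustration. */
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>

/* Irreversibly forbid opening new file descriptors. */
static int drop_open_privilege(void) {
    struct rlimit rl = { 0, 0 };
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```

                                                    The DJB-style idea is to do all privileged setup first, drop everything, and only then parse attacker-controlled data.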

                                                    Manually rewriting code in Rust or any other language is NOT on that list. It would be nice but I think it’s a fantasy. There’s simply too much code, and too few people to rewrite it.

                                                    Moreover with code like bash, to a first approximation there’s 1 person who understands it well enough to rewrite it (and even that person doesn’t really understand his own code from years ago, and that’s about what we should expect, given the situation).

                                                    Also, the most infamous bash vulnerability (ShellShock) was not related to memory unsafety at all. Memory unsafety is really a subset of the problem with core infrastructure.

                                                    Sometime around 2020 I made a claim that in 2030 the majority of your kernel and your browser will still be in C or C++ (not to mention most of your phone’s low level stack, etc.).

                                                    Knowing what the incentives are and the level of resources devoted to the problem, I think we’re still on track for that.

                                                    I’m honestly interested if anyone would take the opposite side: we can migrate more than 50% of our critical common infrastructure by 2030.


                                                    This says nothing about new projects written in Rust of course. For core infrastructure, the memory safety + lack of GC could make it a great choice. But to a large degree we’ll still be using old code. Software and especially low level infrastructure has really severe network effects.

                                                    1. 1

                                                      I agree with you. It’s just not going to happen that we actually replace most of the C code that is out there. My hope, however, is that C (and C++) becomes the next COBOL in the sense that it still exists, there are still people paid to work on systems written in it, but nobody is starting new projects in it.

                                                      Along the same lines as your first bullet point, I think a big step forward – and the best “bang for our buck” – will be people doing fuzz testing on all of these old C projects. There was just recently a story making the rounds here and on HN about some bug in… sudo? maybe? that led to the revelation that the commit that caused the regression was a bugfix for a bug for which there was no test, before or after the change. So not only did the change introduce a bug that wasn’t caught by a test that didn’t exist; we can’t even be sure that it really fixed the issue it claimed to, or that we really understood the issue, or whatever.

                                                      My point is that these projects probably “should” be rewritten in Rust or Zig or whatever. But there’s much lower hanging fruit. Just throw these code bases through some sanitizers, fuzzers, whatever.
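
                                                      The core of that low-hanging fruit is surprisingly simple. A toy C sketch of what a fuzzer does (the function under test and its invariant are made up for illustration; real fuzzers like libFuzzer, AFL, and OSS-Fuzz add coverage feedback and sanitizers):

```c
/* Toy fuzz loop: hammer a function with random inputs and check an
   invariant on every call. Real fuzzers are smarter about input
   generation, but this is the essential shape. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Function under test (illustrative): count decimal digits in a
   buffer. Invariant: it can never report more digits than bytes. */
static size_t count_digits(const uint8_t *buf, size_t len) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if (buf[i] >= '0' && buf[i] <= '9') n++;
    return n;
}

/* Run `rounds` random inputs through the invariant check.
   Returns 1 if the invariant held every time, 0 otherwise. */
static int fuzz(unsigned rounds, unsigned seed) {
    uint8_t buf[256];
    srand(seed);
    for (unsigned r = 0; r < rounds; r++) {
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t i = 0; i < len; i++)
            buf[i] = (uint8_t)(rand() & 0xff);
        if (count_digits(buf, len) > len) return 0;  /* invariant broken */
    }
    return 1;
}
```

                                                      Run the same loop under ASan/UBSan and the “invariant” extends to “no out-of-bounds access anywhere,” which is exactly the class of curl bug being discussed.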

                                                      1. 2

                                                        Yeah, that’s basically what OSS-Fuzz has been doing since 2016: throwing a little money at projects to integrate continuous fuzzing. I haven’t heard many updates on it, but in principle it seems like the right thing.

                                                        https://github.com/google/oss-fuzz

                                                        As of January 2021, OSS-Fuzz has found over 25,000 bugs in 375 open source projects.

                                                        The obvious question is what percent of curl’s vulnerabilities could be found this way. I googled and found this:

                                                        https://github.com/curl/curl-fuzzer

                                                        which doesn’t look particularly active, and doesn’t seem to show any results (?).

                                                        Rather than talk about curl and “RIIR” (which isn’t going to happen soon even if the maintainer wants it to), it would be better to talk about if curl is doing everything it can along the other lines.

                                                        Someone mentioned that curl is the epitome of bad 90’s C code, and I’ve seen a lot of that myself. There is a lot of diversity in the quality of C code out there, and often the sloppiest C projects have a lot of users.

                                                        Prominent examples are bash, PHP, Apache, etc. They code fast and sloppy and are responsive to their users.

                                                        There’s a fundamental economic problem that a lot of these discussion are missing. Possible/impossible or feasible/infeasible is one thing; whether it will actually happen is a different story.


                                                        Bottom line is that I think there should be more talk about projects along these lines, more talk about sandboxing and principle of least privilege, and less talk about “RIIR”.

                                                      2. 1

                                                        Manually rewriting code in Rust or any other language is NOT on that list.

                                                        Nah. This kinda assumes that the rewrite/replacement/whatever would happen due to technical reasons.

                                                        It certainly wouldn’t. If a kernel/library/application gets replaced by a safer implementation, it’s for business reasons, where it just happens that the replacement is written in e.g. Rust.

                                                        So yes, I fully expect that a certain amount of rewrites to happen, just not for the reasons you think.

                                                      3. 3

                                                        I mean, there is no silver bullet, right? We can all agree with that? So, therefore, “just apply more sagacious thinking” isn’t going to fix anything just as “switch to <Rust|D|C#|&C>” won’t? The focus on tools seems to miss the truth that this is a human factors problem, and a technological solution isn’t going to actually work.

                                                        1. 2

                                                            One of curl’s main advantages is its ubiquity; I can run it on an OpenWRT router, a 32-bit ARMv7 OpenBSD machine, a POWER9 machine, and even Illumos/OpenIndiana. It’s a universal toolkit. It also runs in extremely constrained and underpowered environments.

                                                          Do you know of a memory-safe language that fits the bill (portability and a tiny footprint)? Rust fails on the former and Java fails on the latter. Go might work (gccgo and cgo combined have a lot of targets and TinyGo can work in constrained environments), but nowhere as well as C.

                                                          1. 3

                                                            Java fails on [a tiny footprint]

                                                              There is Java for smartcards…

                                                            Do you know of a memory-safe language that fits the bill (portability and a tiny footprint)

                                                              Nim, ATS.

                                                          2. 2

                                                            How would you deal with plan interference? The linked paper requires an entirely new language in order to even talk about this class of bugs!

                                                          1. 4

                                                            I’m not sure that’s a realistic request to make of the companies. Sure, it would be amazing to know all that beforehand, but I think no company is proud of having a somewhat broken deployment process or no proper CI. I suspect that if they were all open about that, they’d have a hard time hiring anyone. And I can totally see how some of the awful setups described in the article came into existence.

                                                            The major thing for me is, whether they let you make improvements. Because if you’re allowed to change things for the better, a worse starting situation might be overall preferable to a mediocre one, where you’re not allowed to touch anything.

                                                            1. 8

                                                              they’d have a hard time hiring anyone

                                                              I can appreciate how it’s hard, but FWIW I think I’d advocate doing it anyway because having difficulty with employee hiring (when people don’t like what they hear in the interview) is much cheaper than having difficulty with employee retention (when people don’t like what they find out on the job, after you’ve paid the cost of onboarding them).

                                                              Separately, this also provides another channel for feedback. Candidates’ reactions to an honest description of your deploy pipeline gives you an idea of where you are relative to the rest of the world.

                                                              1. 7

                                                                If a company is honest about their problems during the hiring process there are two reasons that the candidate might pass on the job.

                                                                1. The candidate is turned off by the problems

                                                                In my opinion this is very rare. Most people I can think of will be excited by the potential of improving things and making them better.

                                                                2. The candidate understands that the problems are the symptoms of having a broken/non-existent engineering culture and by joining the company they will simply join the dumpster fire without being able to do anything about it

                                                                I think the parties involved in a job interview should be brutally honest with each other. It’s like any other partnership or relationship. You can’t trick people into it for long before it backfires so start it right.

                                                                1. 2

                                                                  My experience recruiting people to a company with a so-so dev experience is that people respond really well to honesty. I try to tell candidates what we do well, what we don’t do well, and what we are doing to fix the latter.

                                                                2. 3

                                                                  Sure, it would be amazing to know all that beforehand, but I think no company is proud of having a somewhat broken deployment process or no proper CI. I suspect that if they were all open about that, they’d have a hard time hiring anyone.

                                                                  If a company has a problem so big that it would prevent them from hiring people if it were known, then fixing it should be a top priority. Being able to do your job shouldn’t be treated like a perk – and it deserves mentioning that it’s really stupid to hire smart people and not give them the tools they need to work at their full potential. But if it’s so bad that it prevents them from hiring good people, there’s a good chance it’s also preventing them from keeping good people around, with all the problems that entails: no way to disseminate information, no way to grow expertise, and so on.

                                                                  I’ve worked in a place that had one of… you know what, actually the setup they had was worse than anything described in that article, on every level. I can’t describe how awful it was in detail, because it’s rage-inducing – let me just say, as a first example, that after making a one-line change, it took 15-20 minutes to do the commit, and that there was a reasonable chance (about once per month) that attempting to do it would corrupt your local copy of the repository and you’d have to make a new one from scratch, which took about 4-6 hours.

                                                                  Nobody left because of that, but as soon as things got a bit tough, it quickly became the straw that broke the camel’s back. Someone would ask you when you’d push your fix – right as your local copy got corrupted before your eyes, and you’d be staring at a progress bar for the next six hours. Hopefully you’d backed up your files, too, otherwise you had to redo it from scratch. If that happened at about 4 PM on the day before the release, you were the lucky winner of spending a night at the office, because there was no way you’d even have something to compile, let alone test and push, before 10 PM.

                                                                  Okay, no infrastructure is ever perfect. Anything can be improved. But there’s a long way from “our deployment process needs two minutes of handholding and our CI pipeline could be better, but we’re working on it” to “our deployment process involves an 80-step manual procedure where step 42 involves rlogin-ing to a server halfway across the globe and ssh-ing into about twelve servers by hand, and we don’t do CI; one of us manually runs the test suite every afternoon, by rotation”. Okay, you shouldn’t “advertise” these things in an interview, but if they’re so common that they’re “the dev experience”, not a temporary situation while you’re transitioning to another set of tools or whatever, that is a problem. It shouldn’t be mentioned in an interview for the same reason you don’t mention you have a release tomorrow – because everyone should be working on fixing it around the clock.

                                                                  1. 2

                                                                    IMO it is totally realistic and I’ve done this exact thing. You need to set the tone that you are interviewing them as much as they are you. If you are desperate for a job, then yes, it doesn’t make tactical sense to ask this.

                                                                    Otherwise, if you are not desperate, you can ask this, and it actually gives you leverage. Think about it: your reaction to their answer to this question means a lot. If you react in a positive way, as in, you understand this is common and that you look forward to dealing with it and improving things, it can reflect really well on you. As an anecdote, the interview process where I did this had 3 phases, and I still got to the last phase. Honesty, and by extension courage, signals self-reliance.

                                                                    1. 1

                                                                      I’m not advocating dishonesty. When asked about your engineering practices and workflows you should definitely answer honestly. (And I guess it’s a good question for an interviewee to ask at an interview)

                                                                      What I’m saying is that maybe you shouldn’t always proactively announce all the warts and issues with your workflow (which also requires that you’re aware of them, which I suspect is often not the case: if, for example, all your staff use MacBooks, you probably won’t even notice this could be a problem until your first hire who doesn’t want to use a MacBook).

                                                                  1. 3

                                                                    I’ve recently written a couple of posts (first one is here, second one is linked from the top of that one) arguing against precisely this type of “hide the ORM” approach to building Django apps. The thing I initially reacted to preferred the “service layer” terminology, while this one prefers the “repository” terminology, but the arguments are the same either way.

                                                                    1. 1

                                                                      There are two groups of people. Those who think this is a good idea. And those who have worked on projects that implemented these ideas.

                                                                      These “abstraction is so cool” ideas are the diet fads of software engineering. They look good and they sound convincing but somehow they just don’t work and once one of them gets old another one is just around the corner waiting to lure you in with new promises.

                                                                      1. 1

                                                                        they just don’t work

                                                                        Could you explain why? Or share articles that explain your opinion?

                                                                        1. 1

                                                                          We can have a never ending conversation on the topic but I’ll give you a bunch of summary bullet points.

                                                                          Simplicity wins: More abstraction is not inherently a good thing. Abstraction quickly compounds and increases cognitive overhead beyond your capacity, and we all over-estimate our capacity. Adding these “logical” layers of abstraction creates indirection and obfuscation. You don’t need to model the entire world in your programming language to read or write some values in a database.

                                                                          Your pretty diagrams don’t exist in the real world: The real world is hard and dirty. Just because these ideas exist cleanly in a textbook doesn’t mean you will find them that way in real-world projects. In practice you will inherit a large project touched by hundreds of people with varying levels of skill and care. Nothing I say here is going to make you feel it until you are thirty-six call frames deep into debugging AbstractCustomerConnectionManagerProxyBeanRepositoryFactory at 7:30pm on a Friday night. And no, “they didn’t do it right” is not a valid defense. If most of the time something can’t be done right, I don’t want it. I don’t care whether it’s the fault of the people or the idea. That’s irrelevant.

                                                                          Diet fads vs First principles: Just like diet fads software fads are pushed a lot by people that do a lot of talking and not a lot of doing. The loud celebrities are busy telling everyone how they’ve been doing it wrong all this time. This time for real, if only you buy their latest shiny “Clean Bullshit” book and sign up your company for their expensive consulting services. Meanwhile the humble quiet practitioners are just getting it done without turning all Socrates about everything. The first principles prevail while the fads keep changing. This is the difference between chasing books and secrets and magic diet pills vs someone who just lowers their calorie intake, and increases their calorie burn. Doing the simple thing is hard. Distracting yourself with good intentions is easy.

                                                                          Pragmatic craftsman vs Gatekeeping philosopher: Ultimately what matters is the outcome. When the simple elegant solution of the pragmatic craftsman outperforms the complex buzzword-ridden solution of the fad-chasing astronaut it is the pragmatic craftsman who has something to teach, not the other way around. The gatekeeping philosopher has something to sell that is of no use or value to the craftsman who knows how to get shit done. Yet the gatekeeping philosopher goes around the town insisting that everybody is doing it all wrong. They try to constrain the pragmatic craftsman by outlawing their craft. That’s why most of these fads come with “if you are not doing it like me you are doing it wrong” vibe.

                                                                          For a humorous relevant link also see: http://programming-motherfucker.com/

                                                                          1. 2

                                                                            Thank you very much for your comment. It definitely reads very pessimistic, but it’s helpful to get another viewpoint on the topic. I’ve never practiced DDD, I’ve just read about it and it felt powerful and like a very sensible methodology. Although, I’m no fool and no silver bullet exists. Therefore, I’m a bit skeptical and welcome other viewpoints like yours.

                                                                        2. 1

                                                                          I once worked on a codebase that used a “service layer” in front of a Data Mapper ORM. I could understand doing one or the other, but doing both in the same project was just weird. And in the end it didn’t provide much in the way of portability when the team made a decision to switch frameworks.

                                                                      1. 1

                                                                        Does anyone have any definition / understanding of what exactly the difference between a micro/macroservice is?

                                                                        Since something that may be a microservice at one company could be called a macroservice at another, and vice versa.

                                                                        1. 10

                                                                          It’s a polite and gradual way of backpedalling and saying “hey maybe we took this shit a bit too far?”

                                                                          1. 2
                                                                            1. 2

                                                                              My definition of Microservice: Every team has its own service. Interaction only via internet-facing APIs.

                                                                              Multiple services per team goes beyond Microservices and I don’t really see an advantage there.

                                                                              1. 3

                                                                                It can have advantages. On the team I’m on (three developers), we have one service that implements the business logic. We have two other services that serve as front ends (one for incoming SIP messages, one for incoming SS7 [1] messages). And we recently added a fourth service, used only by the business logic service, to do an HTTP REST request. We found it easier to throw a UDP [2] message to that new service (which can then deal with TCP and/or TLS) than to try to integrate HTTP REST directly into an app that is event-driven by UDP packets.

                                                                                [1] Signalling System 7

                                                                                [2] The SS7 interface is mostly UDP-like. SIP is sent over a UDP channel. We can’t afford a dropped packet blocking us, as we have some hard deadlines to deal with.
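                                                                                A minimal sketch of that “UDP in, HTTP out” helper, assuming a JSON datagram format; the message fields, port, and function names here are invented for illustration, not the actual service:

```python
# Hypothetical helper service: the event-driven main service fires a UDP
# datagram and moves on; this process does the blocking TCP/TLS work of the
# actual HTTP call. The datagram format is an assumption for this sketch.
import json
import socket
import urllib.request


def parse_request(datagram: bytes) -> urllib.request.Request:
    """Turn a JSON datagram like {"method": ..., "url": ..., "body": ...}
    into a urllib Request object."""
    msg = json.loads(datagram.decode("utf-8"))
    body = msg.get("body")
    return urllib.request.Request(
        msg["url"],
        data=body.encode("utf-8") if body is not None else None,
        method=msg.get("method", "GET"),
    )


def serve(host: str = "127.0.0.1", port: int = 9999) -> None:
    """Block forever, turning incoming datagrams into HTTP requests."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        datagram, _addr = sock.recvfrom(65535)
        try:
            with urllib.request.urlopen(parse_request(datagram)) as resp:
                resp.read()
        except OSError:
            # The main service has hard deadlines; it fires and forgets,
            # so failures are handled (or logged) here, never propagated.
            pass
```

                                                                                The main service then only ever does a non-blocking sendto() of a small JSON payload, which fits easily into a UDP-driven event loop.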

                                                                              2. 1

                                                                                It is a hot new thing that you can put on your CV and pretend to be a “thought leader” about /s

                                                                              1. 2

                                                                                The software industry is somewhat unique in its extremely low barriers to entry.

                                                                                Producing high-quality software is still relatively cheap, but it is also still a lot more expensive than many people/companies are willing to pay for. Partially because in the vast majority of cases the cost of accepting some defects is a lot lower than the cost of producing flawless software to begin with (ignoring safety-critical use cases).

                                                                                A company just says “Oops sorry we have now fixed that bug!” and moves on. When the “solution” is that easy, there’s not that much appetite for preventing the problem in the first place.

                                                                                The low barrier to entry means people are constantly demanding low quality software to be produced. You don’t see “I have a $1,000 budget and want an airplane built similar to the F-35” requests. But in the software industry every day there are hundreds of projects similar to that being attempted. The outcome should not surprise anybody.

                                                                                Software quality is something everybody wants but nobody wants to pay for.

                                                                                1. 1

                                                                                  The pattern that the author describes is fairly popular and works for small, simple projects. It is pushed frequently by some projects, including Django, because it “looks good” to have the config in the same language and write a simple variable assignment in it such as ‘DATABASE_USERNAME = “username”’. However, I disagree with calling it “doing configuration right”. Configuration should be “just data”. This turns configuration into executable code, which is a bad idea. Before you know it, non-primitive types like class instances and side effects will creep into your “config”, and it’s a world of pain from there.

                                                                                  Stick to “dumb” data formats such as JSON/Yaml/TOML and derive/calculate the other things that you need from that original raw information. This keeps your config as data, serializable, without side effects, and makes it easy for other tooling to read/write/generate/compare/switch it.

                                                                                  If you need something beyond that look into Dhall, CUE, or Jsonnet (but think twice before doing so, you probably don’t need them!)
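                                                                                  As a minimal sketch of the “config is just data” approach (the file format and keys here are invented for illustration): the file holds only primitive values, and anything richer is derived in code after loading.

```python
# The raw config is inert JSON: readable, writable, and diffable by any tool.
import json

RAW_CONFIG = '{"database": {"user": "app", "host": "db.internal", "port": 5432}}'


def load_config(raw: str) -> dict:
    """Parsing is the only "execution" the config ever needs."""
    return json.loads(raw)


def database_dsn(cfg: dict) -> str:
    """A derived value, computed from the raw data rather than stored in it."""
    db = cfg["database"]
    return f"postgresql://{db['user']}@{db['host']}:{db['port']}/app"


cfg = load_config(RAW_CONFIG)
print(database_dsn(cfg))  # -> postgresql://app@db.internal:5432/app
```

                                                                                  Because the file never contains code, your ops tooling (in any language) can read, validate, or generate the same data without touching the application.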

                                                                                  1. 1

                                                                                    This turns configuration into executable code which is a bad idea.

                                                                                    Where do you draw the line? At what point does your data become executable code with a YML file? Is a python module not data? It’s not a pure data structure like a dict, but that doesn’t mean it’s not data. At some point that conversion from plaintext to (your language of choice) needs to happen.

                                                                                    I’d go so far as to argue this is data. Yes you can abuse it, but at the end of the day you can treat it as simple key value data wrapped up in a module.

                                                                                    1. 1

                                                                                      At what point does your data become executable code with a YML file?

                                                                                      Effectively at no point, because it’s always “just” a YAML file. If my application, or some other one, wants to read it and do stuff based on it, that’s when code execution comes in.

                                                                                      I have to execute some code to read the YAML file, but I don’t have to execute the file itself. When the config is a Python source file, I do have to execute it. One is data; the other is code that contains data.

                                                                                      At some point that conversion from plaintext to (your language of choice) needs to happen.

                                                                                      Yes at some point that transformation needs to happen. If what your application needs internally is very close to the raw format you can get away with those two things being very similar. For example your raw JSON/Yaml becomes a python dictionary in a python application or an object in javascript.

                                                                                      If your application needs a richer format, it can construct that from the raw data at the time of conversion. For example, an array of file path strings from the raw configuration data may be transformed into an array of file objects or class instances.
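                                                                                      For instance (a sketch; the “watch_dirs” key is invented), the enrichment can happen once at load time, while the on-disk format stays plain strings:

```python
# Raw config stays serializable; only the in-memory view gets richer.
import json
from pathlib import Path

raw = json.loads('{"watch_dirs": ["/var/log", "/tmp/app"]}')

# Transform plain path strings into pathlib.Path objects at conversion time.
watch_dirs = [Path(p) for p in raw["watch_dirs"]]

print(watch_dirs[0].name)  # -> log
```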

                                                                                      The temptation is for people to skip the serializable data format and turn a source file/module into their “config”. At that point in some ways you don’t really have a config. You are just asking the user/consumer of the application to provide a source file/module of their own that you combine with the rest of your application.

                                                                                      It makes it difficult to make the application robust and handle errors because the line between config data and application code gets blurred. For example your entire application may crash upon attempting to load the config.

                                                                                      There’s also another nasty pattern where the config module tries to dynamically decide its own values based on other things such as:

                                                                                      if something:
                                                                                          CONFIG_VALUE = "this"
                                                                                      else:
                                                                                          CONFIG_VALUE = "that"

                                                                                      If you make things like that, at that point basically you don’t have a config. What you have there is an application that contains and self-generates its own configuration by executing and evaluating its own code.

                                                                                      When your config is a source module in some language effectively it says:

                                                                                      • Anybody who wants to read/inspect me must execute me to find out what my values are
                                                                                      • Nobody can generate me in a sane way (you’d have to “template” a source file which is nasty and unsafe)

                                                                                      In contrast when your config is in a conventional data format it says:

                                                                                      • Anybody who can read JSON/Yaml/TOML can also read me (up to them what they want to do with it)
                                                                                      • Anybody who can write JSON/Yaml/TOML can generate me

                                                                                      “source file as config” is hostile in many contexts. For example, your ops/security team may want to inspect/store/compare/validate/verify/generate all or some parts of the config for the applications, and they may not even use the same language as the app itself. If application config data is in a “source file in language X”, it’s going to make things quite difficult.

                                                                                  1. 31

                                                                                    Software correctness is not a developer decision, it’s largely a business decision guided by cost management. I mean depending on where you work and what you work on the software may be so stable that when you try to point out a problem the business will simply point out that the software is correct because it’s always correct and that you’re probably just not understanding why it is correct. Apps are buggy mostly when the costs of failure to the business are low or not felt by management.

                                                                                    1. 5

                                                                                      Came here to say exactly this.

                                                                                      There is no barrier to entry or minimum bar for consideration in software.

                                                                                      So you end up with thousands of businesses saying variations of “our budget is $1000 and we want you to make a software that …”.

                                                                                      Then of course you are going to see lots of failure in the resulting software.

                                                                                      The choice often ends up being “spend 10,000x and make it super robust” or “live with bugs”.

                                                                                      No business chooses the first option when you can say “oops sorry that was a bug we just fixed it. thank you! :)”.

                                                                                      This pattern persists even as the cost of developing software comes down. Meaning if you reduce the cost of producing flawless software to $X the market will choose a much more buggy version that costs a fraction of $X because the cost of living with those bugs is still much lower than the cost of choosing a flawless one.

                                                                                      1. 15

                                                                                        I recently moved to financial software development, and it seems everybody has real-life experience of losing huge sums of money to a bug, and everybody, including management and trading, is willing to try practices that reduce bugs. So I became more convinced that it is the cost of bugs that matters.

                                                                                        1. 1

                                                                                          While this is true, don’t you think this is sort of… pathetic? Pretty harsh, I couldn’t come up with a better word on the spot. What I mean is, this is basically “those damn suits made us do it”.

                                                                                          1. 1

                                                                                            Not really.

                                                                                            Would you like your mobile phone screen to be made bullet proof and have it cost $150M?

                                                                                            Would you like an atomic bedside alarm clock for $500k?

                                                                                            A light bulb that is guaranteed to not fail for 200 years for $1,000?

                                                                                            It’s a real trade-off and there’s a line to be drawn about how good/robust/reliable/correct/secure you want something to be.

                                                                                            Most people/businesses can live with software with bugs and the cost of aiming for no bugs goes up real fast.

                                                                                            Taking serious steps towards improving software quality is very time-consuming and expensive, so even those basic first steps won’t be taken unless it’s for something critical such as aircraft or rocket code.

                                                                                            For non-critical software often there’s no huge difference between 0 bugs or 5 bugs or 20 bugs. So there isn’t a strong incentive to try so hard to reduce the bugs from their initial 100 to 10 (and to keep it there).

                                                                                            The case that compels us to eliminate bugs is where it is something to the effect of “no bugs or the rocket crashes”.

                                                                                            Also, you have to consider the velocity of change/iteration in that software. You can spend tons of resources and have your little web app audited and certified as it is today, but you have to think of something for your future changes and additions too.

                                                                                            As the technology improves, the average software should become better in the same way that the average pair of shoes or the average watch or the average t-shirt becomes better.

                                                                                            1. 1

                                                                                              Would you like your mobile phone screen to be made bullet proof and have it cost $150M?

                                                                                              Quite exaggerated, but I get your point. The thing is — yes, I personally would like to pay 2-3x for a phone if I can be SURE it won’t degrade software-wise. I’m not worried about hardware (as long as the battery is replaceable), but I know that in 2-3 major OS updates it will feel unnecessarily slow and clunky.

                                                                                              Also you have to consider velocity of change/iteration in that software

                                                                                              Oh, man, that’s whole other story… I can’t remember the last time I wanted software to update. And the only two reasons I do update usually are:

                                                                                              1. It annoys me until I do;
                                                                                              2. It will hopefully fix some bugs introduced due to this whole crazy update schedule in the first place.

                                                                                              Most people/businesses can live with software with bugs and the cost of aiming for no bugs goes up real fast.

                                                                                              Which brings us back to my original point: we got used to it and we don’t create any significant pressure.

                                                                                          2. 1

                                                                                            Businesses that allow buggy code to ship should probably be shamed into better behavior. They exist because the bar is low, and would cease to exist with a higher bar. Driving them out of business would be generally desirable.

                                                                                            A boycott would need to start or be organized by developers, since developers are the only people who know the difference between a circumstance where a high-quality solution is possible but difficult, a circumstance where a high-quality solution is trivial but rare for historical reasons, and a situation where all solutions are necessarily going to run up against real, mathematical restrictions.

                                                                                            (Also, most code in existence isn’t being developed in a capitalist-corporate context, and the most important code – code used by everybody – isn’t being developed in that context either. We can and should expect high quality from it, because there’s no point at which improving quality becomes “more than my job’s worth”.)

                                                                                          3. 3

                                                                                            it’s largely a business decision guided by cost management.

                                                                                            I don’t agree about the cost management reasoning. Rather it is a business decision that follows what customers actually want. And customers actually do prefer features over quality. No matter how much it hurts our pride in craftsmanship…

                                                                                            The reason we didn’t see it before software is that other fields simply don’t have this trade off as an option: buildings and cars can’t constantly grow new physical features.

                                                                                            1. 3

                                                                                              Speed / Quality / Cost

                                                                                              Pick two

                                                                                              You can add features to cars and buildings, and the development process does sometimes go on and on forever. The difference is that if your cow clicker game has a game-breaking bug, typically nobody literally dies. There exists software where people do die if there are serious bugs, and in those scenarios they compromise on either speed or cost.

                                                                                              We’ve seen this before software in other fields, and they do have this trade off as an option, you just weren’t in charge of building it. The iron triangle predates software though I do agree scope creep is a bigger problem in software it is also present in other industries.

                                                                                            2. 4

                                                                                              I agree. I suppose this is another thing that we should make clear to the general public.

                                                                                              But the problem I’m mostly focusing on is the problem of huge accidental complexity. It’s not business or management who made us build seemingly infinite layers and abstractions.

                                                                                              1. 12

                                                                                                It’s not business or management who made us build seemingly infinite layers and abstractions.

                                                                                                Oh it definitely was. The waterfall process, banking on IBM/COBOL/RPG, CORBA, endless piles of objects everywhere, big company apps using obfuscated formats/protocols, Java/.NET… these were middle managers and consultants forcing bullshit on developers. Those bandwagons are still going strong. Most developers stuck on them move slower as a result. The management solution is more bullshit that looked good in a PowerPoint or sounded convincing in a strip club with costs covered by a salesperson. The developers had hardly any say in it at all.

                                                                                                With that status quo, we are typically forced to go with two options: build the new thing on top of or within their pile of bullshit, or find new niches or application areas that let us clean slate stuff. Then, we have to sell them on these, whether internally or externally. Doing that for stuff that’s quality-focused rather than feature/buzzword-focused is always an uphill battle. So, quality-focused software with simple UIs isn’t the norm. Although developers and suppliers cause problems, the vast majority of the status quo comes from the demand side: consumers and businesses.

                                                                                                1. 3

                                                                                                  It isn’t? Most managers I’ve met come to me saying, “we don’t want to have to think about this, so build on top of this abstraction of it.” They definitely do not want us wiping the slate clean and spending a lot of time rebuilding it anew; that would be bad for business.

                                                                                              1. 4

                                                                                                We know how to produce software that doesn’t fail. It’s not particularly easy or fast or cheap but we can do it.

                                                                                                The practical reality is software that “doesn’t fail” is rarely genuinely demanded or needed. So the market doesn’t ask or pay for it.

                                                                                                Even when businesses demonstrate an initial interest in the idea they will immediately back down as soon as they face the reality of various costs of producing such a thing. Their expression of interest is mostly just a big wish.

                                                                                                If someone’s paying $10k for let’s say a custom Wordpress plugin to be made in 2 weeks, would they be interested in a much more secure and much less buggy version that costs $10M that is made in 2 years? No. They don’t really want it and they don’t really need it.

                                                                                                1. 1

                                                                                                  This is true, but it’s also very true that many of the costs of shit software are externalized (THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT), and so no, that $10k spent doesn’t actually cover the total cost of the software to the buyer.

                                                                                                1. 4

                                                                                                  Web GUI technology has completely surpassed the desktop GUI technology.

                                                                                                  Back in the day web stuff was so basic that a desktop GUI was nicer and an upgrade, now that has reversed.

                                                                                                  1. 10

                                                                                                    I agree to some extent, except that Electron apps (and some web apps) are all but unusable on low-end/older hardware. Many (but not all) are severely lacking in keyboard control and other things that one might expect, too. Every Electron app seems to be oblivious to multilingual users and underlines every word, despite me switching input methods.

                                                                                                    1. 2

                                                                                                      I’d like an HTML-based GUI that doesn’t embed a full renderer like Electron does – something that maps HTML onto native controls (including accessibility stuff) could be really neat.

                                                                                                      1. 1

                                                                                                        Isn’t that what React Native is? Maybe that’ll be the hot new thing instead of Electron; would prolly be an upgrade.

                                                                                                        Edit: whoops, it’s iOS and Android only.

                                                                                                        1. 1

                                                                                                          React Native is just running your app as JS and communicating to a native set of widgets and layout, which need to be implemented per platform. If desktop support were something FB had as a priority it’d be a good option for a lot of people, but… it’s not.

                                                                                                    2. 9

                                                                                                      Couldn’t disagree more, and the reason is accessibility. It’s super trivial for desktop app developers to add keyboard shortcuts and other accessibility aids to their apps. Web developers, despite the fact that standards like ARIA exist, seem unwilling to adopt them in any sizable number.

                                                                                                      We can have this conversation again when the Hello World app produced by your average JavaScript framework is ARIA-accessible, has keyboard shortcuts for everything, and works properly with screen readers.

                                                                                                      1. 4

                                                                                                        If the developer doesn’t care, it doesn’t matter whether it’s a desktop app or a web app. They won’t do it either way.

                                                                                                        The difficulty of adding keyboard shortcuts or adding accessibility tags is not dramatically different and quite easy for web apps too.

                                                                                                      2. 3

                                                                                                        As bad as GUI toolkits are, web tech is a lot more awkward to make GUIs with than any major cross-platform toolkit, simply because it’s a hack to draw anything with the DOM. (You’re literally live-editing the AST of a rich text document. It’s amazing that it works at all.)

                                                                                                        1. 1

                                                                                                          Your sole argument for the DOM being a hack and awkward is that it’s live-editing an AST? If anything, that might be a pro of the DOM API… I don’t see how a technology that is widely used, has a clearly defined API for these use cases, and is supported by both modern and old browsers can be called a hack and awkward. Meanwhile, your average GUI toolkit still asks you to design your AST in code, puts the styling right beside the event handling, and often teaches you first how to put a button at (x, y), because using containers and layouts is awkward and complicated.

                                                                                                          1. 1

                                                                                                            A regular GUI toolkit doesn’t involve manipulating the AST of a markup language. It involves manipulating containers that map conceptually to layout, using already-implemented widgets. There’s an event handling system designed to efficiently handle widget-specific mappings, focus changes, and other common situations, as well as having sane defaults (versus having an event system that needed to be tacked on ten years after the other features were written).

                                                                                                            The act of spawning a widget in a web app is an ugly hack, simply because document markup structurally conflicts with GUI layout in ways that the web developer must bodge.

                                                                                                            If any GUI toolkit requires you to jump through hoops to draw a dot on the screen, it’s broken. (By this standard, most popular GUI toolkits are also broken, but HTML is the most broken of all.)

                                                                                                            1. 1

                                                                                                              Yeah, regular GUI toolkits don’t involve an AST or a markup language… such as HTML, XAML, Android XML, QML, etc. In my opinion, working on a human-readable and understandable AST might be the key to the web platform’s GUI. Drawing anything is as simple as adding a node or subtree to my current tree. It’s as simple to do by hand as programmatically. If anything goes wrong, I have well-made developer tools to see and live-edit this tree. Call it a hack all you want; I call it a successful low-level representation for sharing GUI state with the renderer, much better and more powerful than what you can do with Tcl or Xlib (although much heavier).

                                                                                                              If any GUI toolkit requires you to jump through hoops to draw a dot on the screen, it’s broken. (By this standard, most popular GUI toolkits are also broken, but HTML is the most broken of all.)

                                                                                                              There you go: <html><head></head><body>.</body></html>. By this test we can now assert that HTML is not broken (or at least no more broken than the others).

                                                                                                              1. 2

                                                                                                                You haven’t drawn a dot. You’ve typeset a period, and spent 40 characters doing it. And, typesetting text is what HTML is for, so it’s what it’s best at. If you actually want to ensure the period resembles a dot, set its x,y position, and set its color, you’ll need hundreds more characters.

                                                                                                                In BASIC, you can just do pset(x, y, color)

                                                                                                                In Tk: canvas .c ; .c create oval $x $y $x $y -fill $color ; pack .c

                                                                                                                An AST only makes sense if you are actually parsing or generating a structured language. The structure of an HTML document doesn’t coincide with the structure of a PARC GUI (i.e., every major GUI app since 1977), and is an even worse match for the scope of all possible useful GUIs (most of which resemble neither paper nor forms). The reason is that HTML was only ever intended to display minimally-formatted rich text.

                                                                                                                “Drawing something” is usually easier than manipulating the DOM. “Drawing something” is only trivial on the DOM when what you’re drawing is structured like a text document.

                                                                                                      1. 13

                                                                                                        Before the site went down, someone found a command injection issue allowing command execution as root.

                                                                                                        1. 18

                                                                                                          For those who didn’t see it. It was a textbox on a web page passing unfiltered input to a root shell!

                                                                                                        1. 32

                                                                                                          In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.

                                                                                                          There’s another Lobsters thread right now about how distributions like Debian are obsolete. The idea being that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

                                                                                                          Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

                                                                                                          In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

                                                                                                          1. 7

                                                                                                            “In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.”

                                                                                                            I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.

                                                                                                            1. 5

                                                                                                              npm has a good general approach to packaging

                                                                                                              The thing is, their general approach isn’t good.

                                                                                                              They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

                                                                                                              They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

                                                                                                              They decided that letting people do weird things with their package registry is the Correct Thing to Do.

                                                                                                              They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

                                                                                                              On and on and on…

                                                                                                              1. 2

                                                                                                                Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

                                                                                                                The Go language has several compilers available. Traditional Linux distro packages together with gcc-go is also an acceptable solution.

                                                                                                                1. 4

                                                                                                                  It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

                                                                                                                2. 1

                                                                                                                  I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care put into both the implementation and in understanding the problem in the first place. I cannot prove any of this, this is just my heuristic.

                                                                                                                  1. 8

                                                                                                                    You’re not even responding to their argument.

                                                                                                                    1. 2

                                                                                                                      There’s npm registry/ecosystem and then there’s the npm cli tool. The npm registry/ecosystem can be used with other clients than the npm cli client and when discussing npm in general people usually refer to the ecosystem rather than the specific implementation of the npm cli client.

                                                                                                                      I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.

                                                                                                                      1. 1

                                                                                                                        I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                                                                                                                        In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

                                                                                                                    2. 8

                                                                                                                      The thing to keep in mind is that all of these were (hopefully) done with best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their trade offs.

                                                                                                                      • People are angry about a proposed Go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in Go is lacking at best.
                                                                                                                      • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                                                                                                                      • While I don’t trust curl | sh it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                                                                                                                      It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                                                                                                                      But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.

                                                                                                                      1. 4

                                                                                                                        Distributions like Debian are outdated, at least for software dev,

                                                                                                                        That is the sentiment that seems to drive the programming-language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, making it hard or time-consuming to set up the environment needed to build the software.

                                                                                                                        I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                                                                                        Often it is possible to install libraries in another location and point your software at them, though.

                                                                                                                        It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                                                                                                                        I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                                                                                        I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)

                                                                                                                        1. 1

                                                                                                                          So, like Squeak/Smalltalk images then? What’s old is new again, I suppose.

                                                                                                                          http://squeak.org

                                                                                                                          1. 1

                                                                                                                            I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                                                                                            You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to setup environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build redis every time I needed it because it wasn’t in ubuntu’s official package manager when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.

                                                                                                                          2. 2

                                                                                                                            I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                                                                                            Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
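A minimal sketch of that workflow (assuming a Python 3 installation with the stdlib venv module; the `.venv` directory name is just a convention):

```shell
# Create an isolated environment inside the project directory; venv
# bootstraps its own private copy of pip via ensurepip.
python3 -m venv .venv

# Call the environment's own pip directly: packages land under .venv/,
# not in the system site-packages, so sudo is never needed.
.venv/bin/pip --version
```

Activating the environment with `. .venv/bin/activate` only puts `.venv/bin` first on PATH; invoking the interpreter or pip by full path, as above, works the same way.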

                                                                                                                            1. 4

                                                                                                              I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. However, it’s the same thing with npm. In theory you should just run as a normal user and have everything installed to node_modules, but this clearly isn’t the case, as shown by this issue.

                                                                                                                              1. 5

                                                                                                                                nobody runs pip install with sudo

                                                                                                                                I’m pretty sure there are quite a few devs doing just that.

                                                                                                                                1. 2

                                                                                                                                  Sure, I didn’t count :-) The important point is they have a viable option not to.

                                                                                                                                2. 2

                                                                                                                                  npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                                                                                                                                  People just do sudo because Reasons™ :(

                                                                                                                              2. 4

                                                                                                                                It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.
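As a hypothetical sketch of that glue code (the distro list and `example-package` name are made up; real installers also import signing keys and write the repository definition before installing):

```shell
#!/bin/sh
# Sketch of the detection logic a typical "curl | bash" installer runs
# before handing off to the system package manager.

# Read the distro ID from an os-release style file (defaults to the
# system-wide /etc/os-release, a shell-sourceable key=value file).
detect_distro() {
  file="${1:-/etc/os-release}"
  if [ -r "$file" ]; then
    . "$file"
    echo "${ID:-unknown}"
  else
    echo "unknown"
  fi
}

# Map the detected distro to the package-manager invocation the
# installer would perform after adding the vendor's repository.
install_cmd() {
  case "$1" in
    debian|ubuntu)      echo "apt-get install example-package" ;;
    fedora|centos|rhel) echo "dnf install example-package" ;;
    *)                  echo "unsupported" ;;
  esac
}

install_cmd "$(detect_distro)"
```

The point of bundling this in the script is exactly the convenience argument above: the user gets one command that works across distributions, at the cost of trusting whatever the script decides to do as root.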

                                                                                                                                1. 2

                                                                                                                                  I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tirefire.

                                                                                                                                  1. 3

                                                                                                                    With fpm (https://github.com/jordansissel/fpm) it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting packages into the official repos is definitely painful.

                                                                                                                                    1. 1

                                                                                                                      I used the Gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I’m surprised nobody has invented a more user-friendly DSL for that yet.

                                                                                                                                      1. 1

                                                                                                                                        A lot of difficulties when doing Debian packages come from policy. For your own packages (not targeted to be uploaded in Debian), it’s far easier to build packages if you don’t follow the rules. I like to pretend this is as easy as with fpm, but you get some bonus from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more details here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging

                                                                                                                                      2. 2

                                                                                                                                        It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                                                                                                                        I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun, you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                                                                                                      I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So… we can knowledge-share and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                                                                                                                      1. 31

                                                                                                                                        I don’t disagree with the article. However, I also can’t help but be reminded of the “I only own 1 fork, 2 tshirts, a backpack, and a laptop” people.

                                                                                                                                        “I disable HTML, CSS, Javascript and all that bloat … I only browse the internet with Emacs … ALSO … oh my god did I tell you how much RAM this Electron blasphemy uses? … what is wrong with some good old ugly lightweight Tk GUIs? I mostly use tmux over SSH anyways so who needs GUIs right right? Everything is bloated. Everything is unnecessary. Disable everything. And make sure that your stuff gracefully falls back from 2018 to this authentic vintage record player that I feel like using as my alternative web browser today for extra privacy protection.”

                                                                                                                                        I get it. I don’t even particularly disagree with it. But it’s turning into a bit of a meme.

                                                                                                                                        Also for clarity I don’t mean to imply that the author said those things. The post just reminded me of this theme.

                                                                                                                                        1. 5

                                                                                                                                          I get it. I don’t even particularly disagree with it. But it’s turning into a bit of a meme.

                                                                                                                                          It’s to distinguish yourself, as opposed to those that run Wordpress and use IDEs. And yes, it relates to the minimalism you mention.

                                                                                                                                          (Did I mention my writing space uses pandoc and a couple of lines of shell?)
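Such a pandoc-plus-shell setup can indeed be tiny. A sketch of what those “couple of lines” might look like — the `src/`, `out/`, and `template.html` layout is my assumption, not the commenter’s actual script:

```shell
#!/bin/sh
# Render every Markdown file under src/ to a standalone HTML page in out/.
# Directory names and the template file are hypothetical.
build_cmd() {
    src="$1"
    out="out/$(basename "$src" .md).html"
    # Emit the pandoc invocation; pipe the output to sh to run it for real.
    printf 'pandoc -s --template template.html -o %s %s\n' "$out" "$src"
}

mkdir -p out
for f in src/*.md; do
    [ -e "$f" ] || continue   # no sources yet: the glob stayed literal
    build_cmd "$f"
done
```

`pandoc -s` produces a standalone page; swapping `--template` is usually all the “theming” such a site needs.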

                                                                                                                                          1. 1

                                                                                                                                            I tried that, too, but found that pandoc is actually quite difficult to maintain, since it lives in the Haskell ecosystem (which isn’t too available in the non-GNU Linux ecosystem).

                                                                                                                                            I’m with the Markdown/Jekyll stack now. I don’t think it’s less bloated, but at least I can outsource the rendering (and therefore having the stuff installed) to GitHub Pages.

                                                                                                                                            1. 1

                                                                                                                                              It has a lot of weird issues, like pandoc output not being really stable, so compiling with a newer version of pandoc leads to a lot of churn in the results.

                                                                                                                                              pandoc releases binaries though, and whenever I’m on non-GNU, I just get the installer and install it.

                                                                                                                                          2. 5

                                                                                                                                            I want to add that minimalism can have a nice payout: reduced resource usage. If you use tmux instead of Xorg, or w3m instead of Firefox, suddenly a 15-year-old laptop is not scrap metal anymore. They cost like $60 each on eBay, but you can usually get them for free from relatives.

                                                                                                                                            1. 3

                                                                                                                                              I think it’s good to have both extremes, so that everyone can choose something in the middle.
                                                                                                                                              However, 2016 and 2017 have shown a trend away from taking that middle way.

                                                                                                                                              1. 2

                                                                                                                                                Maybe it is becoming a meme. There are memes for everything nowadays. There are memes for GUIs too; or rather, the GUI was a meme that grew too big to be called a meme, and now it is the standard way of computing for most people.

                                                                                                                                                If bulk can be stripped out of interfaces (graphical, text, any kind!) that is good, but if it becomes yet another meme… yes, it kills all the fun.

                                                                                                                                              1. 2

                                                                                                                                                Electron bashing reminds me of Haskell (or similar) programmers bashing PHP.

                                                                                                                                                Just like PHP, Electron has problems, but it has many practical benefits and that’s why it’s popular despite having those problems.

                                                                                                                                                Their popularity (despite their problems) shows just how weak and problematic some of the alternatives are.

                                                                                                                                                A minority of purists will moan while people continue to produce value using these tools.

                                                                                                                                                1. 1

                                                                                                                                                  Yes, but there’s another thing: PHP had become extremely popular for historic reasons (shared web hosts where you could not execute custom binaries but had mod_php or so; much simpler to get started with than CGI binaries) and then just stayed there, at least to some extent, also for economic reasons: there are vast numbers of people who feel comfortable programming PHP and find jobs doing it, so they often don’t really see a reason to look into anything else, because a) it’s popular, b) it ‘works’ for them, and c) they’ve heard that the other stuff is more complicated.

                                                                                                                                                  With Electron, getting started is easy. When you hit performance problems or the like, you have often already invested far too many resources in the platform to switch. With Qt, for example, getting started is perceived as much more difficult.

                                                                                                                                                1. 3

                                                                                                                                                  Businesses tend to derail software development practices because they tend to view it as “hip brand name for how we exert power over employees”.

                                                                                                                                                  They will hang on to small pieces of a methodology as a cover for something else.

                                                                                                                                                  For example they may weaponise “daily agile standup meeting” for micro-management.

                                                                                                                                                  1. 1

                                                                                                                                                    Exactly - this is the most dangerous thing about “Agile”. It makes software developers the least important and least powerful people in software development, when really we should be the ones controlling our own field.

                                                                                                                                                  1. 5

                                                                                                                                                    I support this (for being more complete/explicit).

                                                                                                                                                    1. 14

                                                                                                                                                      Lacking a bachelor’s degree affects your career in development in at least one significant way: limiting your salary and promotion potential. Outside “competent” tech companies, Big Dumb Corp (i.e. the rest of the Fortune 500) HR will always use the lack of a BS degree (or only an Associate’s) as a reason to offer less salary up front, give lower raises once you’re on staff, and deny promotion. It’s a check box incompetents use because they can’t tell who actually contributes. Some of the best developers I’ve worked with have had no degrees and been self-taught. It’s not right, but it’s what I’ve seen wherever I’ve worked.

                                                                                                                                                      1. 6

                                                                                                                                                        Another unfortunate but real side effect is many people may be less than thrilled to “work under” you if they have degrees (i.e. self-taught engineer in charge of multiple PhDs).

                                                                                                                                                        The only exception is if you are some god authority figure like Linus Torvalds where no one dares to challenge your expertise.

                                                                                                                                                        1. 4

                                                                                                                                                          That’s a bias too. There is nothing to say that an engineer without a degree cannot do a good job managing a highly credentialed staff. As long as they have humility, know their limits, and think about how to get the best out of someone, it should be possible. Research-based organisations don’t see this often, because the needs of the job (not the people management) require the PhD, but in the tech industry there are lots of PhDs being managed by less-credentialed individuals.

                                                                                                                                                          1. 1

                                                                                                                                                            I agree. The thing is it’s common enough that you will not be able to consistently escape it.

                                                                                                                                                        2. 3

                                                                                                                                                          True, startups and most tech companies don’t care. Fortune 500, consultancies etc will be harder.

                                                                                                                                                          1. 1

                                                                                                                                                            I think that is less of a problem outside of the US (and maybe the UK?). I’m not in those countries and have not been to university, and I’m doing OK as a developer. I think you just need other ways to show your skills, such as a website/blog/GitHub/experience. Once you get your first job (it’s probably not going to be stellar), all the companies after that will mainly be looking at your experience in the workforce.

                                                                                                                                                          1. 8

                                                                                                                                                            There’s one legitimate use case for password expiration policy that often gets ignored.

                                                                                                                                                            If a password is compromised the attacker can possibly quietly access a system behind a user’s back and siphon out information for years and years.

                                                                                                                                                            For example, the CEO’s email password has been compromised and a hacker establishes a script to siphon out and archive messages, which could run for years.

                                                                                                                                                            A password expiration policy limits this risk by setting a time window.

                                                                                                                                                            If you don’t have that you can potentially get into situations where you can not reliably determine what has been compromised and what hasn’t been.

                                                                                                                                                            Password expiration gives you a frame of reference for when the compromise could and could not have happened.

                                                                                                                                                            1. 7

                                                                                                                                                              The problem with this reasoning is that it ignores the initial attack vector; how did the attacker compromise that password in the first place? It doesn’t just magically fall out of the sky once and then never reappear again.

                                                                                                                                                              Very often you’ll find that the password was obtained by compromising a victim’s computer, breaking into an auxiliary system that shares the same passwords, and so on. In these cases it is absolutely useless to change the password, because the attacker could just as easily obtain the new password and continue their business.

                                                                                                                                                              There’s a small set of cases where a time window might limit exposure, such as reused passwords in password dumps; but 1) these cases are better mitigated by preventing low-entropy passwords, 2) you’re still vulnerable for 3 months or whatever your window is, which is more than enough time to siphon out all information from most networks, 3) you’d be much better protected by just having proper monitoring of user sessions in the first place.

                                                                                                                                                              Is there theoretically a nonzero benefit to password expiry? Yes, but only if your security is already otherwise lacking, and even then it’s not a common case, and at that point it’s absolutely not worth it considering the big downside of forced password expiry: it incentivizes people to pick worse passwords, because remembering complex passwords is a big time investment that’s no longer worth it.
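The “preventing low-entropy passwords” point can be made concrete with the usual length-times-alphabet estimate. This is a deliberately crude sketch: real checkers also consult dictionaries and breach corpora, and the 60-bit floor here is an arbitrary assumption, not a standard:

```shell
#!/bin/sh
# Crude entropy estimate: bits = length * log2(alphabet size).
# Badly overestimates dictionary words and phrases; illustration only.
entropy_bits() {
    pw="$1"; size=0
    case "$pw" in *[a-z]*) size=$((size + 26)) ;; esac
    case "$pw" in *[A-Z]*) size=$((size + 26)) ;; esac
    case "$pw" in *[0-9]*) size=$((size + 10)) ;; esac
    case "$pw" in *[!a-zA-Z0-9]*) size=$((size + 33)) ;; esac   # symbols
    awk -v n="${#pw}" -v s="$size" \
        'BEGIN { printf "%d\n", (s > 0) ? n * log(s) / log(2) : 0 }'
}

# Reject anything under an (assumed) 60-bit floor:
acceptable() { [ "$(entropy_bits "$1")" -ge 60 ]; }
```

Under this model a short mixed password like `hunter2` fails while a long lowercase passphrase clears the bar, which is the mitigation the parent comment prefers over forced rotation.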

                                                                                                                                                              1. 1

                                                                                                                                                                That would indeed work where you can use a password manager; at a login prompt, you usually can’t. If web service X has a policy to change the password every Y months, it isn’t much trouble: I just use my password manager. I do that occasionally (manually) anyway for social accounts.

                                                                                                                                                                Also, as the article suggests, for the particular threat you mention, you’d be better off using 2FA to mitigate it.

                                                                                                                                                              1. 1

                                                                                                                                                                Naming is about communicating from one human to another with an extremely low amount of information (function name) with a high amount of meaning (that function’s behavior).

                                                                                                                                                                Really wish all this code cross-reference tooling focused on showing documentation and linking it, not code. Texinfo is mediocre, but it supports the type of indexing that is useful.

                                                                                                                                                                1. 2

                                                                                                                                                                  Agreed. I have compared this to the saying that “sometimes the only way to escape the fire is to run through it”.

                                                                                                                                                                  I don’t mean this in a practical sense for doing today in your source code, but as a philosophical concept. It’s better to name something “oldPanda” than “findLastUserUnpaidInvoiceSomethingSomething”.

                                                                                                                                                                  In the first instance you just assign a name, a symbol, to a concept. You are not fighting to pack lots of information into a tiny space. Because the symbol is meaningless it can precisely mean what it is representing.

                                                                                                                                                                  In the second instance you make an attempt at packing information into somewhere that simply does not fit. Now this incomplete and inaccurate name will become one of your worst enemies for years to come.

                                                                                                                                                                  The relation to the saying at the top is that, just like running through fire, having obscure symbols and names is something we naturally want to avoid, so we cram meaning into variable names. That is perhaps the “obvious” solution, like running away from a fire, but it is not necessarily always the best one.

                                                                                                                                                                1. 25

                                                                                                                                                                  We are excited to continue experimenting with this new editing paradigm.

                                                                                                                                                                  That’s fine, but this is not new.

                                                                                                                                                                  Structured editors (also known as syntax-directed editors) have been around since at least the early 80s. I remember thinking in undergrad (nearly 20 years ago now) that structured editing would be awesome. When I got to grad school I started to poke around in the literature and there is a wealth of it. It didn’t catch on. So much so that by 1986 there were papers reviewing why they didn’t: On the Usefulness of Syntax Directed Editors (Lang, 1986).

                                                                                                                                                                  By the 90s they were all but dead, except maybe in niche areas.

                                                                                                                                                                  I have no problem with someone trying their hand at making such an editor. By all means, go ahead. Maybe it was a case of poor hardware or cultural issues. Who knows. But don’t tell me it’s new because it isn’t. And do yourself a favour and study why it failed before, lest you make the same mistakes.

                                                                                                                                                                  Addendum: here’s something from 1971 describing such a system. User engineering principles for interactive systems (Hansen, 1971). I didn’t know about this one until today!

                                                                                                                                                                  1. 10

                                                                                                                                                                    Our apologies; we were in no way claiming that syntax-directed editing is new. It obviously has a long and storied history. We only intended to describe our particular implementation of it as new. That article was intended for broad consumption: the vast majority of the users with whom we engage have no familiarity with the concepts of structured editing, so we wanted to lay them out plainly. We certainly have studied and drawn inspiration from many of the past and current attempts in this field, but thanks for those links; looking forward to checking them out. We are heartened by the generally positive reception and feedback – the cloud era offers a lot of new avenues of exploration for syntax-directed editing.

                                                                                                                                                                    1. 3

                                                                                                                                                                      Looks like you’ve been working hard on it. Encouraging!

                                                                                                                                                                    2. 7

                                                                                                                                                                      This is an interesting relevant video: https://www.youtube.com/watch?v=tSnnfUj1XCQ

                                                                                                                                                                      The major complaint about structured editing has always been a lack of flexibility in editing incomplete/invalid programs, creating an uncomfortable point-and-click experience that is not as fluid and freestyle as text.

                                                                                                                                                                      However that is not at all a case against structured editing. That is a case for making better structured editors.

                                                                                                                                                                      That is not an insurmountable challenge and not a big enough problem to justify throwing away all the other benefits of structured editing.

                                                                                                                                                                      1. 4

                                                                                                                                                                        Thanks for the link to the video. That’s stuff from Intentional Software, something spearheaded by Charles Simonyi (*). It’s been in development for years and was recently acquired by Microsoft. I don’t think they’ve ever released anything.

                                                                                                                                                                        To be clear, I am not against structured editing. What I don’t like is calling it new, when it clearly isn’t. And the lack of acknowledgement of why things didn’t work before is also disheartening.

                                                                                                                                                                        As for structured editing itself, I like it and I’ve tried it, and the only place I keep using it is with Lisp. I think it’s going to be one of those “worse is better” things: although it may be more “pure”, it won’t offer enough benefit over its cheaper – though more sloppy – counterpart.

                                                                                                                                                                        (*) The video was made when he was still working on that stuff within Microsoft. It became a separate company shortly after, in 2002.

                                                                                                                                                                        1. 1

                                                                                                                                                                          I mentioned this in the previous discussion about isomorf.

                                                                                                                                                                          Here is what I consider an AST editor done about as right as can be done, in terms of “getting out of my way”: my friend Rik Arends demoing his real-time WebGL system MakePad at AmsterdamJS this year.

                                                                                                                                                                        2. 5

                                                                                                                                                                          Right, so I’ve taken multiple stabs at research on this stuff in various forms over the years, everything from AST editors to visual programming systems and AOP. I had a bit of an exchange with @akent about it offline.

                                                                                                                                                                          I worked with Charles a bit at Microsoft and later at Intentional. I became interested in it since there is a hope for it to increase programmer productivity and correctness without sacrificing performance.

                                                                                                                                                                          You are totally right though Geoff, the editor experience can be a bugger, and if you don’t get it right, your customers are going to feel frustrated, claustrophobic and walk away. That’s the way the Intentional Programming system felt way back when - very tedious. Hopefully they improved it a lot.

                                                                                                                                                                          I attacked it from a different direction to Charles using markup in regular code. You would drop in meta-tags which were your “intentions” (using Charles’ terminology). The meta-tags were parameterized functions that ran on the AST in-place. They could reflect on the code around them or even globally, taking into account the normal programmer typed code, and then “insert magic here”.

                                                                                                                                                                          It turned out I had basically reinvented a lot of the Aspect-Oriented Programming work that Gregor Kiczales had done a few years earlier, although I had no idea at the time. Interestingly, Gregor was the co-founder of Intentional Software along with Charles.

                                                                                                                                                                          Charles was more into the “one-representation-to-rule-them-all” thing though and for that the editor was of supreme importance. He basically wanted to do “Object Linking and Embedding”… but for code. That’s cool too.

                                                                                                                                                                          There were many demos of the fact that you could view the source in different ways, but to be honest, I think that although this demoed really well, it wasn’t as useful (at least at the time) as everyone had hoped.

                                                                                                                                                                          My stuff had its own challenges too. The programs were ultra powerful, but they were a bit of a black-box in the original system. They were capable of adding huge gobs of code that you literally couldn’t see in the editor. That made people feel queasy because unless you knew what these enzymes did, it was a bit too much voodoo. We did solve the debugging story if I remember correctly, but there were other problems with them - like the compositional aspects of them (which had no formalism).

                                                                                                                                                                          I’m still very much into a lot of these ideas, and things can be done better now, so I’m not giving up on the field just yet.

                                                                                                                                                                          Oh yeah, take a look at the Wolfram Language as well - another inspirational and somewhat related thing.

                                                                                                                                                                          But yes, it’s sage advice to look at why a lot of the attempts have failed, at least to know what not to do again. And I also agree, that’s not a reason not to try.

                                                                                                                                                                          1. 6

                                                                                                                                                                            From the first article, fourth page:

                                                                                                                                                                            The case of Lisp is interesting though because though this language has a well defined syntax with parenthesis (ignoring the problem of macro-characters), this syntax is too trivial to be more useful than the structuring of a text as a string of characters, and it does not reflect the semantics of the language. Lisp does have a better structured syntax, but it is hidden under the parenthesis.

                                                                                                                                                                            KILL THE INFIDEL!!!

                                                                                                                                                                            1. 2

                                                                                                                                                                              JetBrains’ MPS uses a projectional editor. I am not sure if this is only really used in academia or if it is also used in industry. The mbeddr project is built on top of it. I remember using it and being very frustrated by the learning curve of the projectional editor.