1. 6

    Most linters or IDEs will catch that you’re ignoring an error

I’m not always using linters or IDEs. Does that mean that I should be able to just ignore the error? I don’t think so.

    Plan for failure, not success

    The default behavior in Go assumes success. You have to explicitly plan for failure

    has sufficient context as to what layers of the application went wrong. Instead of blowing up with an unreadable, cryptic stack trace

I don’t know about you, but math: square root of negative number -15 doesn’t exactly tell me why the error occurred. I’d rather look into that “cryptic” stack trace to find out where that negative number got used.

    1. 0

      math: square root of negative number -15

      math.Sqrt(-15) = NaN. I have no idea where you’ve seen that particular error message. Cases like that are exceptional and are a programmer’s mistake (e.g. division by zero), therefore Go would panic with a full stack trace.

      But let’s assume that there is some square root function that returns an error. We usually add annotations to error messages, e.g. in Go ≥1.13 fmt.Errorf("compute something: %w", err) and later check that errors.Is(err, math.ErrNegativeSqrt), so instead of

      math: square root of negative number -15
      

      you’d actually see

      solve equation: compute discriminant: square root of negative number -15
      

which is way more descriptive than a stack trace if there are multiple similar error paths along the call stack (in my experience, that’s almost always true). With exceptions you’d have to carefully follow the stack trace line-by-line to find the cause of an error.
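Here’s a minimal, hypothetical Go sketch of that wrapping style (sqrt, discriminant, solve, and the ErrNegativeSqrt sentinel are all made up for illustration; only fmt.Errorf’s %w verb and errors.Is are the real Go ≥1.13 machinery):

package main

import (
    "errors"
    "fmt"
    "math"
)

// Hypothetical sentinel for a square-root function that returns an error.
var ErrNegativeSqrt = errors.New("square root of negative number")

func sqrt(x float64) (float64, error) {
    if x < 0 {
        return 0, fmt.Errorf("%w %v", ErrNegativeSqrt, x)
    }
    return math.Sqrt(x), nil
}

func discriminant(a, b, c float64) (float64, error) {
    d, err := sqrt(b*b - 4*a*c)
    if err != nil {
        // Each layer annotates; %w keeps the chain inspectable.
        return 0, fmt.Errorf("compute discriminant: %w", err)
    }
    return d, nil
}

func solve(a, b, c float64) error {
    if _, err := discriminant(a, b, c); err != nil {
        return fmt.Errorf("solve equation: %w", err)
    }
    return nil
}

func main() {
    err := solve(1, 1, 4) // b*b - 4*a*c = -15
    fmt.Println(err)
    // solve equation: compute discriminant: square root of negative number -15
    fmt.Println(errors.Is(err, ErrNegativeSqrt)) // true
}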

      Also, in many languages discovering that a particular function throws an exception without code analyzers is pretty hard.

      1. 5

        I have no idea where you’ve seen that particular error message.

        I just took it as an example.

        We usually add annotations to error messages

Usually. But the default for Go is just return err, which leaves you with the problematic one.

        which is way more descriptive

I don’t know about you, but in a project with >10k LOC I’d like to see which exact file I have to open for an error. I won’t always know the project by heart, and finding it otherwise would take a good couple of minutes.

        if there are multiple similar error paths along the call stack

Uhhh, you just look at the first function along the call stack and work your way along in such cases. It’s not that difficult. I know it can look scary, but stack traces usually have all the information that you need, and then some. Just learning their syntax will allow you to efficiently find causes of bugs. That cannot be said for Go, as different projects can use different styles of error annotations.

        With exceptions you’d have to carefully follow the stack trace line-by-line to find the cause of an error.

        I usually read less than half of it before realizing where the error lies. There is a ton of information in a call stack, and you don’t always need all of it.

        Also, in many languages discovering that a particular function throws an exception without code analyzers is pretty hard.

In a simple case, looking for raise/throw/etc. is enough. But yes, this is a problem. Some languages do have a solution, e.g. Java’s checked exceptions, but those aren’t that common yet. I’d like to see improvement here.

        1. 3

          What happens if “solve equation” calls “compute discriminant” multiple times? There is no line number telling me where it was called.

          My preference is a good error message with a stack trace. I can choose to ignore the stack trace when looking at logs/output. Also, (in some languages) a stack trace can be generated without an exception.
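For what it’s worth, Go itself can produce a stack trace without a panic too; a tiny sketch using only the standard library:

package main

import (
    "fmt"
    "runtime/debug"
)

func main() {
    // No panic, no error value: just capture the current goroutine's stack.
    fmt.Printf("%s", debug.Stack())
}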

          1. 1

            line numbers assume a few things:

• you have the same version of the code everywhere. In a production environment, you can easily have multiple versions of the same thing during deployments, upgrades, etc. Those are the most common situations in which you would encounter an error for the first time since, as we just said, you’re deploying potentially new code.
            • you’re only talking about comprehending the error data as it happens, and not historically, in a log file somewhere. What if you’re looking at log data from a few days ago? Now you have to cross-reference your stack trace with what version of the app was running at the time that stack trace was generated. Ok, now check out the code at that time. Now step through the line numbers and run the program in your head to try to understand the story of how the error happened. -OR- just … read the error’s story of how it happened in a Go program.
• a stack trace is generally not going to capture local variables. Even if you know what line you’re on, you might not know what parameters that function was called with. With explicit wrapping of error values, you append to the error value the necessary contextual information, as decided by a human author, for a human to understand the cause of the error (see the sketch after this list).
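A hedged Go sketch of that last point (loadConfig and the path are invented for illustration): the wrapped error records the parameter value that a bare stack trace would not capture.

package main

import (
    "errors"
    "fmt"
    "os"
)

// loadConfig wraps failures with the argument it was called with.
func loadConfig(path string) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("load config %q: %w", path, err)
    }
    return data, nil
}

func main() {
    if _, err := loadConfig("/etc/myapp/missing.toml"); err != nil {
        fmt.Println(err)
        // load config "/etc/myapp/missing.toml": open /etc/myapp/missing.toml: no such file or directory
        fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the chain survives wrapping
    }
}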

            I don’t miss exceptions and stack traces after using Go for a few years. Both are, in my experience, actively worse than handling errors at the call site and wrapping errors with descriptive contextual information.

            1. 2

I use both error context and stack traces; you can add stack traces to errors quite easily (about 20-30 lines of code) and it never hurts to have more information. In some cases there are performance problems with stack traces (an issue in any language), but those are rare enough that I’m not overly worried about it.

A case where error context fails for me is when I’ve accidentally added identical context to two errors. This is usually a mistake on my part (copy/paste, or just laziness) and having the stack trace is helpful. I also find it easier in development because I can just go to “foo.go line 20”.

              I replaced all pkg/errors calls with some sed-fu to Go 1.13 errors, but I added back stack traces after a few weeks with a small library (about 100 lines in total).
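For the curious, a minimal sketch of what such a stack-capturing wrapper can look like (my own illustration, not the actual library mentioned above):

package main

import (
    "errors"
    "fmt"
    "runtime"
    "strings"
)

// stackError pairs a wrapped error with the call stack captured at wrap time.
type stackError struct {
    err error
    pcs []uintptr
}

func (e *stackError) Error() string { return e.err.Error() }

// Unwrap keeps the wrapper compatible with errors.Is / errors.As.
func (e *stackError) Unwrap() error { return e.err }

// Stack renders the recorded program counters as function/file:line frames.
func (e *stackError) Stack() string {
    var b strings.Builder
    frames := runtime.CallersFrames(e.pcs)
    for {
        f, more := frames.Next()
        fmt.Fprintf(&b, "%s\n\t%s:%d\n", f.Function, f.File, f.Line)
        if !more {
            break
        }
    }
    return b.String()
}

// WithStack records the caller's stack alongside err.
func WithStack(err error) error {
    if err == nil {
        return nil
    }
    pcs := make([]uintptr, 32)
    n := runtime.Callers(2, pcs) // skip runtime.Callers and WithStack itself
    return &stackError{err: err, pcs: pcs[:n]}
}

func main() {
    err := WithStack(errors.New("boom"))
    fmt.Println(err) // boom
    var se *stackError
    if errors.As(err, &se) {
        fmt.Print(se.Stack())
    }
}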

              Either way, I don’t think context and stack traces need to be exclusive, and there is value in both in different contexts.

      1. 14

        TL;DR: the article asks “how often does Rust change?” And the answer is “every six weeks”. Regardless of how big the changes are, you have to check every six weeks to see if something changed. Like someone said the other day on Lobsters, it’s like getting your pizza sliced into 42 slices versus 12. Sure it’s the same amount of pizza but the cognitive load is higher.

        The endless release posts with tons of stuff in them makes it feel like a lot is happening.

        TFA seems to argue that yes, Rust changes a lot, but they aren’t “big” changes.

        The problem is, big or not, I’m trying to learn the language and it feels like a moving target. As a neophyte I don’t know what’s important about the new version and what isn’t.

        It’s also interesting to note the number of language changes in each release. Standard library changes are one thing, but language changes make things more difficult to learn.

For example, from the release of Python 3.5 (which introduced explicit async/await syntax, the last major syntax revision) in late 2015, there have been around 40 Python releases across both Python 2.x and Python 3.x. If you only track Python 3, there were around 30 releases. The majority of these releases were bug fixes, with no or minimal standard library changes. There were of course some big standard library changes, but they were the exception.

In the same time frame, there were around 65 releases of Rust. These releases were not just bug fixes; the changes were all over the place.

        My point is, with Rust I’d have to go through twice as many release notes and while the list of changes is generally small, it’s not always clear what’s a big change and what isn’t. With Python, it’s obvious that “fixed bug in how whatever works” is a bug fix. In Rust “this macro can now be applied to structures” doesn’t mean anything to a neophyte. Is that a big change? A small one? I don’t know.

Of course you can find counterexamples in Python releases and bug fixes in Rust releases, but it just feels different. It feels impossible to keep up.

Compare that to languages like C, where there were literally no changes between the last two standard revisions (C11 and C18), or Go, which has had around 10 significant releases in five years and an explicit language standard that allows alternative implementations. (Go has had more than ten point releases in that time, but releases that bump only the point version are bug fixes.)

I really like Rust. The type system is beautiful. The ownership model is gorgeous and innovative. The toolchain is fantastic. I just feel like using it would put me on a treadmill, poring over release notes every six weeks to see what changed. Sure I can stick to Rust 2018, but by doing that I’ve run into third-party code that used newer features that I didn’t know about. I’ve also run into trouble with distro-packaged Rust being “too old” to build something I find online. Sure I can use rustup, but I like having the OS-packaged and supported tools.

(Again, of course, other languages have problems with the packaged version being too old, but I’ve used Python for many years without that many problems, while with Rust I’ve run into it twice in a couple of months.)

        I really, truly, think that Rust’s rapid release schedule is my and perhaps others’ primary barrier to adoption.

        (Also, and this is a minor thing, but one of the major Rust contributors tweeted (or retweeted) a thing about how “COBOL was designed by women and just works and C was designed by men and it’s impossible to use securely.” Despite being inaccurate it also struck me as really unwelcoming. I was following that person’s Twitter feed specifically to start participating in the Rust community and that did not encourage me.)

        1. 14

          Regardless of how big the changes are, you have to check every six weeks to see if something you rely on changed.

          Or else what? If you only check every 12 weeks and read a pair of posts at a time, or only check once a year, what goes wrong? Your code will keep on working.

          You miss out on anything new and nice, but only as much as if Rust was releasing less often.

          1. 7

            Well partially because it’s not just my code.

            If I want to use a third-party crate (which is almost a given because of the way Rust’s ecosystem is designed*), I need to audit that crate to make sure it’s secure and correct for my purposes. A rapidly-changing language makes it harder for me to read other people’s code regardless of how static my code is.

            * this is not necessarily a negative thing

            1. 10

              Among the changes Steve enumerated here, are there any that you feel would impact your ability to audit code without having known about the feature ahead of time?

              I would think if you are auditing and encounter something unfamiliar like const fn or dyn Trait or some new standard library method, it’s easy enough to then learn about it via search and the Rust documentation.

              For standard library methods, by far the main way people learn about them is not from release posts. They find them in documentation when searching for that functionality, or look it up when they see it in someone’s code. But in a safe language this works for the things classed as “language changes” as well. The feeling of being compelled to read the release notes isn’t well founded in my experience.

              1. 4

The ? operator, for example, was a problem for me when getting started. I’d never seen it before, and while I understand its purpose and it makes great sense, I then had to go say “okay, this way of doing things that I’d already read about and started getting a handle on has two ways of expressing it.”

                It’s not the end of the world, of course, or that difficult to understand, it just seems like there are a lot of little changes like that that increase cognitive load for me.

                Again, this is just me and my opinion. Obviously Rust is very popular and this release strategy works for them. I’m just pointing out what has been a difficulty for me and, given the articles posted recently here, for some others too.

                1. 6

You now shifted from “changes every 6 weeks” is a problem to “this syntax change was a problem”. I believe you that ? was confusing at first. However, the post shows that syntax changes are rare. If and when you do see it, you go look up what it is. No way that is happening every 6 weeks.

                  1. 13

                    You now shifted from “changes every 6 weeks” is a problem to “this syntax change was a problem”.

You asked for an example from a limited set of possibilities, and I gave one. I didn’t imply my list was exhaustive. You then accuse me of changing my argument…because I gave you an example from one of the possibilities you asked about and explained how I felt about it as a new user of the language.

I’ve said several times that this release schedule seems to work for Rust and a lot of people like it. I’ve also said that it seems to be a barrier to adoption for some people, myself included. I’m choosing to try to overcome it because I think Rust is a fascinating and worthwhile language.

                    I don’t appreciate the implication that I’m somehow lying in my impression of a programming language or trying to “win” some debate here with gotcha tactics. That Rust changes often is a concern that a lot of people seem to have. As someone who has decided to learn the language, my impression is that I think this argument holds some water. I’ve presented my feelings in response to a topical post regarding the same issue.

                    1. 2

I’m not denying that the try/? change was an issue for you, but how could Rust avoid causing you that problem?

                      You’re saying that it should release less frequently, but I don’t see how that avoids the fact that it changed at all. If Rust didn’t change ? in 2016, it would change it in 2018. The result in both cases is the same: you — a Rust 2015 user — wouldn’t know the new syntax.

                      1. 1

                        Nobody expects a language to be static. It’s like I said above, it’s easier to understand larger release notes twice a year than smaller release notes eight times a year, at least for me and obviously some other people.

                        There’s also the issue that, okay, it’s not Rust 2015….but it’s not Rust 2018 either. It’s something in between. I can’t say “oh yeah, that’s from Rust 2018,” I have to say “that’s from Rust 1.27.0” or whatever. There are many more versions of Rust to keep track of than just 2015 and 2018. I learn the ? syntax and that’s great…but that doesn’t teach me Rust 2018, it just teaches me part of one of eight or nine steps between the editions.

(And maybe I’m wrong but there doesn’t seem to be any concrete summary documentation for the Rust editions. The documentation at rust-lang seems to track stable and there doesn’t appear to be a “Rust 2015” snapshot of the documentation except in dead-tree form. Please correct me if I’m wrong. That means that as I’m studying the language, the documentation changes out from under me, not a lot but some.)

                        As I said above, a lot of people seem to like this sort of schedule with smaller changes more often. That’s great.

                        What I feel like some Rust advocates aren’t getting is that some people don’t like this schedule and the Edition mechanism doesn’t seem to totally fix the problems they have with it…and that’s okay. People are allowed to say “I like Rust but I’m having trouble with this part and it’s a major pain point.” Rather than try to “prove them wrong” about how they feel about the release schedule maybe acknowledge that it can be a problem for some people.

                        1. 2

Why do you have to know which version a thing is from? I get that in C or C++ you have to know when each feature was added, because there are different camps that settle on different past decades, but in Rust there’s no such thing. If you know a feature has been released, you can use it.

there doesn’t seem to be any concrete summary documentation for the Rust editions

                          There has been only one so far, and it’s here: https://doc.rust-lang.org/edition-guide/

                          The documentation at rust-lang seems to track stable and there doesn’t appear to be a “Rust 2015” snapshot

                          Old docs are archived here: https://doc.rust-lang.org/1.30.0/

But “Rust 2015” isn’t a version of Rust. It’s not a thing that can be targeted or snapshotted. It’s a parsing mode that changes interpretation of a few bits of syntax. It’s like "use strict" in JavaScript or Quirks Mode in HTML. “2015” parsing mode in latest Rust has all the new features of the latest Rust, except a couple that would conflict with some keywords/syntax in 2015.

                          The Rust team created a lot of confusion around this by marketing “cool new features we’ve released in the past year you may have missed” together with the “new parsing mode” switch, creating a false impression that “2015” and “2018” are different languages. They’re the same language with the same standard library, but “2015” mode can use async as a variable name.

          2. 2

            Thanks for chiming in with this view. It makes a lot of sense.

However, I also feel that the Internet encourages this sort of firehose-style of releases. To many developers, a slow release cadence indicates project stagnation. And, to them, stagnation does not imply stability, but rather a dead end. I’m pretty sure the Rust team releases frequently for other, more important reasons, but there are virtues to signaling frequent project activity. IOW, I don’t think the Rust core team caters to the “always be committing” crowd, but they do try to stay on the collective radar.

            FWIW I have been on both sides of the module change and I lost a few hours to trying to understand it. But that’s been about it so far. Haven’t ventured into asynchronous stuff yet.

            1. 1

              Worth pointing out that Python has been around since the early 90s. How much did Python change at an equivalent point in its lifecycle? How would Python look if it had been locked in amber at that point?

              1. 10

                People get hung up a lot on Python 2/3, but forget how much Python changed prior to that. During the Python 2.x series, the language gained:

                • The bool type
                • Opt-in unified type hierarchy and “new-style” classes with full operator overloading
                • Context managers/the with statement
                • Unified integral type
                • Integral versus “true” division
                • The set type
                • The logging module
                • The unittest and doctest modules
                • The modern import-hook system
                • Lazy iterables and the itertools module
                • Generator expressions
                • The functools module
                • Ternary conditional expressions
                • Absolute versus relative imports
                • Decorators
                • The format() method and formatting mini-language
                • The collections module
                • Abstract classes and methods

                And on and on… sure, you could say that code written for Python 2.0 still worked on Python 2.7 (which wasn’t actually true – there were deprecations and removals in the 2.x series!), but idiomatic Python 2.7 contains so many things that would be unrecognizable to someone who knew only idiomatic 2.0 that it might as well be a completely different language.

                1. 2

                  No doubt, but early Rust was very different from modern Rust (there was a garbage collector at one point, major differences in the type system, etc).

I’m not saying Rust doesn’t deserve its youthful vigor and changes, because of course it does. I also don’t think Python was as popular that early in its life as Rust is, relatively speaking (at least from my fuzzy memories of those ancient days where I worked mostly in Perl and C). Python didn’t really explode to be everywhere until after the 2.0 release, IIRC. Python 2.0 and Python 3.8 are of course pretty different, but there’s something like 20 years there versus ten for Rust, with what I would argue are even greater changes.

                  Regardless of the relative age of the languages, I do think Rust’s change schedule is a barrier to adoption for a lot of people (with the anecdotal evidence of the couple of articles posted recently here talking about just that).

                  Again, that’s just me. I’m sure a lot of people like Rust’s release schedule and it seems to be working well given Rust’s rising popularity.

                  1. 0

                    Why would you compare that to Rust’s changes now and not to Rust’s pre-1.0 changes?

                1. 2

                  I prefer https://github.com/ulid/spec instead. It is sortable and does not wreck indices.

                  1. 4

                    Lots of good things were originally unintended or semi-intended results of technical limitations. The /usr split is still a good idea today even if those technical limitations no longer exist. It’s not a matter of people not understanding history, or of people not realising the origins of things, but that things outgrow their history.

Rob’s email is, in my opinion, quite condescending: everyone else is just ignorantly cargo-culting their filesystem hierarchy. Or perhaps not? Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                    1. 19

                      I’m not sure it is still useful.
In fact, some Linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                      1. 2

                        I’m not sure it is still useful.

                        I think there’s a meaningful distinction there, but it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

                        In fact, some linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin/, and /usr/sbin all are simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.

                        I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin.

                        1. 14

That wasn’t the argument though. There was a good reason for the split (they filled up their hard drive). But that became a non-issue as hardware quickly advanced. Unless you were privy to these details in the development history of this OS, of course you would copy this filesystem hierarchy in your Unix clone. Cargo culting doesn’t make you an idiot, especially when you lack design rationale documentation and source code.

                          1. 2

                            … it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.

                            Ah. Gotcha. That seems like a much more nuanced position, and I would tend to agree with that.

                            I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin

I’m not sure either. My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be set as read-only may have had some bearing too, but I’m not sure.
Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                            1. 3

                              My guess is since “other stuff” was sticking around in /usr, might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be set as read-only, may have had some bearing too, but I’m not sure.

                              That was a point further into the discussion. I can’t find the archived devwiki entry for usrmerge, but I pulled up the important parts from Allan.

                              Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.

                              Seems like we did contemplate /kernel and /linker at one point in the discussion.

                              What convinced me of putting all this in /usr rather than on / is that I can have a separate /usr partition that is mounted read only (unless I want to do an update). If everything from /usr gets moved to the root (a.k.a hurd style) this would require many partitions. (There is apparently also benefits in allowing /usr to be shared across multiple systems, but I do not care about such a setup and I am really not sure this would work at all with Arch.)

                              https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022629.html

Evidently, we also had a request to symlink /bin/awk to /usr/bin/awk for distro compatibility.

                              This actually will result in more cross-distro compatibility as there will not longer be differences about where files are located. To pick an example, /bin/awk will exist and /usr/bin/awk will exist, so either hardcoded path will work. Note this currently happens for our gawk package with symlinks, but only after a bug report asking for us to put both paths sat in our bug tracker for years…

                              https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022632.html

                              And bug; https://bugs.archlinux.org/task/17312

                        2. 18

                          Sorry, I can’t tell from your post - why is it still useful today? This is a serious question, I don’t recall it ever being useful to me, and I can’t think of a reason it’d be useful.

                          1. 2

                            My understanding is that on macOS, an OS upgrade can result in the contents of /bin being overwritten, while the /usr/local directory is left untouched. For that reason, the most popular package manager for macOS (Homebrew) installs packages to /usr/local.

                            1. 1

                              I think there are cases where people want / and /usr split, but I don’t know why. There are probably also arguments that the initramfs/initrd is enough of a separate system/layer for unusual setups. Don’t know.

                              1. 2

                                It’s nice having /usr mounted nodev, whereas I can’t have / mounted nodev for obvious reasons. However, if an OS implements their /dev via something like devfs in FreeBSD, this becomes a non-issue.

                                1. 2

Isn’t /dev its own mountpoint anyway?

                                  1. 1

                                    It is on FreeBSD, which is why I mentioned devfs, but idk what the situation is on Linux, Solaris and AIX these days off the top of my head. On OpenBSD it isn’t.

                                    1. 2

Linux mounts devtmpfs by default.

                            2. 14

                              The complexity this introduced has far outweighed any perceived benefit.

                              1. 13

                                I dunno, hasn’t been useful to me in the last 20 years or so. Any problem that it solves has a better solution in 2020, and probably had a better solution in 1990.

                                1. 6

                                  Perhaps people kept the split because it was useful? That seems a bit more likely to me.

                                  Do you have a counter-example where the split is still useful?

                                  1. 3

                                    The BSDs do have the related /usr/local split which allows you to distinguish between the base system and ports/packages, which is useful since you may want to install different versions of things included in the base system (clang and OpenSSL for example). This is not really applicable to Linux of course, since there is no ‘base system’ to make distinct from installed software.

                                    1. 3

                                      Doesn’t Linux have the same /usr/local split? It’s mentioned in the article.

                                      1. 5

I tend to reach for /opt/my-own-prefix-here (or per-package), myself, mainly to make it clear what it is, and to avoid the risk of clobbering anything else in /usr/local (like if it’s a BSD). It’s also in the FHS, so pedants can’t tell you you’re doing it wrong.

                                        1. 4

                                          It does - this is generally used for installing software outside the remit of the package manager (global npm packages, for example), and it’s designated so by the FHS which most distributions follow (as other users have noted in this thread), but it’s less prominent since most users on Linux install very little software not managed by the package manager. It’s definitely a lot more integral in BSD-land.

                                          1. 3

                                            […] since most users on Linux install very little software not managed by the package manager

The Linux users around me still do heaps of ./configure && make install, but I see your point when contrasted against the rise of PPAs, Docker and nodenv/rbenv/pyenv/…

                                            1. 3

Yeah, I do tons of configure && make install stuff, sometimes for things that are also in the distro - and this split of /usr/local is sometimes useful because it means that if I attempt a system update, my custom stuff isn’t necessarily blasted.

                                              But the split between /bin and /usr/bin is meh.

                                        2. 1

                                          That sounds sensible. Seems like there could be a command that tells you the difference. Then, a versioning scheme that handles the rest. For example, OpenVMS had file versioning.

                                    1. 2

Many of the programs that were upgraded to be 64-bit compatible are now buggy. This makes Catalina feel more unstable.

                                      1. 2

Implicits are one of the things I did not like about Scala. I prefer Python’s “explicit is better than implicit” philosophy.

                                          1. 2

                                            SMH, of course. Forgot they started blogging about this recently.

                                              1. 1

                                                Oh that’s nice.

                                          1. 3

                                            Can you still buy one?

                                            1. 1

                                              I appreciate more people discussing this. My observations tell me that PRs are fine for open source development, where distributed development teams are working at different times and at different paces. I think there are more optimal workflows for development teams working at the same company. That being said, things were pretty bad before PRs. PRs have created a sane workflow that most companies are better off for having adopted.

                                              There is no singular optimal workflow. If you want to optimize the workflow for your team, then you have to build a workflow that takes into account:

                                              • How many people are contributing to the repository? A 3 person development team requires different controls than a company with 10,000 engineers all contributing to a monorepo.
                                              • How often is the code in the repository put into production? Some companies release hundreds of times per day, while others release once per year.
                                              • Does the repository have good test coverage? If the code has sufficient tests, then changes can be introduced with less risk of breaking things. Note: this may not help design related issues.
• How established is the repository? Are there existing conventions that people know to follow? Are massive changes being made that often cause merge conflicts?

                                              I am sure there are other important points to consider that I am missing. The above is a good start though and I think this should be a discussion that teams have. If the answer is to stick with PRs, then that is fine. Maybe there is some room for a little experimentation though.

                                              1. 1

                                                Cool, I am still on neovim.

                                                1. 25

                                                  The inevitable fate of languages with no definition of done: an ever-decreasing bar to language additions.

                                                  1. 17

                                                    I like that the most recent C standard added no new features.

                                                    1. 2

                                                      It was only bugfixes, clarifications to language and bringing the de jure wording for some features into line with the de facto meanings?

                                                      1. 5

                                                        Basically. C18 is a “bugfix release” for C11. C2x isn’t supposed to add too much either, mostly decimal arithmetic, IIRC.

                                                    2. 5

                                                      I would have so much respect for a language designer who just said “that’s it, it’s done.”

                                                      1. 7

Doesn’t this describe Lua?

                                                        1. 6

                                                          Or even more, Standard ML, which comes with a spec and not a reference implementation.

                                                      2. 4

                                                        Isn’t one of Ruby’s goals “developer happiness”?

It’s hard to go from this to a language spec, and, well, at some point you have to handle the conflicts between developers who enjoy a simple language and those who like to ad[oa]pt every shiny feature of their weekend programming tools.

                                                        IMHO, adding more stuff to Ruby will only make it worse, unless the core team start taking cues from Rust’s development model.

                                                        1. 16

                                                          you have to handle the conflicts between developers who enjoy a simple language

                                                          Developers who enjoy a simple language would have left Ruby long ago.

                                                          1. 4

Ruby was never a simple language, and Rails started using every niche behaviour there was. Its one saving grace was that you could write good inner DSLs to hide this.

                                                            1. 3

                                                              Which is why - despite being a professional Rubyist for years - I reach for a Lisp these days when coding.

                                                              1. 1

                                                                I think a lot did, and Ruby (MRI) is not getting simpler.

                                                                1. 5

                                                                  Also, a lot of awesome people joined. We overemphasize leavers, because they are usually already at the top of the community.

                                                                  Ruby is also larger than ever, even if its growth is somewhat on a plateau.

                                                              2. 13

                                                                unless the core team start taking cues from Rust’s development model

                                                                I think Rust is on a similar feature death march, it just hasn’t been going on for long enough to make it readily apparent.

                                                                1. 1

                                                                  Isn’t one of Ruby’s goals “developer happiness”?

                                                                  I’ve most often heard this expressed as “principle of least surprise”.

                                                                  Kind of tough to keep to that and yet evolve the language in a meaningful way. See my response to @soc for my weak theory that major changes should just get a whole new language name.

                                                                  1. 6

                                                                    Note that Matz has himself said that the measure for surprise is his surprise.

                                                                    1. 3

                                                                      That makes a lot of sense and explains a bunch, in that it makes the decisions he made to evolve the language more cogent as his own personal sensibilities around language design evolved.

                                                                      While there’s no principle of least surprise corollary per se, I think you can certainly make comparisons to Guido with Python. He went to work at Dropbox and Google, and worked on GINORMOUS code bases where type hinting is a huge help, and was also on a type theory kick, so he evolved the language in that direction despite the pitchforks and torches carried by large chunks of the community.

                                                                2. 4

So I’m unsure whether I’d go quite as far as you do in this statement, but I’d actually been thinking that this reminded me a lot of all the controversy in the Python community around some of the more tectonic language changes, like the addition of type hinting in Python 3.

                                                                  I’m almost to the point where I feel like language designers who want to make tectonic changes should be honest with themselves and their audience and just change the language’s name.

For instance, well written Python 2 and well written, idiomatic, heavily type-hinted Python 3 feel very different to me and to many other fans. This isn’t necessarily a bad thing, but maybe we could suck some of the pointless “Who moved my cheese?” controversy out of the conversation and focus on what’s different and maybe better or worse instead.

Ruby’s evolution has certainly been contentious - the syntactic sugar borrowed from Perl is part of what attracted many of us to the language initially (Yes I was a Perl guy in the 90s. So were many of us. No shame :) but when Matz realized how problematic it could be in the modern context and moved away from it, a whole lot of code broke and there was a whole lot of unhappiness, even if ultimately many people agreed it was a good idea.

                                                                  1. 2

I think I’d regard this ongoing feature creep in languages as just as sleazy as those online services that keep adjusting their privacy agreement after you have signed up to collect more and more data on you.

In the end, if language designers were honest they should probably give their “new” language a new name. I assume that the immediate adoption that comes from reusing the name of an existing language is too enticing, though.

                                                                    1. 4

                                                                      These languages are open source while say Github/Google/Facebook aren’t. The whole point of open source is that users have control.

                                                                      If enough people really care, they can fork it, and forks matter (XEmacs, etc.).

                                                                      I’m one of those people who doesn’t really see the value in Python 3. But it actually provides the perfect opportunity for someone to fork Python 2, because it’s stable, and it has a stable set of libraries.

                                                                      I haven’t seen very serious efforts to do this, which indicates to me that the community doesn’t care enough. (A few people have tried, to their credit, but overall the message is clear.) Lots of people will complain about things but they don’t want to put in the effort to fix them or even maintain the status quo.

                                                                      1. 2

I think I’d regard this ongoing feature creep in languages as just as sleazy as those online services that keep adjusting their privacy agreement after you have signed up to collect more and more data on you.

                                                                        The way you phrased this shines a light on an interesting facet of this whole discussion: The contract between a language’s designer(s) and its user community.

                                                                        From the designers perspective, this is their bat and ball, but it seems like at least some users don’t see it that way.

                                                                        1. 2

                                                                          I haven’t seen very serious efforts to do this, which indicates to me that the community doesn’t care enough. (A few people have tried, to their credit, but overall the message is clear.) Lots of people will complain about things but they don’t want to put in the effort to fix them or even maintain the status quo.

                                                                          Trick being you need to fork not just the language but the ecosystem. HUGE parts of the Python ecosystem have pointedly abandoned 2 and made incompatible changes, so everything your fork eats is frozen in time. Pretty high price to pay unless, as you say, you really, REALLY care.

                                                                    1. 5

Title should have mentioned that this is mostly about Selenium. Different types of automation have different ROI profiles. Also, maybe some companies have a UI defect cost high enough to warrant the use of Selenium tests. All of this is so context-dependent.

                                                                      1. 9

                                                                        Some of that context: Vendor of software for debugging in production says automated testing doesn’t work.

                                                                      1. 1

                                                                        my advice would be first to consider whether the problem is actually a more human one.

Of course the problem is a human one. Naming classes is a chore. BEM exists to help people organize the names so teams do not accidentally cause problems in some other part of the CSS. Functional CSS takes a different approach, one that limits the number of names.

                                                                        Which one is better? I dunno. Maybe a combination of both like the author said. Figure out what works best for your website.

                                                                        1. 7

I wish it was called “web”. One syllable is far more convenient in speech than nine.

                                                                          1. 2

I also wondered if the English-speaking world would change “double-u” into something shorter now that we say “www” so often. Turns out we rather got rid of the “www” instead of the “double-u”.

                                                                            1. 3

                                                                              I shorten it to “dub”. So www is “dub dub dub”.

                                                                            2. 1

                                                                              Just pronounce it “woooo” :)

                                                                              1. 1

                                                                                I say “triple double-U”, which is only five syllables.

                                                                              1. 7

                                                                                Playing with urbit, working more on some blogposts.

                                                                                1. -3

                                                                                  No need dude. Drop that right wing bullshit.

                                                                                  1. 4

I’m well aware of the… unique philosophical views of its creator; but independent of that, it’s actually a very interesting thing. All data is implicitly shareable. The VM that powers it is a fully transactional computer, including being able to roll back. Every change to the filesystem is a commit, like git. Updates to software are instantly distributed to users. If a user goes offline, then all of the changes they missed get reapplied when they come back.

                                                                                    It’s the kind of stuff that works as a massive inspiration for creating my own stuff in the future.

                                                                                1. 10

                                                                                  For a brief moment I was hoping that this worked on Lua 5.3! Alas, it is not so…

                                                                                  1. 3

Compile speed is one of my largest complaints about Rust. These are good improvements but it’s still so far from where I’d like it. :(

                                                                                    My tonic/tower/tokio project with ~300 LOC takes about 3 minutes to build if I touch one leaf file not referenced elsewhere. The equivalent in Go would take less than 1 minute. My project is still tiny, I worry about what happens in 2, 3, 5 years. I work with C++ that takes over an hour for a full rebuild, but even touching just one file compiles and links quicker than my 300LOC Rust binary.

                                                                                    1. 1

It is consistently improving. There are tools, such as cargo check, RLS and linting, to help as well.

                                                                                      The only time I really have an issue is when the codebase is so performance sensitive that I have to use release builds.

                                                                                      1. 1

                                                                                        Have you used Go or D (or even C)? I found you get spoiled by the speed there.

                                                                                      2. 1

                                                                                        ~300 LOC takes about 3 minutes to build if I touch one leaf file not referenced elsewhere.

                                                                                        Is this common? I didn’t realise Rust was that much slower than C++.

                                                                                        1. 4

This is not only the compiler. Some ecosystems rely heavily on generics for modularity. Those generics are instantiated and compiled the moment all generic parameters are resolved, which is often delayed until the final program. This also leads to a lot of code being instantiated and optimised by LLVM. Touching that program will lead to all that code being instantiated again.

                                                                                          Tools like cargo-bloat can show that effect.

rustc is not fast, but not that slow either. But if you are actively asking it to do a lot of work, you will get it.

                                                                                          This is, btw., one of the reasons I’m not using the tokio stack.

                                                                                          1. 1

And to clarify for those not well versed in Rust, the problem with generics is only for functions/types that must be monomorphized. Rust gives you the option of choosing between monomorphized generics and vtable-based generics (trait objects). The former often give better runtime performance at the cost of compile times. Libraries that use monomorphized generics heavily can dramatically degrade compile times, similar to how template-heavy C++ has poor compile times.

                                                                                            1. 1

                                                                                              Touching that program will lead to all that code being rolled out again.

                                                                                              Sure, but only for the touched file’s compilation unit, right? So it’s that and linking the executable that takes 3 minutes.

                                                                                              I have a template-heavy (stdlib and my own) C++ project here, consisting of multiple libs and executables and ~60,000 lines of C++ which all compiles and links from scratch in just over 4min on a computer from 2011. There’s a huge difference between that and a 300LOC program which takes 3min to do an incremental build.

                                                                                              Edit: excluded test LOC because I didn’t compile those when I timed the build.

                                                                                              Edit2: I guess this is really all moot because I’m sure there are benchmarks comparing apples to apples but even so, 300LOC is just tiny which is why I am so surprised.

                                                                                              1. 1

It’s 300 LOC, but it uses a lot of libraries, so it may end up being 60k LOC in the end. The big difference is that it’s 300 LOC with tens of thousands of lines of essentially heavy template code underneath. It’s also only using ld; lld apparently speeds this up massively, and there are seemingly plans to migrate rustc in that direction. I’m also running it via WSL2, so I’m going to look at how it differs natively.

                                                                                                1. 1

                                                                                                  It’s 300 LOC but using a lot of libraries, it may end up being 60k LOC in the end.

                                                                                                  I was actually compiling 60KLOC from scratch though. And that 60K uses lots of third-party templates. If I included third-party templates I would have much more than 60KLOC!

                                                                                                  I’m also running it via WSL2, so I’m going to look at how it differs natively.

                                                                                                  This sounds like it might make a difference.

                                                                                        1. 1

There seems to be a belief amongst memory safety advocates that it is not one out of many ways in which software can fail, but the most critical one in existence today, and that, if programmers can’t be convinced to switch languages, maybe management can be made to force them.

I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right, but I’m trying to understand it. The quoted statistics about found vulnerabilities seem unconvincing, and are just as likely to indicate that static analysis tools have made these kinds of programming errors easy to find in existing codebases.

                                                                                          1. 19

                                                                                            Not all vulnerabilities are equal. I prioritize those that give attackers full control over my computer. They’re the worst. They can lead to every other problem. Plus, their rootkits or damage might not let you have it back. You can lose the physical property, too. Alex’s field evidence shows memory unsafety causes around 70-80% of this. So, worrying about hackers hitting native code, it’s rational to spend 70-80% of one’s effort eliminating memory unsafety.

                                                                                            More damning is that languages such as Go and D make it easy to write high-performance, maintainable code that’s also memory safe. Go is easier to learn, with a huge ecosystem behind it, too. Ancient Java being 10-15x slower than C++ made for a good reason not to use it. Now most apps are bloated and slow, the market uses them anyway, some safe languages are really lean and fast, using them brings those advantages, and so there’s little reason left for memory-unsafe languages. Even in intended use cases, one can often use a mix of memory-safe and -unsafe languages, with unsafe used on the performance-sensitive or lowest-level parts of the system. Moreover, safer languages such as Ada and Rust give you guarantees by default on much of that code, allowing you to selectively turn them off only where necessary.

                                                                                            If you’re using unsafe languages and have money, there are also tools that automatically eliminate most memory-unsafety bugs. That companies pulling in 8-9 digits still have piles of them shows total negligence. Same with those in open-source development who aren’t doing much better. So, on that side of things, whatever tool you encourage should lead to memory safety even with apathetic, incompetent, or rushed developers working on code with complex interactions. Doubly true if it’s multi-threaded and/or distributed. A safe, orderly-by-default setup will prevent loads of otherwise inevitable problems.

                                                                                            1. 13

                                                                                              The quoted statistics about found vulnerabilities seem unconvincing

                                                                                              If studies by security teams at Microsoft and Google, and analysis of Apple’s software is not enough for you, then I don’t know what else could convince you.

                                                                                              These companies have huge incentives to prevent exploitable vulnerabilities in their software. They get the best developers they can, they are pouring many millions of dollars into preventing these kinds of bugs, and still regularly ship software with vulnerabilities caused by memory unsafety.

                                                                                              A “why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                              1. 3

                                                                                                A “why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                                No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                1. 9

                                                                                                  No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                  What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                  The principal cost of memory safety in Rust, IMO, is that the set of valid programs is more heavily constrained. You often hear this manifest as “fighting with the borrow checker.” This is definitely an impediment. I think a large portion of folks get past this stage, in the sense that “fighting the borrow checker” is, for the most part, a temporary hurdle. But there are undoubtedly certain classes of programs that Rust will make harder to write, even for Rust experts. (A small sketch of what I mean follows below.)

                                                                                                  Like all trade-offs, the hope is that the juice is worth the squeeze. That’s why there has been a lot of effort put into making Rust easier to use, and a lot of effort put into returning good error messages.
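
                                                                                                  As a toy illustration (my own, not from any study): mutably borrowing two disjoint halves of a vector is perfectly sound, but the checker can’t prove the halves are disjoint from the borrows alone, so the standard library ships split_at_mut as a vetted workaround:

                                                                                                      fn main() {
                                                                                                          let mut v = vec![1, 2, 3, 4];

                                                                                                          // Rejected: two simultaneous mutable borrows of `v`, even though
                                                                                                          // the requested halves don't overlap.
                                                                                                          // let (a, b) = (&mut v[..2], &mut v[2..]);

                                                                                                          // Accepted: split_at_mut proves the disjointness once, internally,
                                                                                                          // using audited unsafe code.
                                                                                                          let (a, b) = v.split_at_mut(2);
                                                                                                          a[0] += b[0];
                                                                                                          println!("{:?}", v); // [4, 2, 3, 4]
                                                                                                      }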

                                                                                                  I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                  I’ve seen people ask this before, and my response is always, “what hypothetical study would actually convince you?” If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                  IMO, the most effective way to show this is probably to reason about vulnerabilities due to memory safety in aggregate. But to do that, you need a large corpus of software written in Rust that is also widely used. But even this methodology is not without its flaws.

                                                                                                  1. 2

                                                                                                    If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                    That’s true - but my comment was in response to one claiming that the bug surveys published by Microsoft et al should be convincing.

                                                                                                    I could imagine something similar being done with large Rust code bases in a few years, perhaps.

                                                                                                    I don’t have enough Rust experience to have a good intuition on this so the following is just an example. I have lots of C++ experience with large code bases that have been maintained over many years by large teams. I believe that C++ makes it harder to write correct software: not (just) because of memory safety issues, undefined behavior etc. but also because the language is so large, complex and surprising. It is possible to write good C++ but it is hard to maintain it over time. For that reason, I have usually promoted C rather than C++ where there has been a choice.

                                                                                                    That was a bit long-winded but the point I was trying to make is that languages can encourage or discourage different classes of bugs. C and C++ have the same memory safety and undefined behavior issues but one is more likely than the other to engender other bugs.

                                                                                                    It is possible that Rust is like C++, i.e. that its complexity encourages other bugs even as its borrow checker prevents memory safety bugs. (I am not now saying that is true, just raising the possibility.)

                                                                                                    This sort of consideration does not seem to come up very often when people claim that Rust is obviously better than C for operating systems, for example. I would love to read an article that takes this sort of thing into account - written by someone with more relevant experience than me!

                                                                                                    1. 7

                                                                                                      I’ve been writing Rust for over 4 years (after more than a decade of C), and in my experience:

                                                                                                      • For me Rust has completely eliminated memory unsafety bugs. I don’t even use debuggers or Valgrind any more, unless I’m integrating Rust with C.
                                                                                                      • I used to have, at least during development, all kinds of bugs that spray the heap, corrupt some data somewhere, use uninitialized memory, or use memory after freeing it. Now I get compile-time errors or panics (which are safe, and technically unwind like C++ exceptions).
                                                                                                      • I get fewer bugs overall. Lack of NULL and mandatory error handling are amazing for reliability.
                                                                                                      • Built-in unit test framework, richer standard library and easy access to 3rd-party dependencies help too (e.g. instead of hand-rolling yet another buggy hash table of my own, I use a well-tested, well-optimized one).
                                                                                                      • My Rust programs are much faster. Single-threaded Rust is 95% as fast as single-threaded C, but I can easily parallelize way more than I’d ever dare in C.

                                                                                                      The costs:

                                                                                                      • Rust’s compile times are not nice.
                                                                                                      • It took me a while to become productive in Rust. “Getting” ownership requires unlearning C and a lot of practice. However, I’m not fighting the borrow checker any more, and I’m more productive in Rust thanks to higher-level abstractions (e.g. I can write a map/reduce iterator chain that collects into a btree in one line; a sketch follows below).
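
                                                                                                      For a flavor of that kind of one-liner (an illustrative example of mine, not the parent’s actual code):

                                                                                                          use std::collections::BTreeMap;

                                                                                                          fn main() {
                                                                                                              // Map each word to its length, collecting into an ordered map in one chain.
                                                                                                              let lengths: BTreeMap<&str, usize> =
                                                                                                                  ["pear", "apple", "fig"].iter().map(|w| (*w, w.len())).collect();
                                                                                                              println!("{:?}", lengths); // {"apple": 5, "fig": 3, "pear": 4}
                                                                                                          }
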
                                                                                                2. 0

                                                                                                  Of course older software, mostly written in memory-unsafe languages, sometimes written in a time when not every device was connected to a network, contains more known memory vulnerabilities. Especially when it’s maintained and audited by companies with excellent security teams.

                                                                                                  These statistics don’t say much at all about the overall state of our software landscape. It doesn’t say anything about the relative quality of memory-unsafe codebases versus memory-safe codebases. It also doesn’t say anything about the relative sizes of memory-safe and memory-unsafe codebases on the internet.

                                                                                                  1. 10

                                                                                                    iOS and Android aren’t “older software”. They were born to be networked, and supposedly secure, from the start.

                                                                                                    Memory-safe codebases have 0% memory-unsafety vulnerabilities, so that is easily comparable. For example, check out the CVE database. Even within one project — Android — you can easily see whether the C or the Java layers are responsible for the vulnerabilities (spoiler: it’s C, by far). There’s a ton of data on all of this.

                                                                                                    1. 2

                                                                                                      Android is largely cobbled together from older software, as is iOS. I think Android still needs a Fortran compiler to build some dependencies.

                                                                                                      1. 9

                                                                                                        That starts to look like a No True Scotsman. When real-world C codebases have vulnerabilities, they’re somehow not proper C codebases. Even when they’re part of flagship products of top software companies.

                                                                                                        1. 2

                                                                                                          I’m actually not arguing that good programmers are able to write memory-safe code in unsafe languages. I’m arguing that vulnerabilities happen at all levels in programming, and that, while memory-safety bugs are terrible, there are common classes of bugs in more widely used (and, more importantly, more widely deployed) languages that make it just one class of bugs out of many.

                                                                                                          When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                          We’d have reached some sort of conclusion earlier if you’d argued with the point I was making rather than with the point you wanted me to make.

                                                                                                          1. 4

                                                                                                            When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                            We actually did. Sites/companies that solved XSS did so by banning generation of markup “by hand”, and instead mandated use of safe-by-default template engines (e.g. JSX). Same with SQL injection: years of saying “be careful, remember to escape” didn’t work, and “always use prepared statements” did (a sketch of the difference follows below).

                                                                                                            These classes of bugs are prevalent only where developers think they’re not a problem (e.g. they’ve always been writing pure PHP, and will continue to write pure PHP forever, because there’s nothing wrong with it, apart from the XSS and SQLi, which are a force of nature and can’t be avoided).
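
                                                                                                            A minimal sketch of the prepared-statement rule (assuming the rusqlite crate; the table and query are invented for illustration):

                                                                                                                use rusqlite::{params, Connection, Result};

                                                                                                                fn find_user(conn: &Connection, user_input: &str) -> Result<Option<String>> {
                                                                                                                    // Don't: format!("SELECT name FROM users WHERE id = {}", user_input)
                                                                                                                    // Do: the placeholder keeps user data out of the SQL grammar entirely.
                                                                                                                    let mut stmt = conn.prepare("SELECT name FROM users WHERE id = ?1")?;
                                                                                                                    let mut rows = stmt.query(params![user_input])?;
                                                                                                                    match rows.next()? {
                                                                                                                        Some(row) => row.get(0).map(Some),
                                                                                                                        None => Ok(None),
                                                                                                                    }
                                                                                                                }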

                                                                                                            1. 1

                                                                                                              This kind of makes me think of someone hearing others talk about trying to lower the murder rate and then hysterically going into a rant about how murder is only one class of crime.

                                                                                                              1. -1

                                                                                                                I think a better analogy is campaigning aggressively to ban automatic rifles when the vast majority of murders are committed using handguns.

                                                                                                                Yes, automatic rifles are terrible. But pointing them out as the main culprit behind the high murder rate is also incorrect.

                                                                                                                1. 4

                                                                                                                  That analogy is really terrible and absolutely does not fit the context here. It’s also very skewed; the murder rate is not the reason for the calls for bans.

                                                                                                            2. 2

                                                                                                              Although I mostly agree, I’ll note Android was originally built by a small business acquired by Google that continued to work on it probably with extra resources from Google. That makes me picture a move fast and break things kind of operation that was probably throwing pre-existing stuff together with their own as quickly as possible to get the job done (aka working phones, market share).

                                                                                                          2. 0

                                                                                                            Yes, if you zoom in on code bases written in memory-unsafe languages, you unsurprisingly get a large number of memory-unsafety vulnerabilities.

                                                                                                            1. 12

                                                                                                              And that’s exactly what illustrates “eliminates a class of bugs”. We’re not saying that we’ll end up in utopia. We just don’t need that class of bugs anymore.

                                                                                                              1. 1

                                                                                                                Correct, but the author is arguing that this is an exceptionally grievous class of security bugs, and (in another article) that developers’ judgement should not be trusted on this matter.

                                                                                                                Today, the vast majority of new code is written for a platform where execution of untrusted memory-safe code is a core feature, and the safety of that platform relies on a stack of sandboxes written mostly in C++ (browser) and Objective-C/C++/C (system libraries and kernel).

                                                                                                                Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                1. 11

                                                                                                                  Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                  Hm, so: Apple has developed Swift, which is generally considered a systems programming language, to replace Objective-C, which was their main programming language and already had safety features like baked-in ARC. Google has implemented Go; Mozilla, Rust. Google uses tons of Rust in Fuchsia and has recently imported the Rust compiler into the Android source tree.

                                                                                                                  Microsoft has recently been blogging about Rust quite a lot, is often seen hanging around the community, and writes about how central memory-safety problems are to its security story. Before that, Microsoft spent tons of engineering effort on Haskell as a research base and on C#/.NET as a replacement for its C/C++ APIs.

                                                                                                                  Amazon has implemented Firecracker in Rust and bragged about it at their AWS keynote.

                                                                                                                  Come again about “dipping toes”? Yes, there’s huge amounts of stack around, but there’s also huge amounts to be written!

                                                                                                                  What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                  Because it’s always been a crisis and now we have the tech to fix it.

                                                                                                                  P.S.: In case this felt a bit like bragging about Rust over the others: it’s just where I’m most aware of things happening. Go and Swift are doing fine, I just don’t follow them as much.

                                                                                                                  1. 2

                                                                                                                    The same argument was made for Java, which, on top of its memory safety, was presented as a pry bar against the nearly complete market dominance of the Wintel platform at the time. Java evangelism managed to convert new programmers - and universities - to Java, but not the entire world.

                                                                                                                    Oracle’s deadly embrace of Java didn’t move it to rewrite its main cash cow in Java.

                                                                                                                    Rust evangelists should ask themselves why.

                                                                                                                    I think that of all the memory-safe languages, Microsoft’s C++/CLI effort comes closest to understanding what needs to be done to entice coders to move their software into a memory-safe environment.

                                                                                                                    At my day job, I actually try to spend my discretionary time trying to move our existing codebase to a memory-safe language. It’s mostly about moving the pieces into place so that green-field software can seamlessly communicate with our existing infrastructure. Then seeing what parts of our networking code can be replaced, slowly reinforcing the outer layers while the inner core remains memory unsafe.

                                                                                                                    Delicate stuff, not something you want the VP of Engineering to issue edicts about. In the meantime, I’m still a C++ programmer, and I really don’t appreciate this kind of article painting a big target on my back.

                                                                                                                    1. 4

                                                                                                                      Java and Rust are vastly different ballparks for what you describe. And yet Java is used successfully in the database world, so it is definitely to be considered. The whole search-engine/database world is full of Java stacks.

                                                                                                                      Oracle didn’t rewrite its cash cow because - yes - they are risk-averse, and that’s reasonable. That’s no statement on the tech they write it in. But they did write tons of Java stacks around Oracle DB.

                                                                                                                      It’s an argument on the level of “Why isn’t everything at Google Go now?” or “Why isn’t Apple using Swift for everything?”.

                                                                                                                      1. 2

                                                                                                                        Looking at https://news.ycombinator.com/item?id=18442941 it seems that it was too late for a rewrite when Java matured.

                                                                                                                    2. 8

                                                                                                                      What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                      To start the multi-decade effort now, and not spend more decades just saying that buffer overflows are fine, or that, despite 40 years of evidence to the contrary, programmers can just avoid causing them.

                                                                                                          3. 9

                                                                                            I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right

                                                                                                            You didn’t? SQL injections are still #1 in the OWASP top 10. PHP had to retrain an entire generation of engineers to use mysql_real_escape_string over vulnerable alternatives. I could go on…

                                                                                            I think we have internalized the SQL-injection arguments but have still not accepted the memory-safety ones.

                                                                                                            1. 3

                                                                                              I remember arguments being presented to other programmers. This article (and another one I remembered, which, as it turns out, is written by the same author: https://www.vice.com/en_us/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it ) explicitly targets the layperson.

                                                                                              The articles use the language of whistleblowers. They suggest that counter-arguments are made in bad faith, that developers are trying to hide this ‘dirty secret’. Consider that C/C++ programmers skew older and have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                              Arguments aimed at programmers, like this one, at least acknowledge the counter-arguments and frame the discussion as one of industry maturity, which I think is correct.

                                                                                                              1. 2

                                                                                                I do not see it as bad faith. There is a non-zero number of people who say they can write memory-safe C++, despite a massive amount of evidence that even the best programmers get tripped up by UB and threads.

                                                                                                                1. 1

                                                                                                                  Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                                                  There’s an argument to be made that the resurging interest in systems programming languages through Rust, Swift and Go futureproofs experience in those areas.

                                                                                                              2. 5

                                                                                                                Memory safety advocate here. It is the most pressing issue because it invokes undefined behavior. At that point, your program is entirely meaningless and might do anything. Security issues can still be introduced without memory unsafety of course, but you can at least reason about them, determine the scope of impact, etc.
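
                                                                                                A toy sketch of what “meaningless” looks like in practice (my example; note that Rust at least makes you opt in with unsafe, whereas the C equivalent compiles silently):

                                                                                                    fn main() {
                                                                                                        let dangling: *const i32;
                                                                                                        {
                                                                                                            let x = 42;
                                                                                                            dangling = &x; // the raw pointer quietly outlives `x`
                                                                                                        }
                                                                                                        // Undefined behavior: reading through a dangling pointer. The
                                                                                                        // compiler may assume this never happens, so *any* behavior is
                                                                                                        // "correct" from here on.
                                                                                                        let v = unsafe { *dangling };
                                                                                                        println!("{}", v);
                                                                                                    }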

                                                                                                              1. 3

                                                                                                                I think PHP has lost its identity. I think P++ only muddies the waters more. The problems that PHP was originally meant to solve are now minor issues. The PHP team needs to decide on what niche or market they want to target and then go after it. They have spent years trying to be a little bit of everything but not the best at anything.

                                                                                                                1. 5

                                                                                                                  Surprisingly, no mention of Hack.

                                                                                                                  1. 3

                                                                                                    Yeah, that is interesting. I think Hack dropped support for some popular frameworks a while ago, hence dropping PHP compatibility.

                                                                                                                    I’m neither a PHP or Hack developer, but from the outside it looks like the two groups had diverging interests and couldn’t work together. Not saying that’s good or bad.

                                                                                                                    I did think Hack was a good idea, i.e. to rescue a bad but useful language, but I guess there is more than one way to do it.