1. 6

    End-to-End doesn’t make sense in this case, because the other ‘end’ is a future version of yourself.

    A faithful equivalent to ‘end-to-end’ in the context of a backup is “Is it encrypted at rest with keys only I have access to?” And the answer to “Are Apple iCloud backups encrypted at rest with keys I control?” is: “Yes, but…”

    Yes, Apple iCloud backups of photos/contacts are encrypted on Apple’s servers, by keys that are yours; it works similarly to iMessage in that there’s a keyring of unlock keys. But there is nothing stopping Apple from intercepting decryption events and keeping a copy for themselves, or lying to people about having encryption features at all.

    This is how the 2FA works: it requests that another machine that has a key is granted a token to unlock the ‘vault’ of contents in iCloud; then you can basically use the unsealed version of the data for the session. When you open a new session, you either have an unlock token or you don’t.

    So, for Apple to add their own key to a keychain every time they unseal a vault is relatively trivial; it’s impossible for you to know what unsealing keys are on your keyring. You can see what Apple shows you (devices attached to your account, sessions active on the web, etc.), but there’s nothing preventing them from adding themselves and not telling you.

    Ultimately when you don’t control the software or the platform, you only have trust left.

    1. 13

      Ultimately when you don’t control the software or the platform, you only have trust left.

      Picking out that last sentence because it highlights something that’s bothered me for a while. How many of us (software developers) have the ability to effectively audit the cryptographic tools we use on a day-to-day basis? Because it’s not enough to know C, or to have a rough understanding of RSA. Cryptography seems like a very intricate and fast-moving field, and even if you are competent, you need a lot of time to do a good audit.

      If I use open source software that I ‘control’, but I have to trust that people more competent than I have done their job well and made things secure, how does my position differ from that of an Apple user?

      1. 6

        End-to-end means that you have the key, you transport an encrypted blob so that anyone in the middle can’t see contents, then the receiver has a specific key to decrypt it.

        Whether the receiver is you in the future, your best friend on Tuesday, or an extraterrestrial in the past does not change this.

        1. 3

          Yes, Apple iCloud backups of photos/contacts are encrypted on Apple’s servers, by keys that are yours; it works similarly to iMessage in that there’s a keyring of unlock keys.

          That is inaccurate. Backups are a special case where they get retrieved by a new device that has no keys whatsoever. The only authentication that is used is an account password, and sometimes a 2FA code.

          1. 10

            End-to-End doesn’t make sense in this case, because the other ‘end’ is a future version of yourself.

            Let’s implement some quick end-to-future-end encryption with GPG:

            gpg --encrypt --sign --armor --recipient nora@nora.codes --output file.asc file.txt
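
            Future-you can then decrypt it with the matching private key (assuming it’s still in your keyring):

            gpg --decrypt --output file.txt file.asc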
            
        1. 2

          Perhaps the regression in webcam quality is due to the bezels being thinner on the newer model? To keep the viewing angle the same (somewhere between 50–70° diagonally) in a thinner bezel, the sensor must be proportionally smaller.
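
          A quick sanity check, assuming a simple pinhole model (so ignoring the real lens stack): the diagonal field of view is

          FoV = 2 * arctan(d / (2 * f))

          where d is the sensor diagonal and f the focal length. For a fixed FoV, d scales linearly with f, and f is limited by the room available for the camera module, so a thinner lid means a proportionally smaller sensor.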

          1. 3

            That’s the technical reason. But the question is: are these slimmer MacBook Pros (post-2016) worth it? Sacrificing battery life, webcam and keyboard quality for a few millimeters?

          1. 7

            I like glassine negative sleeves. I put a little label on them describing the general time and location, and collect them in binders.

            1. 10

              This could just as easily be titled: “My customized, non-standard environment”. Yeah, the OP has installed a bunch of weird tools, but that doesn’t mean that the *nix environment is not pretty much the same as it has been for the past 20 years; it just means that the author has installed a bunch of non-standard tools and likes using them. It doesn’t seem to say much about the state of the *nix ecosystem in general, except maybe that there are a lot more specialized tools you can use these days.

              1. 11

                Hmm. I hear what you’re saying, but it’s a bit more nuanced than that. For the last 25+ years I’ve been doing development, I’m used to seeing variations in e.g. awk v. gawk or bash v. sh v. dash or the like. I think writing that all off as a “customized, non-standard environment” generically is a bit strong, yet the idea of shell scripts localized to Linux v. macOS v. SunOS or the like is pretty normal—and we generally have tools to deal with it, because the differences are, generally, either subtle, or trivial to work around.

                What I’m observing now, and what I’m saying I think I’m part of the “problem” with, is a general movement away from the traditional tools entirely. It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.

                I’m hearing in your comment that I may not have conveyed that terribly well, so I’ll think about how to clarify it.

                1. 4

                  I came here to post the same comment as @mattrose, and then read your response, which clarified your point pretty well. I think I can summarize the point by saying “general-purpose scripting languages are winning the ad hoc ‘tools market’ on Unix, in part due to the rise of flavor-specific additions to POSIX, limiting compatibility and creating more specialized, non-portable ‘*nix’ shell scripts. This fact is more easily corrected by using one of the many flavors of a higher-level scripting language that augments the Unix API with its own standard APIs” – or something to that effect.

                  1. 4

                    Thank you for sharing your experiences. You conveyed your point beautifully.

                    Observer bias is something that constantly comes to mind when I compare my experiences with others.

                    Our environment doesn’t do much to lessen that bias, either.

                    I don’t doubt what you have witnessed, just as I don’t doubt what mattrose has experienced.

                    For what it’s worth, from what I have seen, I cannot say that I have seen anything that validates either of y’alls experiences - but that’s because I have my head stuck in a completely different world ;)

                    1. 4

                      I think this is part of the Cambrian explosion of (open source) software. Twenty years ago, for any particular library there might be one or two well-maintained alternatives. OpenSSL and cURL, for example, achieved their central positions because they were the only reasonable option at the time.

                      I think that even then, there was (relatively) more variety in shell tooling because these tools have a far larger influence on many people’s user experience.

                      Compared to twenty years ago, the number of open source developers has grown by a lot. I’ve no idea how much, but I wouldn’t be surprised if it turned out to be a hundred or a thousandfold. It’s almost unthinkable today that there would be only one implementation of anything. And the variety of command-line tools has exploded even more.

                      1. 3

                        I think you’re right in that there is an explosion of tooling in the *nix world in the past 10 or so years, it seems every day there’s a new command line tool that does a traditional thing in a new and non-traditional way, but…

                        I think that the people who are writing software for *nix, even the ones writing the new tools that you are so fond of, realize that there is a baseline *nix … platform, for lack of a better word, and try very hard (and trust me, it’s not easy) to keep to that baseline API, and only depend on tools that they know are widely distributed, or can be bootstrapped up from those tools via package management systems like apt, macOS Homebrew, or the FreeBSD pkg tools. I would never write software trusting that the user has fish already installed on their machine, but I would trust that there is stuff like a Bourne shell (or something compatible), and grep, and even awk (it’s so small it even fits in busybox).

                        Personally, I think this explosion of tools is actually a good thing. I think it has upped user productivity and happiness to a great extent, because you can create your own environment that fits the way you do things. Don’t like vi, or emacs? Install vscode. Don’t like bash, use fish, or zsh, or even MS powershell. I write a lot of little tools in ruby, because I like the syntax, which means I end up writing a lot more scripts than I did back in the days when I was forced into using bash, or (euch) perl and I end up having a much nicer environment to work in.

                        The original reason I read your post is that I am worried about a fragmentation of the *nix API, but at a more basic level. For example, for many years, the way to configure IP addresses was ifconfig. There were a few shell script wrappers around it, but the base command was always available. These days, on FreeBSD, you still use ifconfig, but on some newer Linuxes, it’s not even installed anymore. And everyone does dynamic network configuration using drastically different tools. macOS moving away from the GNU utilities more and more, even when it doesn’t make sense (I just installed Catalina and I’m still trying to get used to zsh), is another example. And let’s not even get into the whole systemd thing. (FTR, I approve, but it bugs me that it’s so Linux-specific.)

                        Differences like these are troubling, and remind me of the bad old days when you had BSD, and Linux, and Solaris, and IRIX, and HP-UX, and AIX, and and and and. And every one of them had a different toolkit, and utilities.

                        Interestingly enough, all of these other variants faded away due to being tied to proprietary hardware (except, kinda, Solaris), but there doesn’t seem to be anything stopping this from happening again, and I do see similar things happening.

                        1. 2

                          The disappearance of ifconfig has nothing to do with dynamic configuration. ifconfig disappeared because its maintainers never adapted it to support new features of the network stack—not even support for multiple addresses on the same NIC. Someone could step up and do it, but no one did. In fact, the netlink API makes it much simpler to create a lookalike of the old Linux ifconfig or FreeBSD ifconfig, if someone feels like it. It would be no harder to create UI-compatible replacements for route, vconfig, brctl etc. There’s just hardly a reason to.

                          The problem with making such a tool is that there’s a lot to do if one is to make it as functional as iproute2. I have most of it compiled in a handy format in one place. I can’t see how an ifconfig lookalike could be meaningfully extended to handle VRF—you’d have to have “vrfctl” plus new route options.

                          The dynamic configuration tools call iproute2 in some form, usually. It has machine-readable output, even though the format could be better. Few are talking netlink directly.
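
                          For anyone following along, a quick side-by-side (interface names and addresses are illustrative):

                          # classic: ifconfig eth0 192.168.1.2 netmask 255.255.255.0
                          ip addr add 192.168.1.2/24 dev eth0

                          # classic: route add default gw 192.168.1.1
                          ip route add default via 192.168.1.1

                          # iproute2's machine-readable output, as mentioned above
                          ip -json addr show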

                        2. 1

                          It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.

                          Guess I don’t buy this premise or axiom. I write a lot of bourne shell, note, not bash, that almost always runs on any unix. Its not particularly hard, but a lot of linux developers just don’t seem to care or even try. Your perl example is good though, cause I’ve rewrote a lot of that early 2000 nonsense back into plain olde shell when i find it and made it smaller in the process.

                          And observing this where, exactly? Are you sure you’re not just in a bubble and self-reinforcing? I’ve found that just teaching people how to write shell scripts with, say, shellcheck and the regular tools tends to get them to realize that all these fancy new tools might be great, but for stuff that should last, sticking with builtin tools isn’t that hard and means less effort overall in later maintenance.
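
                          To make that concrete, this is the kind of plain POSIX shell I mean; a minimal sketch that shellcheck --shell=sh will keep honest:

                          #!/bin/sh
                          # Count ERROR lines per log file, using only POSIX tools.
                          for f in *.log; do
                              [ -e "$f" ] || continue  # skip the literal pattern when nothing matches
                              printf '%s: %s\n' "$f" "$(grep -c 'ERROR' "$f")"
                          done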

                      1. 6

                        I’ve been doing interviews for a large Nordic Bank the last couple of years. Many of the developers applying there are a bit older. That’s not negative in itself. It usually means more experience. But, one major concern for that large bank, when applicants are older, is that the developer is merely searching out this “stable” job to kind of breeze through, without too much effort, to retirement.

                        So, my two cents; if you are an older developer (like myself) try to be clear about why you are searching out that particular job.

                        PS: In the Nordics we have strong unions and labour laws; large companies can’t really fire people for not being productive enough. So this piece of advice might not be applicable in other places.

                        1. 5

                          It’s absolutely applicable in the United States.

                          The other piece is a concern that older people can’t be trained in the company’s local customs/culture, which can lead to massive communication and productivity problems even with the best of intentions.

                          1. 3

                            older people can’t be trained in the company’s local customs/culture

                            I’m sure you didn’t mean it this way, but to me, this sounds like when companies talk of ‘culture’ as an excuse for working stupidly long hours without a care for your life outside work, obligatory drinking and partying, and generally trying to pretend that a toxic workplace environment centred around a specific demographic is a positive thing. I can’t imagine what other sort of workplace culture an older person would have difficulty learning/fitting into.

                            I’d love it if you could expand a little on what you really mean.

                            1. 5

                              Every company builds its own culture. And yes, that matters.

                              • Some have strict chain of command. You do not go over your manager’s head to the top levels. Ever.
                              • Some have open access. Call the CEO whenever you have something you think he should hear.
                              • Some require all problems be presented with a possible, even if terrible, solution, at least as a starting point.
                              • Some companies have deep sharing cultures. If you can do something, you better be teaching other people how to do your job.
                              • Some allow people to attain mastery of their own space because it’s more efficient that way.
                              • Some companies actually do respect feedback.
                              • Some companies require people to only bring issues they know for certain are actually issues, to avoid distractions or rabbit-hole fixes.

                              An employee who has spent years working with that last bullet point is going to find it hard to work in a place where all issues are brought forward for at least brief discussion. That’s a cultural mismatch, and it’s not anything except an inability to utilize a person as trained.

                              1. 0

                                An employee who has spent years working with that last bullet point is going to find it hard to work in a place where all issues are brought forward for at least brief discussion.

                                People can just learn to do things differently. Do people find it hard to work in different ways at different places? If not, why is this a reason not to hire somebody? Putting such a strong requirement on something that can be changed anyway seems like a good way to have to go through more people than you have to during recruitment.

                                1. 3

                                  Do people find it hard to work in different ways at different places?

                                  Yes.

                            2. 2

                              I work at a place with a strong union in the US.

                              1. 5

                                that seems to be increasingly rare, for programming jobs anyways. or are you doing something completely different than that?

                                1. 4

                                  What has that experience been like?

                                  1. 1

                                    I do, too. Worker protection is rare in the US, though. Their comment is still mostly true.

                                  2. 1

                                    older people can’t be trained in the company’s local customs/culture

                                    I feel like sometimes when a company says someone isn’t a “culture fit”, they’re just using coded language to discriminate against a worker. Age discrimination is illegal in the US, but not hiring someone because of “culture fit” is totally fine. Similar with race, sex, etc.

                                    1. 3

                                      See my answer here: https://lobste.rs/s/qu40ze/how_prepare_for_losing_your_programming#c_omhxsj

                                      Companies know they have their own cultures. A person’s past experiences or levels of aggression (or lack thereof) may be incompatible with the way work is done at a company. It has nothing to do with being old, just with being trained.

                                  3. 2

                                    Then ask yourself, is your IT landscape one with recent commodity frameworks and infrastructure, or is there a lot of in-house software and infrastructure that might take someone half a year, maybe longer to understand and grow into?

                                    Is your employer the kind of place that needs (and retains) ambitious people who shake things up, or do you need people who are willing to maintain legacy software without complaining too much?

                                    If you’re willing to invoke the stereotype of the greybeard who wants to coast until retirement, then you should be fair and invoke the stereotype of the ambitious fresh graduate who rewrites everything in Ruby on Rails, then leaves after three years, leaving the organization with the pieces.

                                    Edit: I should’ve realized that you point out this stereotype to warn people of stereotypes that work against them, not that you’ve internalized this stereotype yourself

                                  1. 7

                                    wondering what’s more disappointing here 1. the quality of discourse in that study 2. the way the EU allocates money on “innovation studies” or 3. their actual recommendations…

                                    1. 3

                                      The arguments in that study all seem to be based on a superficial understanding of press releases. What is “DNA based customization”? What makes “6G” internet so meaningfully different from 5G that it warrants its own era? What applications do we have for the (hypothetical) bandwidth increases it enables?

                                      I think most MEPs have a greater understanding of technology than the “research bureau” (called “Future Candy”) tasked with preparing this document.

                                    1. 5

                                      He describes a more general tendency - the one to apply the expectations raised by well-funded corporate-sponsored open source projects to smaller hobby projects. And to the ones stuck somewhere in between, like Elm.

                                      1. 7

                                        Does anyone know of a good book that treats systemd somewhat comprehensively?

                                        At this point, I’ve decided my feelings about systemd don’t matter; it’s clearly here to stay. System administration is not my day job, so I’ve been able to get by with only minimal and superficial knowledge of it. I’ll have to re-learn a lot of tasks that I knew how to do (or could at least figure out how to do) using the previous init systems.

                                        Whenever I do, I find I’m always googling (or reading man pages) in frustration. It’s a very extensive system. Command names are long and non-obvious. That’s okay. After all, if I managed to learn git, I should be able to learn systemd.

                                        But I’d like to do so at a leisurely pace, instead of scraping documentation together from (frequently severely outdated) blog posts and man pages.

                                        So is there a good book that’s up-to-date? Or is Red Hat certification study material my best hope?

                                        1. 15

                                          At this point, I’ve decided my feelings about systemd don’t matter; it’s clearly here to stay.

                                          That’s not totally clear to me. The two biggest bits of the Linux ecosystem are Android and cloud deployments. Android does not use systemd, it uses its own thing. Cloud deployments increasingly use things like Kubernetes to deploy containers. They may use systemd on the oustide (does containerd depend on systemd yet?) but typically the containers don’t use systemd on the inside. Over time, I expect the things on the outside to be simplified and systemd is an obvious contender to go because it doesn’t add much value in this space.

                                          Systemd is really only dominant in individually managed servers and desktop deployments, neither of which are particularly large or growing parts of the overall ecosystem.

                                          1. 5

                                            Plus, nothing is permanent in the world of tech :). At this point, systemd is mature enough, and deployed widely enough, that I’m beginning to see a bunch of anti-patterns. In my experience, this is when people begin thinking of the next step. Anti-patterns aren’t just a symptom of incompetent users/developers, they’re a sign that a technology is reaching its limits.

                                            A pattern that I’m seeing increasingly often in embedded systems is something that I’ll just go ahead and call the “big init service”, for lack of a better word. Basically, there’s a unit file that runs a shell script, which runs all the application-specific programs (random example from the latest gadget I’ve worked on: a zeromq-based daemon, a logging service – I don’t know why that’s a thing, I was paid just to make the previous daemon stop crashing –, a pseudo-watchdog timer, a license manager, some serial-listening daemon, and a Wayland session in kiosk mode). Basically anything that didn’t have a Yocto package in a base layer so that you could just list it as enabled/disabled at build time.

                                            Being the helpful idiot that I am, I asked one of my customers why those aren’t separate init services (I generally know better, but we had a history together and figured it wouldn’t hurt). They told me they knew it was possible but it was a lot of hassle, and they frequently had to launch some of these programs with various extra flags or environment variables for debugging or development purposes, or tweak various parameters when testing. Plus they were all designed to work together anyway. It was a lot more convenient to be able to start them all at once, stop them all at once, and tweak how that’s done by just changing a few lines in a shell script, than to mess with half a dozen unit files. I offered a few suggestions about how systemd can manage that. Turns out they’d tried each and every one of them, and never managed to get it to work reliably – and figured they’d rather write a clunky script than risk dealing with all sorts of flops in the field.

                                            (Edit: to be clear, I think systemd can actually handle that case pretty well – but, indeed, I guess it is a lot of hassle, especially if you want to ensure it works the same way every time, which is very much important for unattended devices where interactive access isn’t always easy to get).
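
                                            For the curious, the pattern boils down to a single unit along these lines (a minimal sketch; names and paths are made up):

                                            # big-init.service: one unit whose script launches everything
                                            [Unit]
                                            Description=All application daemons
                                            After=network.target

                                            [Service]
                                            Type=simple
                                            # start-all.sh launches the zeromq daemon, logger, watchdog, kiosk, ...
                                            ExecStart=/opt/app/start-all.sh
                                            Restart=on-failure

                                            [Install]
                                            WantedBy=multi-user.target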

                                            1. 1

                                              What advantage does systemd bring in this use case in the first place? It sounds to me that they’re basically bypassing systemd? Or are there other parts that bring value?

                                              1. 1

                                                There aren’t, at least not for them, but at this point it’s so hard to yank it out of (some) systems that they’d rather bypass it.

                                            2. 3

                                              I tend to use containers over VMs specifically because I don’t have to configure a process manager or a log collector or an ssh daemon or host metrics or anything else. More importantly, developers don’t have to know how to do these things, so now they are empowered to own a greater share of the “ops”, and they aren’t bottlenecking on communication and coordination with an ops function.

                                              1. 2

                                              They may use systemd on the outside (does containerd depend on systemd yet?) but typically the containers don’t use systemd on the inside.

                                                You must work in a much more aesthetically pleasing corner of the software world than I do! At this point I’m rarely surprised to find systemd in an Enterprise Container™.

                                                1. 2

                                                  That might be true, but while Gartner might have some say about what kind of work I’m likely to do in the future, it has very little influence on my personal choice of desktop, and the people who maintain open source desktop linux have mostly chosen systemd.

                                                  1. 1

                                                    That might be true, but while Gartner might have some say about what kind of work I’m likely to do in the future, it has very little influence on my personal choice of desktop, and the people who maintain open source desktop linux have mostly chosen systemd.

                                                    I don’t disagree, but I suspect that systemd will make part of the ecosystem increasingly disconnected from everything else. I wouldn’t be at all surprised if Android started to encroach on the desktop Linux market. Android works surprisingly well on devices with keyboards and mice instead of touchscreens and now supports split screen mode, which is what fans of tiling window managers have been telling us all for ages is better than multiple windows. If you use F-Droid, you can install a load of F/OSS apps, including a chroot environment for GNU/Linux things and even an X server so you can run graphical non-Android *NIX apps. There’s even a native OpenOffice port. At the same time, you can run things like MS Office or any of the rest of a huge pile of other supported proprietary software.

                                                    If you’re a hobbyist developer, writing an Android app rather than something for GNOME or KDE gives you a lot more potential users, a lot more potential collaborators, and a set of more marketable skills if you want to eventually move into commercial (open source or proprietary) development. How long do you think GNU/systemd/{GTK/Qt}/{X11/Wayland}/Linux is going to be more popular than Android for desktops?

                                                    1. 1

                                                      That’s an interesting question. Popularity - for me - is not a reason to jump ship to another system, at least not for personal use. You’re a FreeBSD developer; if you’d listened to the cool kids on slashdot back in the day you’d have abandoned that doomed ship a long time ago ;)

                                                      But you’ve made me wonder what my criterium for choosing a platform is. I’ve always told myself it’s about antifeatures (as defined by Benjamin Mako Hill), which is why Android doesn’t appeal to me. But arguably, systemd qualifies as an antifeature, and it nevertheless went on to dominate Linux distributions.

                                                      1. 1

                                                        You’re a FreeBSD developer; if you’d listened to the cool kids on slashdot back in the day you’d have abandoned that doomed ship a long time ago ;)

                                                        In hindsight, that might have been the right call. At the moment, I’m working more on Linux than on FreeBSD. Some things are nicer on one, some on the other (my ideal *NIX would have clone, futex, ZFS, capsicum and jails, for example). If Linux were not GPL’d, I think I’d be working now on bringing the features that I miss from FreeBSD across and give up on FreeBSD entirely. That said, I see Windows and *NIX as legacy systems at this point and I’m more interested in working on the things that will replace them.

                                                  2. 1

                                                    Just you wait a few months, and systemd will also do container orchestration! j/k :)

                                                    1. 1

                                                    Well, there is systemd-nspawn, which is a sorta alternative to LXC. I haven’t tried either yet, so I can’t say anything about their respective qualities.
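
                                                    If you want to poke at it, booting a container from a directory tree is a single command (the path is illustrative):

                                                    sudo systemd-nspawn --boot --directory=/var/lib/machines/test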

                                                  3. 11

                                                    I’ve found the systemd man pages to be somewhat complete and comprehensible. Give man systemd a try and then you can gradually discover other tools by jumping from one to the next in the “see also” sections. It’s not the best approach to learning, but it’s not bad either.

                                                    1. 6

                                                      Exactly. This is what I’ve come to appreciate about systemd. Compared to what Linux had, it’s reasonably consistent, coherent, and thorough. Sure, it makes a few questionable decisions, sure it has had a rocky road at times (I recall my system locking up on shutdown in the early days thanks to something in journald), and sure there are individual parts of it for which I might personally prefer an alternative (runit for process management, cronie, syslog), but taken as a whole I feel so much more comfortable with what we have now. It gives a certain level of polish I’ve always felt lacking in Linux (OpenBSD has this coherent feel too, without systemd, but I can appreciate both approaches). And most importantly, despite the fears, it doesn’t seem to have affected the viability of alternatives. There are still plenty of distributions that avoid systemd entirely.

                                                      I’ve recently been looking for a Linux distribution supported (by default) by cloud providers that feels reasonably clean. I’ve been using OpenBSD but sometimes a Linux just makes certain things easier and I needed the option. Long story short I ended up on Fedora and the one part I really didn’t like was NetworkManager. It felt a bit of a mess with documentation scattered all over, disorganised manpages, and legacy formats (ifcfg-rh) just to confuse things. Then I realised that I could just use systemd-networkd which was well documented, far more simple, easy to configure with familiar syntax, and pretty broadly available (in base installs) in case I ever need to switch. l actually ended up using NetworkManager’s keyfile plugin, but discovering that was a chore, and I just want these things to work.

                                                    2. 6

                                                    One of the early adopters of systemd, Arch Linux, has fine cheat-sheet-like documentation that goes through many things: https://wiki.archlinux.org/index.php/Systemd

                                                      Systemd isn’t so complex that you should be looking at certifications.

                                                      1. 2

                                                      The newest edition of the UNIX and Linux System Administration Handbook has a good chapter about systemd, I believe, but I am not sure it will be as comprehensive as you’d want.

                                                      1. 4

                                                        Something else to consider; the average age of a registered car is 10.7 years in Europe, and Google quotes 11.8 as the average age in the US. Possibly this is skewed somewhat by people collecting classic cars, but for decades, a new car has offered few advantages over a used one.

                                                        PCs have reached a similar level of age indifference; today you can buy a 5 year-old used laptop or pc, and expect to get at least another three years of use out of them. When you buy new, you expect to get at least five years, and I see people around me using laptops for over ten years.

                                                        While there’s a growing market for refurbished phones, all of them seem doomed by the limited number of years Apple and Google will support older models.

                                                        1. 2

                                                          I’m using a number of ~10yo (9 and a half, more, but still…) Android phones almost daily, one of them in its original duty as a phone, others for different purposes - remote-controlled media player, trailer camera, etc. Even though the manufacturer - Motorola - never got beyond Android 2.3.6 they’re all running 4.4.4. One of them doesn’t have a screen (it got broken in some distant past), that is the one in use as a trailer camera. The thing is, these older Android phones are still useable for many purposes, from their original gadgety-communications-device role to those things I mentioned and more, due to the free software nature of Android and Linux.

                                                        With Apple the story is a bit different; they do offer longer support than most Android vendors, but once they drop a model it quickly becomes useless. Some devices can be ‘jailbroken’ and with that their useful life can be extended a bit, but since the size of the hacking community around Apple devices is nothing compared to that around Android, it takes a lot more effort to get things done. Seen as curves, the Android ‘usability’ curve starts going down earlier than the Apple one, but once Apple drops support their curve quickly sinks below that of Android devices of similar vintage. In both cases it takes a bit of hacking to extend the useful life, more in the case of Apple hardware.

                                                          1. 1

                                                          With Apple the story is a bit different; they do offer longer support than most Android vendors, but once they drop a model it quickly becomes useless.

                                                          How did you come to the conclusion that Apple devices are rendered useless after support is dropped?

                                                            1. 1

                                                              Depends on your use, I suppose.

                                                              My ipad quickly became useless for my use because I needed to install or upgrade apps to evaluate [… digression elided], and that quickly started demanding newer ios versions. If your use is to keep running and using the apps you already have, nothing bad will happen, AIUI.

                                                              1. 1

                                                              So basically the same as with Android? I don’t see the difference between the two platforms that the comment implies.

                                                                1. 1

                                                                  The big difference is that with many Android devices there are AOSP-derived distributions which can be used to keep the device up to date once vendor-supported updates have ceased.

                                                                  1. 1

                                                                    No, not basically the same. The same in principle. The key word is quickly.

                                                                    Apple is good about providing upgrades and coercing users to upgrade, and the flip side is that app developers feel free to drop support for old versions quickly. Being two or three versions behind on an ios device limits your app selection much more than being two or three versions behind on android device.

                                                                  2. 1

                                                                    The biggest reason that the old iPads are “useless” today is that today’s apps use too much RAM and CPU - something a new OS version isn’t going to solve. When today’s latest iPads are five years old, this is likely going to be less of a problem since performance increases aren’t as huge any longer, but for the first five years or so of iPads this is the biggest limiting factor. IMHO.

                                                                2. 1

                                                                  I don’t doubt that they’re useful for other purposes, and you’re probably right that we should be making better use of them. But personally I don’t like the idea of using an internet-connected device that’s limited to a seven year-old operating system.

                                                                  1. 1

                                                                    The thing is, they’re not limited to whatever version of Android the device is left with when the vendor ceases to support it. Those AOSP-derived distributions can take it along for the ride more or less until the hardware can no longer support the newest version, e.g. because of the 32/64 bit shift. The Galaxy SIIIneo which I mentioned was left by Samsung at Android 4.4.4, it currently runs Android 9 through LineageOS. It gets weekly OTA updates, the latest was on the 20th of April. As long as these projects support those devices they will stay up to date. They are supported until there is not enough interest from developers, which again depends on the number of users who want to keep those devices in use. There are some hard limits on support like the mentioned 32/64 bit shift, others are a lack of driver support for those platforms which rely on closed-source blobs, hardware capacity limits (memory, GPU, SoC) being exceeded by newer versions of the operating system, etc.

                                                                3. 2

                                                                  Something else to consider; the average age of a registered car is 10.7 years in Europe, and Google quotes 11.8 as the average age in the US. Possibly this is skewed somewhat by people collecting classic cars, but for decades, a new car has offered few advantages over a used one.

                                                                  Similar to cars, a lot of the advantages are in the realm of safety and security features you don’t want to become important. A 2020 Accord has measurable improvements in structural safety components over a 2010 Accord, and a 2020 iPhone has security features that 2017 iPhones don’t have the silicon to support.

                                                                1. 0

                                                                  This is ridiculous. The iPhone SE is not a commodity. If you want to compare it to the automotive industry, it’s an entry level Porsche.

                                                              It starts at a price high enough that most people can’t afford it as a superfluous expense such as a smartphone.

                                                                  It might be cheap from the perspective of a software engineer making good income and having high job security, but that’s not most people. For it to be a commodity, shouldn’t most people be able to afford it easily?

                                                                  1. 2

                                                                If you only consider Apple devices, then it’s kinda cheap. Total cost of ownership is important too – I’ll gladly pay a bit more for a phone that lasts longer. Apple’s commitment to recycling is important as well.

                                                                    1. 1

                                                                  I understand that total cost of ownership and recycling are different stories. But I know many people with lower incomes (or no interest in technology) that will do longer with an $80 phone than most iPhone users. I will also bet you $1000 that if we pick 100 random iPhone owners, no more than 3 will cite recycling as a reason for buying Apple on their own.

                                                                  Your argument about only considering Apple can easily be countered by extending my analogy to “unless you’re only considering Porsche”.

                                                                      1. 1

                                                                        Apple devices only last longer in the sense of Apple supporting the hardware for a longer period of time. Android vendors often only offer 2-3 years of support but the slack is often picked up by AOSP-derived Android distributions, extending the useable lifetime way beyond those 2-3 years. Distributions like LineageOS offer OTA updates which makes them just as useable by non-technical people as stock distributions. There’s a Samsung Galaxy Tab 3 from 2012 and a Galaxy SIIIneo from 2014 in use here which are running LineageOS with OTA updates, after I installed the original image I have not had to do anything other than to press ‘OK’ to keep them up to date. Battery life is still OK, the SIIIneo lasts for about 2 days of use as a phone and media (music/netcast) player, the Tab 3 has a standby life close to 3 weeks, screen-on about 7 hours. Not bad for an 8 year-old device which never had its battery replaced. The battery in the SIIIneo can be changed in a few seconds if needed so even if it were to go dead I could replace it easily.

                                                                      2. 1

                                                                    I feel your analogy is a bit wrong. To me, the SE isn’t entry-level premium; it’s a middle-class phone. You don’t have premium features at all here.

                                                                        1. 1

                                                                          A commodity is something made to a standardized quality that can be traded in bulk. Manufacturers then don’t have to worry what well their oil came from, or what farm grew their potatoes. Conceivably, you could apply this label to the affordable Android phones - they all use similar chipsets, have similar capabilities, and run the same OS; the logo on the outside matters very little.

                                                                        1. 43

                                                                        I hope this makes more businesses realize that people can work from home 90% of the time for a great many positions. The amount of time saved, gas saved, and stress saved is immense… not to mention the amount saved on office space and associated costs.

                                                                          1. 6

                                                                            I’ve been working from home for over a week and I’ve been much happier.

                                                                          I just need to go for a walk around my neighborhood each day to at least leave the house. I never go for a walk when I go to the office. It’s nice; I went around and took some photos on my Nikon FE2 today (I’ve been getting back into film recently).

                                                                            1. 7

                                                                            I got a dog to force me to get out every day, and it’s rewarding in many ways.

                                                                            2. 3

                                                                              I also hope this could be the case, but I think there’s also a possibility that it could have the opposite effect, owing to:

                                                                              1. Rushing into it without time to prepare and test remote-working infrastructure.
                                                                              2. Being forced to suddenly go all in, rather than easing themselves into it gradually by initially having some people working from home some of the time.

                                                                              If a company experiences problems because of it, they might be more likely to dismiss the possibility in future.

                                                                              1. 3

                                                                                Bram Cohen has a good Twitter thread about this - https://twitter.com/bramcohen/status/1235291382299926529 - “My office went full remote starting the beginning of this week related to covid-19 … This isn’t out of fear that going in to work is dangerous. It isn’t, at least not yet. It’s out of concern for not spreading disease and erring on the side of going full remote sooner rather than later.” Making sure you can strikes me as a good idea.

                                                                              2. 7

                                                                                If only I could work at McDonald’s from home. Sure would be nice if I could just receive a case of patties in the mail, cook them up, and mail them out. They have enough preservatives that it wouldn’t be an issue, right?

                                                                                1. 10

                                                                                  There’s something that resonates about this. I wonder if these companies also encourage their data center engineers to work from home. Or even their cleaning and cafeteria staff. ‘Working from home’ requires an economic infrastructure that we expect to keep working, even though it requires people not to ‘work from home’.

                                                                                  I’m absolutely sympathetic to the argument that not packing people together in tight spaces might, if we’re lucky, limit the spread of the virus. Maybe this is the wrong moment to wonder about the classist aspects of this.

                                                                                  1. 31

                                                                                    I think the idea of restricting workplace interaction gets a bit muddled in transmission.

                                                                                    A pandemic of this kind is almost impossible to stop absent draconian quarantine practices.

                                                                                    The point of getting (some) people not to take public transport, go to restaurants, hang out around the water cooler etc. is not to ensure that those people don’t get sick. A certain percentage of them will get sick, no matter what. The point is to slow the transmission, to flatten the curve of new illnesses, so that the existing care infrastructure can handle the inevitable illness cases without being overwhelmed.

                                                                                    1. 6

                                                                                      I’m not sure what is there about class. There are white collar jobs that can’t be remote, like doctors. And there are some blue collar ones that can, like customer support by phone.

                                                                                      1. 11

                                                                                        There are always exceptions. But in general, “knowledge work” is both paid higher, and also allows the employee greater flexibility in choosing their place of work.

                                                                                        1. 2

                                                                                          Agreed with this take, yes.

                                                                                        2. 8

                                                                                          Many middle class jobs in the United States provide very few paid sick days, let alone jobs held by the working class. Paid sick leave is a rarity for part time jobs.

                                                                                          People who hold multiple part time jobs to survive will face the choice of going to work while sick or self-isolating and losing their income.

                                                                                          There’s absolutely a class component to consider, especially in America where social safety nets are especially weak.

                                                                                          1. 1

                                                                                            While it’s true that not all doctors can work remotely and others can’t all the time, telemedicine is a significant and growing part of the medical profession. Turns out there’s a lot of medicine that does not require in-person presence.

                                                                                        3. 0

                                                                                          What do you think this comment possibly adds to the conversation except being snide?

                                                                                      1. 4

                                                                                        Main Workstation

                                                                                        • OS: 64bit Mac OS X 10.13.3c 17D47
                                                                                        • Kernel: x86_64 Darwin 17.4.0
                                                                                        • Shell: zsh 5.6.2
                                                                                        • Resolution: 3440x1440 | 3440x1440 | 1920x1080
                                                                                        • CPU: Intel Core i7-7700K @ 4.20GHz
                                                                                        • GPU: MSI VGA Graphic Cards RX 580 ARMOR 8G OC
                                                                                        • RAM: 64GB
                                                                                        • Keyboard: Redragon Kumara with Cherry Reds
                                                                                        • Mouse: Corsair Harpoon RGB
                                                                                        1. 4

                                                                                          Not sure where to look. I get itchy having more than 5 tabs open, let alone that many screens begging for my attention. Kudos to you for being able to handle all that.

                                                                                          1. 2

                                                                                            I’m with you; most of the time I’m fine hacking on my little MacBook Air, and I generally use things in full screen mode.

                                                                                            • Work: MBP 15”
                                                                                            • Personal Travel/Bumming Around: MBA (Retina 2019)
                                                                                            • Personal Home: MBP 15” (2012)

                                                                                            my home machine is the only one with two monitors, and they are:

                                                                                            • a 25” that only displays code
                                                                                            • the 15” that only displays Slack & Chrome
                                                                                          2. 2

                                                                                            I’m actually most impressed by the soundproofing setup you have, although I get the impression that it’s meant to kill echos for videoconferencing, rather than mute outside noises, right?

                                                                                            1. 1

                                                                                          Correct, the setup is meant to kill echoes. Soundproofing to mute outside noise is significantly more difficult and costly, but I eventually want to get there.

                                                                                          1. 3

                                                                                            I use zsh

                                                                                            PS1='%n@%m %2d%% '    # user@host, last two directory components, then a literal %

                                                                                            RPROMPT="%* %W"       # right side: current time (%*) and date (%W)
                                                                                            

                                                                                            I don’t update my environment very much; I’ve taken my basic UI (Window Maker, zsh, xterm, Emacs, a few other things) with me across multiple Linux distros. It’s old enough that I’m pretty sure it predates emoji support in widely-available terminal fonts, for example.

                                                                                            1. 1
                                                                                              # command substitution yields the short hostname; %# prints '#' for root, '%' otherwise
                                                                                              PS1="`hostname | awk -F . '{print $1}'`%# ";
                                                                                              # right-hand prompt: %d = current working directory
                                                                                              RPS1='%d';
                                                                                              

                                                                                              I think I last modified mine in 2003 or something? I remember choosing zsh at the time for the simple reason that it supported right-aligned prompts. I’m not sure if the hostname-into-awk nonsense predates the %m escape code, or if I simply was unaware of it at the time.

                                                                                            1. 2

                                                                                              It’s articles like this that make me question the viability of remote work. These big, savvy companies, who could theoretically grow their workforce almost anywhere in the world, seem to be willing to pay a premium for engineers who are located not just in the US, but a specific area of the US.

                                                                                              If remote software developers were truly as effective as co-located developers, you’d expect salaries to even out. Instead, there are differences of an order of magnitude, seemingly mostly based on location.

                                                                                              1. 2

                                                                                                It’s hard to be an effective remote engineer. Some people can do it but it’s not for everybody.

                                                                                                1. 1

                                                                                                  There is a group of people (even more vocal in my country, I think) who argue that remote work is the future, and it’s true that remote work, aided by the availability of broadband, has been growing. And it’s a very attractive idea.

                                                                                                  On the opposite side of that argument, the component of people’s compensation that is location-based seems to be increasing, not decreasing. This is assuming that fresh grads in SV are at least somewhat comparable to fresh grads elsewhere.

                                                                                                  1. 1

                                                                                                    Remote work is growing precisely because it’s so much cheaper to hire engineers in places outside the Bay Area; the VCs, however, are still concentrated there.

                                                                                              1. 11

                                                                                                It looks to still be maintained today, unlike most of the programs in this list. Conspicuously absent is xfig, an easy-to-use vector image editor. I used it for a bunch of projects before Inkscape rolled into town. It looks to still be maintained today, unlike most of the programs in this list.

                                                                                                1. 2

                                                                                                  xfig also has one of the few implementations of x-splines (x means “cross” here, like “pedestrian xing”, unrelated to the X window system). I find x-splines very nice and intuitive.

                                                                                                  Here’s a little x-spline implementation I made:

                                                                                                  https://jordi.platinum.edu.pl/xsplines/splines.html

                                                                                                  1. 1

                                                                                                    It was easy to learn because at each step, it showed an explanation of what would happen if you clicked the left, right, or middle button. It was a very simple affordance that few applications since have copied.

                                                                                                    I used it long after ‘better’ tools became available. It was ridiculously easy for making diagrams.

                                                                                                  1. 1

                                                                                                    There seems to be a belief amongst memory safety advocates that it is not just one of many ways in which software can fail, but the most critical one in existence today, and that, if programmers can’t be convinced to switch languages, maybe management can be made to force them.

                                                                                                    I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right, but I’m trying to understand it. The quoted statistics about found vulnerabilities seem unconvincing, and are just as likely to indicate that static analysis tools have made these kinds of programming errors easy to find in existing codebases.

                                                                                                    1. 19

                                                                                                      Not all vulnerabilities are equal. I prioritize those that give attackers full control over my computer. They’re the worst: they can lead to every other problem, and the attackers’ rootkits or damage might not let you have the machine back. You can lose the physical property, too. Alex’s field evidence shows memory unsafety causes around 70-80% of this. So, when worrying about hackers hitting native code, it’s rational to spend 70-80% of one’s effort eliminating memory unsafety.

                                                                                                      More damning is that languages such as Go and D make it easy to write high-performance, maintainable code that’s also memory safe. Go is easier to learn, with a huge ecosystem behind it, too. Ancient Java being 10-15x slower than C++ made for a good reason not to use it. Now, most apps are bloated/slow, the market uses them anyway, some safe languages are really lean/fast, using them brings those advantages, and so there’s little reason left for memory-unsafe languages. Even in intended use cases, one can often use a mix of memory-safe and -unsafe languages, with unsafe used on the performance-sensitive or lowest-level parts of the system. Moreover, safer languages such as Ada and Rust give you guarantees by default on much of that code, allowing you to selectively turn them off only where necessary.

                                                                                                      If you’re using unsafe languages and have money, there are also tools that automatically eliminate most of the memory unsafety bugs. That companies pulling in 8-9 digits still have piles of them shows total negligence. Same with those in open-source development who aren’t doing much better. So, on that side of things, whatever tool you encourage should lead to memory safety even with apathetic, incompetent, or rushed developers working on code with complex interactions. Doubly true if it’s multi-threaded and/or distributed. A safe, orderly-by-default setup will prevent loads of inevitable problems.

                                                                                                      1. 13

                                                                                                        The quoted statistics about found vulnerabilities seem unconvincing

                                                                                                        If studies by the security teams at Microsoft and Google, and analysis of Apple’s software, are not enough for you, then I don’t know what else could convince you.

                                                                                                        These companies have huge incentives to prevent exploitable vulnerabilities in their software. They get the best developers they can, they are pouring many millions of dollars into preventing these kinds of bugs, and still regularly ship software with vulnerabilities caused by memory unsafety.

                                                                                                        The “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                                        1. 3

                                                                                                          The “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                                          No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                          What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                          I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                          1. 9

                                                                                                            No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                            What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                            The principal cost of memory safety in Rust, IMO, is that the set of valid programs is more heavily constrained. You often hear this manifest as “fighting with the borrow checker.” This is definitely an impediment. I think a large portion of folks get past this stage, in the sense that “fighting the borrow checker” is, for the most part, a temporary hurdle. But there are undoubtedly certain classes of programs that Rust will make harder to write, even for Rust experts.
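
                                                                                                            To make “fighting the borrow checker” concrete, here is a minimal, purely illustrative sketch of a program rustc rejects (the equivalent C would compile fine and could corrupt memory at runtime):

                                                                                                            fn main() {
                                                                                                                let mut v = vec![1, 2, 3];
                                                                                                                let first = &v[0]; // shared borrow of `v` begins here
                                                                                                                v.push(4); // rejected: push may reallocate `v`, leaving `first` dangling
                                                                                                                println!("{}", first); // the shared borrow is still in use here
                                                                                                            }

                                                                                                            The compiler refuses this with “cannot borrow `v` as mutable because it is also borrowed as immutable”; delete the final println! and the borrow ends earlier, so the program compiles.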

                                                                                                            Like all trade-offs, the hope is that the juice is worth the squeeze. That’s why a lot of effort has been put into making Rust easier to use, and into producing good error messages.

                                                                                                            I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                            I’ve seen people ask this before, and my response is always, “what hypothetical study would actually convince you?” If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                            IMO, the most effective way to show this is probably to reason about vulnerabilities due to memory safety in aggregate. But to do that, you need a large corpus of software written in Rust that is also widely used. But even this methodology is not without its flaws.

                                                                                                            1. 2

                                                                                                              If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                              That’s true - but my comment was in response to one claiming that the bug surveys published by Microsoft et al should be convincing.

                                                                                                              I could imagine something similar being done with large Rust code bases in a few years, perhaps.

                                                                                                              I don’t have enough Rust experience to have a good intuition on this so the following is just an example. I have lots of C++ experience with large code bases that have been maintained over many years by large teams. I believe that C++ makes it harder to write correct software: not (just) because of memory safety issues, undefined behavior etc. but also because the language is so large, complex and surprising. It is possible to write good C++ but it is hard to maintain it over time. For that reason, I have usually promoted C rather than C++ where there has been a choice.

                                                                                                              That was a bit long-winded but the point I was trying to make is that languages can encourage or discourage different classes of bugs. C and C++ have the same memory safety and undefined behavior issues but one is more likely than the other to engender other bugs.

                                                                                                              It is possible that Rust is like C++, i.e. that its complexity encourages other bugs even as its borrow checker prevents memory safety bugs. (I am not now saying that is true, just raising the possibility.)

                                                                                                              This sort of consideration does not seem to come up very often when people claim that Rust is obviously better than C for operating systems, for example. I would love to read an article that takes this sort of thing into account - written by someone with more relevant experience than me!

                                                                                                              1. 7

                                                                                                                I’ve been writing Rust for over 4 years (after more than a decade of C), and in my experience:

                                                                                                                • For me Rust has completely eliminated memory unsafety bugs. I don’t even use debuggers or Valgrind any more, unless I’m integrating Rust with C.
                                                                                                                • I used to have, at least during development, all kinds of bugs that spray the heap, corrupt some data somewhere, use uninitialized memory, use-after-free. Now I get compile-time errors or panics (which are safe, technically like C++ exceptions).
                                                                                                                • I get fewer bugs overall. Lack of NULL and mandatory error handling are amazing for reliability.
                                                                                                                • Built-in unit test framework, richer standard library and easy access to 3rd party dependencies help too (e.g. instead of hand-rolling another own buggy hash table, I use a well-tested well-optimized one).
                                                                                                                • My Rust programs are much faster. Single-threaded Rust is 95% as fast as single-threaded C, but I can easily parallelize way more than I’d ever dare in C (see the sketch after this list).
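
                                                                                                                A minimal sketch of that kind of parallelization, assuming the rayon crate (rayon is an illustrative choice; the comment above names no library):

                                                                                                                use rayon::prelude::*;

                                                                                                                fn sum_of_squares(input: &[i64]) -> i64 {
                                                                                                                    // Serial version: input.iter().map(|&x| x * x).sum()
                                                                                                                    // Swapping `iter` for `par_iter` spreads the work across a thread pool;
                                                                                                                    // a data race here would be a compile-time error, not a runtime heisenbug.
                                                                                                                    input.par_iter().map(|&x| x * x).sum()
                                                                                                                }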

                                                                                                                The costs:

                                                                                                                • Rust’s compile times are not nice.
                                                                                                                • It took me a while to become productive in Rust. “Getting” ownership requires unlearning C and a lot of practice. However, I’m not fighting the borrow checker any more, and I’m more productive in Rust thanks to higher-level abstractions (e.g. I can write a map/reduce iterator that collects something into a btree — in 1 line; see the sketch below).
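
                                                                                                                A minimal sketch of that kind of one-liner (the data and names are illustrative): map over an iterator and collect straight into a BTreeMap:

                                                                                                                use std::collections::BTreeMap;

                                                                                                                fn main() {
                                                                                                                    let words = vec!["pear", "apple", "plum"];
                                                                                                                    // The one-liner: index by length; later entries overwrite earlier ones.
                                                                                                                    let by_len: BTreeMap<usize, &str> = words.iter().map(|&w| (w.len(), w)).collect();
                                                                                                                    println!("{:?}", by_len); // {4: "plum", 5: "apple"}
                                                                                                                }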
                                                                                                          2. 0

                                                                                                            Of course older software, mostly written in memory-unsafe languages and sometimes written in a time when not every device was connected to a network, contains more known memory vulnerabilities, especially when it’s maintained and audited by companies with excellent security teams.

                                                                                                            These statistics don’t say much at all about the overall state of our software landscape. It doesn’t say anything about the relative quality of memory-unsafe codebases versus memory-safe codebases. It also doesn’t say anything about the relative sizes of memory-safe and memory-unsafe codebases on the internet.

                                                                                                            1. 10

                                                                                                              iOS and Android aren’t “older software”. They’ve been born to be networked, and supposedly secure, from the start.

                                                                                                              Memory-safe codebases have 0% memory-unsafety vulnerabilities, so that is easily comparable. For example, check out the CVE database. Even within one project — Android — you can easily see whether the C or the Java layers are responsible for the vulnerabilities (spoiler: it’s C, by far). There’s a ton of data on all of this.

                                                                                                              1. 2

                                                                                                                Android is largely cobbled together from older software, as is iOS. I think Android still needs a Fortran compiler to build some dependencies.

                                                                                                                1. 9

                                                                                                                  That starts to look like a No True Scotsman. When real-world C codebases have vulnerabilities, they’re somehow not proper C codebases. Even when they’re part of flagship products of top software companies.

                                                                                                                  1. 2

                                                                                                                    I’m actually not arguing that good programmers are able to write memory-safe code in unsafe languages. I’m arguing that vulnerabilities happen at all levels in programming, and that, while memory safety bugs are terrible, there are common classes of bugs in more widely used (and, more importantly, more widely deployed) languages that make memory unsafety just one class of bugs out of many.

                                                                                                                    When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                                    We’d have reached some sort of conclusion earlier if you’d argued with the point I was making rather than with the point you wanted me to make.

                                                                                                                    1. 4

                                                                                                                      When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                                      Actually, we did. Sites/companies that solved XSS did so by banning generation of markup “by hand”, and instead mandated use of safe-by-default template engines (e.g. JSX). Same with SQL injection: years of saying “be careful, remember to escape” didn’t work, and “always use prepared statements” worked.
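
                                                                                                                      To make the second half concrete, a minimal sketch of the “always use prepared statements” rule, using the rusqlite crate as an illustrative SQLite driver (the crate choice, table, and names are assumptions, not anything from the thread):

                                                                                                                      use rusqlite::{params, Connection, Result};

                                                                                                                      fn find_user_id(conn: &Connection, name: &str) -> Result<i64> {
                                                                                                                          // Vulnerable pattern: building SQL out of user input, e.g.
                                                                                                                          //   format!("SELECT id FROM users WHERE name = '{}'", name)
                                                                                                                          // A name like  '; DROP TABLE users; --  then becomes part of the query.
                                                                                                                          // With a prepared statement, `name` stays data and is never parsed as SQL:
                                                                                                                          conn.query_row(
                                                                                                                              "SELECT id FROM users WHERE name = ?1",
                                                                                                                              params![name],
                                                                                                                              |row| row.get(0),
                                                                                                                          )
                                                                                                                      }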

                                                                                                                      These classes of bugs are prevalent only where developers think they’re not a problem (e.g. they’ve always been writing pure PHP, and will continue to write pure PHP forever, because there’s nothing wrong with it, apart from the XSS and SQLi, which are a force of nature and can’t be avoided).

                                                                                                                      1. 1

                                                                                                                        This kind of makes me think of someone hearing others talk about trying to lower the murder rate, and then hysterically going into a rant about how murder is only one class of crime.

                                                                                                                        1. -1

                                                                                                                          I think a better analogy is campaigning aggressively to ban automatic rifles when the vast majority of murders are committed using handguns.

                                                                                                                          Yes, automatic rifles are terrible. But pointing them out as the main culprit behind the high murder rate is also incorrect.

                                                                                                                          1. 4

                                                                                                                            That analogy is really terrible and absolutely does not fit the context here. It’s also very skewed; the murder rate is not the reason for the calls for bans.

                                                                                                                      2. 2

                                                                                                                        Although I mostly agree, I’ll note Android was originally built by a small business acquired by Google, which continued to work on it, probably with extra resources from Google. That makes me picture a “move fast and break things” kind of operation that was probably throwing pre-existing stuff together with its own as quickly as possible to get the job done (aka working phones, market share).

                                                                                                                    2. 0

                                                                                                                      Yes, if you zoom in on code bases written in memory-unsafe languages, you unsurprisingly get a large number of memory-unsafety vulnerabilities.

                                                                                                                      1. 12

                                                                                                                        And that’s exactly what illustrates “eliminates a class of bugs”. We’re not saying that we’ll end up in utopia. We just don’t need that class of bugs anymore.

                                                                                                                        1. 1

                                                                                                                          Correct, but the author is arguing that this is an exceptionally grievous class of security bugs, and (in another article) that developers’ judgement should not be trusted on this matter.

                                                                                                                          Today, the vast majority of new code is written for a platform where execution of untrusted memory-safe code is a core feature, and the safety of that platform relies on a stack of sandboxes written mostly in C++ (browser) and Objective-C/C++/C (system libraries and kernel).

                                                                                                                          Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                          What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                          1. 11

                                                                                                                            Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                            Hm, so. Apple has developed Swift, which is generally considered a systems programming language, to replace Objective-C, which was their main programming language and already had safety features like baked-in ARC. Google has implemented Go; Mozilla, Rust. Google uses tons of Rust in Fuchsia and has recently imported the Rust compiler into the Android source tree.

                                                                                                                            Microsoft has recently been blogging about Rust quite a lot, including about how severe memory problems are for its security story. Before that, Microsoft spent tons of engineering effort on Haskell as a research base and on C#/.NET as a replacement for its C/C++ APIs.

                                                                                                                            Amazon has implemented Firecracker in Rust and bragged about it in their AWS keynote.

                                                                                                                            Come again about “dipping toes”? Yes, there’s huge amounts of stack around, but there’s also huge amounts to be written!

                                                                                                                            What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                            Because it’s always been a crisis and now we have the tech to fix it.

                                                                                                                            P.S.: In case this felt a bit like bragging about Rust over the others: it’s just where I’m most aware of things happening. Go and Swift are doing fine, I just don’t follow them as much.

                                                                                                                            1. 2

                                                                                                                              The same argument was made for Java, which, on top of its memory safety, was presented as a pry bar against the nearly complete market dominance of the Wintel platform at the time. Java evangelism managed to convert new programmers - and universities - to Java, but not the entire world.

                                                                                                                              Oracle’s deadly embrace of Java didn’t move it to rewrite its main cash cow in Java.

                                                                                                                              Rust evangelists should ask themselves why.

                                                                                                                              I think that of all the memory-safe languages, Microsoft’s C++/CLI effort comes closest to understanding what needs to be done to entice coders to move their software into a memory-safe environment.

                                                                                                                              At my day job, I actually try to spend my discretionary time trying to move our existing codebase to a memory-safe language. It’s mostly about moving the pieces into place so that green-field software can seamlessly communicate with our existing infrastructure. Then seeing what parts of our networking code can be replaced, slowly reinforcing the outer layers while the inner core remains memory unsafe.

                                                                                                                              Delicate stuff, not something you want the VP of Engineering to issue edicts about. In the meantime, I’m still a C++ programmer, and I really don’t appreciate this kind of article painting a big target on my back.

                                                                                                                              1. 4

                                                                                                                                Java and Rust are vastly different ballparks for what you describe. And yet, Java is used successfully in the database world, so it is definitely to be considered. The whole search-engine database world is full of Java stacks.

                                                                                                                                Oracle didn’t rewrite its cash cow because, yes, they are risk-averse and that’s reasonable. That’s no statement on the tech they write it in. But they did write tons of Java stacks around Oracle DB.

                                                                                                                                It’s an argument on the level of “Why isn’t everything at Google Go now?” or “Why isn’t Apple using Swift for everything?”.

                                                                                                                                1. 2

                                                                                                                                  Looking at https://news.ycombinator.com/item?id=18442941 it seems that it was too late for a rewrite when Java matured.

                                                                                                                              2. 8

                                                                                                                                What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                                To start the multi-decade effort now, and not spend more decades just saying that buffer overflows are fine, or that—despite 40 years of evidence to the contrary—programmers can just avoid causing them.

                                                                                                                    3. 9

                                                                                                                      I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right

                                                                                                                      You didn’t? SQL injections are still #1 in the OWASP top 10. PHP had to retrain an entire generation of engineers to use mysql_real_escape_string over vulnerable alternatives. I could go on…

                                                                                                                      I think we have internalized the SQL injection arguments, but have still not accepted the memory safety arguments.

                                                                                                                      1. 3

                                                                                                                        I remember arguments being presented to other programmers. This article (and another one I remembered, which, as it turns out, is written by the same author: https://www.vice.com/en_us/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it ) explicitly targets the layperson.

                                                                                                                        The articles use the language of whistleblowers. They suggest that counter-arguments are made in bad faith, that developers are trying to hide this ‘dirty secret’. Consider that C/C++ programmers skew older and have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                                                        Arguments aimed at programmers, like this one, at least acknowledge the counter-arguments, and frame the discussion as one of industry maturity, which I think is correct.

                                                                                                                        1. 2

                                                                                                                          I do not see it as bad faith. There are a non-zero number of people who say they can write memory-safe C++, despite there being a massive amount of evidence that even the best programmers get tripped up by UB and threads.

                                                                                                                          1. 1

                                                                                                                            Consider that C/C++ programmers skew older and have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                                                            There’s an argument to be made that the resurging interest in systems programming languages through Rust, Swift and Go futureproofs experience in those areas.

                                                                                                                        2. 5

                                                                                                                          Memory safety advocate here. It is the most pressing issue because it invokes undefined behavior. At that point, your program is entirely meaningless and might do anything. Security issues can still be introduced without memory unsafety of course, but you can at least reason about them, determine the scope of impact, etc.

                                                                                                                        1. 10

                                                                                                                          Just for comparison - here (CZ, in the EU) a company is required to pay you for the period when such an agreement is in effect and prevents you from performing a competing job.

                                                                                                                          1. 6

                                                                                                                            He mentions that in the post. To my understanding, that’s even EU-wide (edit: see below, it is not). (It definitely is in Germany.)

                                                                                                                            The reasoning in Germany is definitely that employee mobility is seen as a good thing and therefore strongly protected.

                                                                                                                            1. 2

                                                                                                                              It’s not EU-wide. In the Netherlands non-compete clauses are valid (and quite generic and common, even, particularly in tech). A judge can overturn it if it prevents you from finding work at all, but there’s no guarantee you’ll get paid.

                                                                                                                              One of the worst rumours I’ve heard was about a company where people left not because they had better offers elsewhere, but because working conditions were shitty. Management threatened to invoke the non-compete to keep them from abandoning ship altogether. I wonder how much of that was true, and how it went.

                                                                                                                              1. 2

                                                                                                                                Right, thanks!

                                                                                                                                Now that you mention it, I know of at least one Netherlands company trying to enforce a non-compete on an employee of its German subsidiary, and being confused when he insisted on the conditions mandated by law.

                                                                                                                          1. 5

                                                                                                                            There was exactly one moment when I was willing to consider the performance of that code, and it was the moment my shell command stuttered.

                                                                                                                            If you can dismiss the issue at your whim, maybe the performance of the code wasn’t that relevant in the first place? I have trouble understanding this. If my browser decides to do some garbage collection right when I’m compiling, that’s likely to have vastly more impact on compilation speed than one unit test that’s slightly slower. I can make a change that adds 50ms to a 100ms function. Or I can make a change that turns a 0.1ms function into a 1ms function; the latter is undetectable using this method, but a much worse regression.

                                                                                                                            Of course, Go has a pretty good built-in system for collecting objective, reproducible data about performance regressions (benchmarks in the testing package, run with go test -bench). But you have to decide upfront that code needs to be benchmarked, and how it needs to be done.

                                                                                                                            I know that for most software, desired performance isn’t specified, so code has a natural tendency to get slower over time. But there has to be a better way to stop this than relying on compiler interactivity.