1. 5

    It is not the databases that ruin good ideas. It is the fear of changing a vital component of the production system that does. Sometimes even a database migration feels like a heart transplant, and the powers that be require zero downtime. Therein lies the fear, the anxiety, and the killer of any idea.

    1. 4

      Zero (intentional) downtime is a much more expensive requirement than I think people give it credit for, mostly because it’s usually more of a death-by-a-thousand-cuts kind of expensive than a put-a-big-project-on-the-roadmap kind of expensive.

      In some cases it’s a legit requirement, but I suspect if the ongoing cost were more obvious, it’d be a much less frequent requirement.

      1. 3

        Zero (intentional) downtime is a much more expensive requirement than I think people give it credit for

        I think if folks don’t understand this, they should look at the uptime of their own systems as a start. While I don’t have data to back this up, I feel comfortable saying that the uptime/effort curve is logarithmic: past a certain point, only an exponential increase in effort will get you a linear increase in uptime. Knowing that, you should be able to look at a system you run and come to a rough idea of how much effort it would take to scale its uptime.

        1. 3

          I think that is totally true if you’re talking about minimizing unintentional outages, but it seems much less clear to me that it applies to intentional ones, if only because it’s possible to actually get all the way to zero intentional downtime without infinite effort.

          One can argue that there’s no good reason to distinguish the two, that downtime is downtime whether it’s scheduled or not. But I think in some contexts there’s a meaningful difference between, “Our service will be unavailable from 2AM to 4AM on Thursday of next week,” and, “Oops, something broke all of a sudden and it took us two hours to recover.”

          1. 1

            Fair enough. I work in a context where we have guarantees around uptime, planned or unplanned, which is where my comment came from. But yes, I do think it’s a lot easier to drive intentional downtime to zero.

        2. 1

          I think there are definitely ways to mitigate issues with database change in particular:

          1. Just use your error budget. It’s there to be used.
           2. Make a new database with the new schema, deploy the system writing to both, run a migration process behind the scenes, then turn off the original database. Have some error budget in the bank for a rollback. (A rough sketch of this dual-write step follows the list.)
          3. Build tolerance into the system interfaces to be able to deal with the old and new scenario. This is what we do at Google constantly, we just build out a protocol buffer with more and more fields and mark the deprecated fields as such, then file bugs to get rid of those fields from the code when we can.
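
          As a rough sketch of option 2, the dual-write window can be a thin wrapper over both stores. This is a minimal Rust illustration, not a definitive recipe; the `Store` trait and error type are made up for the example:

          ```rust
          // Hypothetical storage abstraction standing in for real DB clients.
          trait Store {
              fn write(&self, key: &str, value: &str) -> Result<(), String>;
          }

          // During the migration window, writes go to both databases while
          // reads stay on the old one until the backfill completes.
          struct DualWriter<A: Store, B: Store> {
              old: A,
              new: B,
          }

          impl<A: Store, B: Store> Store for DualWriter<A, B> {
              fn write(&self, key: &str, value: &str) -> Result<(), String> {
                  self.old.write(key, value)?; // old store stays authoritative
                  self.new.write(key, value) // roll back by simply dropping the new store
              }
          }
          ```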

          State doesn’t have to be scary, especially with databases. I worry about things like corrupt files, cascading failures and security holes. Basically anything that requires specialist knowledge and foresight, which no one can have all of.

          1. 3

            Oh, it is absolutely possible to mitigate the risks. But everything you mentioned requires additional work compared to a “shut the whole system down, upgrade all of the components, start it up again” model.

            Google easily falls into the “sometimes it’s worth the cost” category, of course. Taking Google down for a couple hours on a Saturday night is obviously totally out of the question.

            But I’ve worked at B2B shops that insisted on zero-downtime code deployment and were willing to have the engineering team do the extra work to make it happen, even though none of the customers would have noticed or cared if we’d taken the system offline in the middle of the night. In one case, our system was a front end for an underlying third-party service that had regular scheduled downtime, but we had to stay online 24x7 even though our system was completely useless during the other system’s outages. “Our system never goes down!” was apparently a valuable enough talking point for our sales team that management was willing to pay the price, even though it added no actual value for customers.

      1. 4

        Still no mention of fixing the appallingly broken podcast syncing to Apple Watch. It’s intensely frustrating that this hasn’t improved for years. So many times I’ve gotten ready for a run just to find out that, yep, it never actually synced, and there’s no way to force it to.

        1. 3

          I bought an Apple Watch specifically so that I didn’t need to bring my phone when I went running.

          I feel like “sync these episodes of this podcast to my watch” would be like the first use case they addressed.

          Nope, as far as I can tell there is no way to sync specific episodes (I don’t want the last three episodes of a 10-year-long podcast I just started. I want them in order from the beginning!). There’s no way to force a sync.

          It’s such an enormous blind spot that I’m wondering if I’m missing something obvious. What are these people who listen to podcasts on their watches doing??

          1. 4

            I use Overcast on both phone and watch. I designate a playlist to sync to my watch. The first 20 items in that playlist sync to the watch. I add things I want to that playlist manually, but smart playlists work also.

            If I notice that something I want for my walk/run hasn’t synchronized, the trick to synchronize quickly is to turn off Bluetooth on your phone, make sure the watch and phone are both connected to the same wifi network, and launch the Overcast app on the watch. It synchronizes immediately.

            I’ve never gotten the Apple Podcasts app to work worth a damn with my watch. @cfl this might help you, too.

            1. 2

              Thanks. I tried Overcast and it didn’t work for some reason that escapes me right now. Probably PEBKAC (PEBWAC?). I’ll give it another shot.

              It still just amazes me that what I would imagine is a top-three use case of the Apple Watch is handled so poorly. Maybe I just don’t know how business works.

              1. 3

                FWIW, the Overcast developer (Marco Arment) regularly describes his travails in trying to get watch sync to work.

                But at least he’s actively trying, unlike Apple.

                1. 2

                  I believe he’s the one from whom I learned the (disable bluetooth/check wifi) trick to get sync moving.

                  1. 1

                    Cool, thanks for the heads-up. I was using Overcast, but it doesn’t stream, so I switched; this way, when I don’t get a sync I can at least listen on my watch, provided I don’t go out of cell reception (easy to do on trails, sadly). I’ll give Overcast another shot.

                  2. 1

                    Yeah, it sounds like the ball is squarely in Apple’s court on this one. Apparently there’s just no reliable way to get large amounts of data onto the Apple Watch from a phone.

                    I think his latest approach is to put the whole Overcast sync engine in the watch app, to download episodes over the internet and avoid syncing files from the phone altogether. I don’t know if that’s in the current watch app or a currently-unreleased beta version though, or if he abandoned that altogether.

          1. 4

            All these compiler errors make me worry that refactoring anything reasonably large will get brutal and demoralizing fast. Does anyone have any experience here?

            1. 20

              I’ve got lots of experience refactoring very large Rust codebases and I find it to be the opposite. I’m sure it helps that I’ve internalized a lot of the rules, so most of the errors are ones I’m expecting, but even earlier in my Rust use I never found it to be demoralizing. Really, I find it rather freeing. I don’t have to think about every last thing that a change might affect; I just make the change and use the list of errors as my todo list.

              1. 6

                That’s my experience as well. Sometimes it’s a bit inconvenient because you need to update everything to get it to compile (can’t just test an isolated part that you updated) but the confidence it gives me when refactoring that I updated everything is totally worth it.

              2. 9

                In my experience (more with OCaml, but they’re close), errors are helpful because they tell you which places in the code are affected by the refactoring. The ideal scenario is one where you make the initial change, then fix all the places the compiler errors at, and when you’re done it all works again. If you used the type system to its best, this scenario can actually happen in practice!
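
                As a contrived Rust sketch of that workflow (types invented for the example): add a variant to an enum, and every exhaustive `match` in the codebase becomes a compile error pointing at exactly the code the refactoring affects:

                ```rust
                enum Shape {
                    Circle { radius: f64 },
                    Square { side: f64 },
                    // Adding this variant breaks every exhaustive `match` on Shape
                    // until each one handles the new case.
                    Rect { w: f64, h: f64 },
                }

                fn area(s: &Shape) -> f64 {
                    match s {
                        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
                        Shape::Square { side } => side * side,
                        Shape::Rect { w, h } => w * h, // the compiler demanded this arm
                    }
                }
                ```

                Once `area` (and every other match site) compiles again, the refactoring is, in the happy case, done.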

                1. 4

                  I definitely agree. Lots of great compiler errors make refactoring a joy. I somewhat recently wanted to add line+col numbers to my error messages, so I simply made the breaking change of adding a location field to my error type, then fixed compile errors for about 6h. When the code compiled for the first time, it worked! (save a couple of off-by-one errors) It is so powerful to be able to trust the compiler to show you every place you need to change in a refactoring, while also catching a lot of the other mistakes you make as you quickly rip through the codebase. (Even if you get similar errors for missing arguments in C++, quickly jumping to random places in the codebase makes it easy to introduce lifetime issues, since you don’t always grasp the lifetime constraints of the surrounding code as quickly as you think you have.) It is definitely way nicer than dynamic languages, where you get hundreds of test failures and have to map those back to the actual location where the problem occurred.

                2. 7

                  In my experience refactoring is one of the strong points of Rust. I can “break” my code anywhere I need to (e.g. make a field optional, or remove a method, etc.), and then follow the errors until it works again. It sure beats finding “undefined is not a function” at run time instead.

                  The compiler takes care to avoid displaying multiple redundant errors that have the same root cause. The auto-fix suggestions are usually correct. Rust-analyzer’s refactoring actions are getting pretty good too.

                  1. 3

                    Yes. My favourite is when a widely-used struct suddenly gains a generic parameter and there are now a hundred function signatures and trait bounds to update, with the parameter possibly infecting any other structs that contained it. CLion has some useful refactoring tools, but they can only take you so far. I don’t mean to merely whinge - it’s all a trade-off. The requirement for functions to fully specify types permits some pretty magical type inference within function bodies. As sibling says, you just treat it as a todo list and you can be reasonably sure it will work when you’re done.
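
                    A tiny illustration of that “infection”, with hypothetical names: once `Config` gains a type parameter, every containing struct and function signature that mentions it has to carry the parameter too:

                    ```rust
                    // Before: `struct Config { backend: Backend }` with a concrete type.
                    // After it gains a generic parameter...
                    struct Config<S> {
                        backend: S,
                    }

                    // ...every struct that contains it must now carry the parameter...
                    struct App<S> {
                        config: Config<S>,
                    }

                    // ...and so must every function signature that mentions it.
                    fn load<S>(backend: S) -> Config<S> {
                        Config { backend }
                    }
                    ```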

                    1. 2

                      I think generics are kind of overused in rust tbh.

                    2. 2

                      I just pick one error at a time and fix them. Usually it’s best to comment out as much broken code as possible until you get a clean compile, then work through the errors one at a time.

                      It is a grind, but once you finish, the code usually works immediately with few if any problems.

                      1. 2

                        No, it makes refactors much better. Part of the reason my coworkers like Rust is because we can change our minds later.

                        All those compile errors would be runtime exceptions or race conditions or other issues that fly under the radar in a different language. You want the errors. Some experience is involved in learning how to grease the rails on a refactor and set the compiler up to create a checklist for you. My default strategy is striking the root by changing the core datatype or function and fixing all the code that broke as a result.

                        1. 1

                          As a counterpoint to what most people are saying here…

                          In theory the refactoring is “fine”. But the lack of a GC (meaning that object lifetimes are a core part of the code), combined with the relatively few tools you have to nicely monkeypatch things, means that “trying out” a code change is a lot more costly than, say, in Python (where you can throw a descriptor onto an object to try out some new functionality quickly, for example).

                          I think this is alleviated when you use traits well, but raw structs are a bit of a pain in the butt. I think this is mostly limited to modifying underlying structures though, and when refactoring functions etc, I’ve found it to be a breeze (and like people say, error messages make it easier to find the refactoring points).
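
                          A sketch of that trait-based escape hatch (all names made up): put the behavior you want to experiment with behind a trait, then swap implementations via a wrapper instead of monkeypatching the struct:

                          ```rust
                          trait Pricing {
                              fn price_cents(&self) -> u64;
                          }

                          struct Order {
                              subtotal_cents: u64,
                          }

                          // Current behavior.
                          impl Pricing for Order {
                              fn price_cents(&self) -> u64 {
                                  self.subtotal_cents
                              }
                          }

                          // To try out a change, wrap the struct instead of patching it.
                          struct DiscountedOrder(Order);

                          impl Pricing for DiscountedOrder {
                              fn price_cents(&self) -> u64 {
                                  self.0.subtotal_cents * 90 / 100 // experiment: 10% discount
                              }
                          }

                          // Call sites written against the trait accept either version.
                          fn total(p: &dyn Pricing) -> u64 {
                              p.price_cents()
                          }
                          ```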

                        1. 16

                          There’s a lot of good stuff in here that we all think everyone knows and we say to each other in the pub but we don’t really say out loud to the people that need to hear it.

                          The main one that comes to mind is about mobility. They said something like “if I get fired I’ll have a new job in two weeks.” The tech folks that don’t know this is true need to learn it. More importantly: the people who manage tech people need to learn it.

                          1. 22

                            if I get fired I’ll have a new job in two weeks.

                            This has never been true for me. Job hunting has always been a relentless slog.

                            1. 12

                              Imma guess it depends on where you are. Silicon Valley, Seattle, NYC, London, you can basically put your desk stuff in a box, and throw it out a window and have it land in another tech company’s lobby.

                              Other places, not so much.

                              1. 9

                                I agree living in a tech hub makes finding a job way easier, but I’ll jump in to temper the hyperbole just a bit. I know that I personally felt a lot of self-hatred when I tried to change jobs and it took months of applications and references and interviews to actually get one, even living in a tech hub.

                                1. 6

                                  Technology stacks don’t really matter because there are like 15 basic patterns of software engineering in my field that apply. I work in data so it’s not going to be the same as webdev or embedded.

                                  It depends on what you do. The author is a database specialist, so of course they’re going to claim that SQL is the ultimate language and that jobs are plentiful. I’m an SRE, so my career path requires me to pick specific backend-ready languages to learn. I have several great memories of failed interviews because I didn’t have precisely the right tools under my belt:

                                  • I worked on a Free Software library in Python along with other folks. They invited me to interview at their employer. Their employer offered me a position writing Lua for production backends. To this day, I still think that this was a bait-and-switch.
                                  • I interviewed at a local startup that was personally significant in my life. I had known that it wasn’t a good fit. Their champion had just quit and left behind a frontend written with the trendiest JS libraries, locking their main product into a rigid unmaintainable monolith. I didn’t know the exact combination of five libraries that they had used.
                                  • I interviewed at a multinational group for a position handling Kubernetes. I gathered that they had their own in-house monitoring instead of Prometheus, in-house authentication, etc. They also had a clothing line, and I’m still not sure whether I was turned down because I didn’t already know their in-house tools or because I wasn’t wearing their clothes.
                                  1. 3

                                    They also had a clothing line, and I’m still not sure whether I was turned down because I didn’t already know their in-house tools or because I wasn’t wearing their clothes.

                                    Seems like a blessing in disguise if it was the clothes.

                                  2. 3

                                    I have this problem and I’m in a tech hub. Most of my coworkers and technical friends are in different countries I can’t legally work in, so I rarely get interviews through networking. Interviewing is also not smooth sailing afterwards.

                                  3. 5

                                    This has never been true for me. Job hunting has always been a relentless slog.

                                    Same here, I also live in a city with many startups, but companies I actually want to work for, which do things I think are worthwhile, are very rare.

                                  4. 7

                                    There’s a lot of good stuff in here that we all think everyone knows and we say to each other in the pub but we don’t really say out loud to the people that need to hear it.

                                    Interesting that you say that in the context of modern IT. It has been so with many things since ancient times.

                                    https://en.wikipedia.org/wiki/In_vino_veritas

                                    Perhaps the traditional after-work Friday beer plays a more important role in one’s career than most people think. Wisdom is valuable and not available on a course you can sign up to.

                                    1. 1

                                      Wisdom is valuable and not available on a course you can sign up to.

                                      Which is ironic given wisdom is often what they’re being sold as providing.

                                    2. 5

                                      The main one that comes to mind is about mobility. They said something like “if I get fired I’ll have a new job in two weeks.” The tech folks that don’t know this is true need to learn it. More importantly: the people who manage tech people need to learn it.

                                      Retention is a big problem. It can take up to a year to ramp up even a senior person to be fully productive on a complicated legacy code base. Take care of your employees and make sure they are paid a fair wage and not under the pressure cooker of bad management who thinks yelling fixes problems.

                                      1. 2

                                        That’s probably why the OP says their salary went up 50% while their responsibilities were reduced by 50%. Onboarding.

                                    1. 1

                                      I might be mistaken, but I don’t understand how this tool or the practices in this article get you a truly repeatable build.

                                      To me, having a repeatable build means you produce the same binary artifacts when you build your checked-in source code, no matter when or on what machine you run the build. But using a tool like Docker seems to already make this impossible. If a Dockerfile allows you to RUN apt-get install foo, then running that command at time T1 will give you a different answer than running it at time T2.

                                      It seems to me like you can’t have real repeatability unless you have repeatable dependency semantics all the way down to the language level for each dependency manager you’re using. Tools like Blaze get around this by forcing you to use their own dependency management system that basically requires you to vendor everything, which guarantees repeatability. But I don’t see an analogous system in Earthly.
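
                                      For what it’s worth, you can narrow (though not eliminate) that drift in a Dockerfile by pinning; a hypothetical fragment, with a made-up package and digest:

                                      ```dockerfile
                                      # Pin the base image by digest rather than a moving tag.
                                      FROM debian@sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

                                      # Unpinned, `foo` resolves to whatever the mirror serves at build time.
                                      # Pinning the version narrows the drift, but the package can still
                                      # disappear from the archive, and its dependencies still float.
                                      RUN apt-get update && apt-get install -y foo=1.2.3-1
                                      ```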

                                      1. 2

                                        We debated a lot among ourselves about what the right term is for this. From community feedback, we learned that there is a term for 100% deterministic builds: reproducible builds. Earthly is not that. Bazel is. Earthly (and the practices that this guide talks about) gets you some consistency, but as you point out, it doesn’t get you all the way. We called this “repeatable builds” as an in-between term. The reasoning is that for many use cases it’s better to be compatible with most open-source tooling and not 100% deterministic, rather than go fully deterministic at a heavy cost to usability.

                                        1. 1

                                          No, you are not mistaken. Dockerfiles let you RUN anything, and so they are not reproducible by design. You could write Dockerfiles that are reproducible by avoiding certain features (this is what Bazel does to ensure reproducibility when building Docker images).

                                        1. 8

                                          Now, that’s pretty shitty.

                                          HOWEVER.

                                           I kinda sorta somewhat a little bit see the point in this kind of research/test. Maybe I’m wrong (although the fact that this even happened kinda suggests I’m not), but it seems like their premise was correct: the Linux kernel review process, and lots of similar open-source projects, ARE vulnerable to malicious agents introducing the so-called hypocrite patches.

                                           Now, the way those people tried to test it was absolutely unethical; I think there’s barely a discussion there. But could there be an ethical way of testing these processes? Maybe by seeking consent of some maintainers, kinda like a pentest? Does anyone see any other kind of way?

                                          1. 19

                                            One way would be to get people to consent that at some point there may be a “test patch” (or “hypocrite patch”) like this. This could possibly be months or even a year later. I suspect that many maintainers will agree to this, and when done right I suspect many will even consider it helpful and useful; no one wants to accidentally approve bad patches and we can all learn from this. Takes a bit of time and effort, but it’s really not that hard, and won’t influence the study results too much.

                                            In the end, it’s the difference between asking “can I borrow your bike for an hour?” vs. just taking it for an hour. I will almost certainly say yes if you just ask, but I will be quite cross with you if you would just take it.

                                            1. 4

                                              the Linux kernel review process, and lots of similar open-source projects, ARE vulnerable to malicious agents introducing the so-called hypocrite patches.

                                              All code is vulnerable to malicious agents, it’s a human process and humans make mistakes. They also generally assume good intent.

                                              You have to assume corporate/state agents are embedded at all major companies, tech companies included.

                                              1. 2

                                                Proprietary code usually has access control, so good intent is only assumed of the people who have access to the code, who have been hired and, consequently, been through some vetting process.

                                                Also, I feel like the world relies more on open source than on proprietary code? Like, there might be more proprietary code out there, but more things depend on single pieces of open-source code than on single pieces of proprietary code.

                                              2. 1

                                                Maybe I’m wrong

                                                No, you’re not wrong at all. The fact that many people have accidentally submitted patches that introduce vulnerabilities implies that it’s feasible to do so intentionally - and the results of this experiment show that it’s not only feasible, but has happened, and without detection, too.

                                                I don’t think that what the researchers did was unethical, though, at least in theory - they said that they would immediately notify the reviewers after their vulnerable patches were accepted, which, if done consistently, and the reviewers were paying attention, would mean that no vulnerability would actually make it into a stable tree.

                                                Obviously, they failed at that - but that’s a matter of implementation, not ethics, just as a large company being breached and leaking user data is a failure of implementation rather than ethics (they clearly don’t try to leak that data - it’s valuable to them).

                                              1. 50

                                                The paper has this to say (page 9):

                                                Regarding potential human research concerns. This experiment studies issues with the patching process instead of individual behaviors, and we do not collect any personal information. We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

                                                [..]

                                                Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

                                                 I’m not familiar with the generally accepted standards on these kinds of things, but this sounds rather iffy to me. I’m very far removed from academia, but I’ve participated in a few studies over the years, which were always just questionnaires or interviews, and even for those I had to sign a consent waiver. “It’s not human research because we don’t collect personal information” seems a bit strange.

                                                Especially since the wording “we will have to report this, AGAIN, to your university” implies that this isn’t the first time this has happened, and that the kernel folks have explicitly objected to being subject to this research before this patch.

                                                And trying to pass off these patches as being done in good faith with words like “slander” is an even worse look.

                                                1. 78

                                                  They are experimenting on humans, involving these people in their research without notice or consent. As someone who is familiar with the generally accepted standards on these kinds of things, it’s pretty clear-cut abuse.

                                                  1. 18

                                                    I would agree. Consent is absolutely essential but just one of many ethical concerns when doing research. I’ve seen simple usability studies be rejected due to lesser issues.

                                                     It’s pretty clear this is abuse: the kernel team and maintainers feel strongly enough to ban the whole institution.

                                                    1. 10

                                                      Yeah, agreed. My guess is they misrepresented the research to the IRB.

                                                      1. 3

                                                        They are experimenting on humans

                                                        This project claims to be targeted at the open-source review process, and seems to be as close to human experimentation as pentesting (which, when you do social engineering, also involves interacting with humans, often without their notice or consent) - which I’ve never heard anyone claim is “human experimentation”.

                                                        1. 19

                                                          A normal penetration testing gig is not academic research though. You need to separate between the two, and also hold one of them to a higher standard.

                                                          1. 0

                                                            A normal penetration testing gig is not academic research though. You need to separate between the two, and also hold one of them to a higher standard.

                                                            This statement is so vague as to be almost meaningless. In what relevant ways is a professional penetration testing contract (or, more relevantly, the associated process) different from this particular research project? Which of the two should be held to a higher standard? Why? What does “held to a higher standard” even mean?

                                                            Moreover, that claim doesn’t actually have anything to do with the comment I was replying to, which was claiming that this project was “experimenting on humans”. It doesn’t matter whether or not something is “research” or “industry” for the purposes of whether or not it’s “human experimentation” - either it is, or it isn’t.

                                                            1. 18

                                                              Resident pentester and ex-academia sysadmin checking in. I totally agree with @Foxboron and their statement is not vague nor meaningless. Generally in a penetration test I am following basic NIST 800-115 guidance for scoping and target selection, and then supplement it with contractual expectations for my clients. I can absolutely tell you that the methodologies that are used by academia should be held to a higher standard in pretty much every regard I could possibly come up with. A penetration test does not create a custom methodology attempting to deal with outputting scientific and repeatable data.

                                                              Let’s put it in real terms: I am hired to do a security assessment of a very fixed, highly focused set of targets explicitly defined in contract by my client, on an extremely fixed timeline (often very short… like 2 weeks maximum and a 5-day average). Guess what happens if social engineering is not in my contract? I don’t do it.

                                                              1. 1

                                                                Resident pentester and ex-academia sysadmin checking in.

                                                                Note: this is worded like an appeal to authority, although you probably don’t mean it that way, so I’m not going to treat it as one.

                                                                I totally agree with @Foxboron and their statement is not vague nor meaningless.

                                                                Those are two completely separate things, and neither is implied by the other.

                                                                their statement is not vague nor meaningless.

                                                                Not true - their statement contained none of the information you just provided, nor any other sort of concrete or actionable information - the statement “hold to a higher standard” is both vague and meaningless by itself…and it was by itself in that comment (or, obviously, there were other words - none of them relevant) - there was no other information.

                                                                the methodologies that are used by academia should be held to a higher standard

                                                                Now you’re mixing definitions of “higher standard” - GP and I were talking about human experimentation and ethics, while you seem to be discussing rigorousness and reproducibility of experiments (although it’s not clear, because “A penetration test does not create a custom methodology attempting do deal with outputting scientific and repeatable data” is slightly ambiguous).

                                                                None of the above is relevant to the question of “was this a human experiment” and the closely-related one “is penetration testing a human experiment”. Evidence suggests “no” given that the term does not appear in that document, nor have I heard of any pentest being reviewed by an ethics review board, nor have I heard any mention of “human experimenting” in the security community (including when gray-hat and black-hat hackers and associated social engineering e.g. Kevin Mitnick are mentioned), nor are other similar, closer-to-human experimentation (e.g. A/B testing, which is far closer to actually experimenting on people) processes considered to be such - up until this specific case.

                                                              2. 5

                                                                if you’re an employee in an industry, you’re either informed of penetration testing activity, or you’ve at the very least tacitly agreed to it along with many other things that exist in employee handbooks as a condition of your employment.

                                                                if a company did this to their employees without any warning, they’d be shitty too, but the possibility that this kind of underhanded behavior in research could taint the results and render the whole exercise unscientific is nonzero.

                                                                either way, the goals are different. research seeks to further the verifiability and credibility of information. industry seeks to maximize profit. their priorities are fundamentally different.

                                                                1. 1

                                                                  you’ve at the very least tacitly agreed to it along with many other things that exist in employee handbooks as a condition of your employment

                                                                  By this logic, you’ve also agreed to everything else in a massive, hundred-page long EULA that you click “I agree” on, as well as consent to be tracked by continuing to use a site that says that in a banner at the bottom, as well as consent to Google/companies using your data for whatever they want and/or selling it to whoever will buy.

                                                                  …and that’s ignoring whether or not companies that have pentesting done on them actually explicitly include that specific warning in your contract - “implicit” is not good enough, as then anyone can claim that, as a Linux kernel patch reviewer, you’re “implicitly agreeing that you may be exposed to the risk of social engineering for the purpose of getting bad code into the kernel”.

                                                                  the possibility that this kind of underhanded behavior in research could taint the results and render the whole exercise unscientific

                                                                  Like others, you’re mixing up the issue of whether the experiment was properly-designed with the issue of whether it was human experimentation. I’m not making any attempt to argue the former (because I know very little about how to do good science aside from “double-blind experiments yes, p-hacking no”), so I don’t know why you’re arguing against it in a reply to me.

                                                                  either way, the goals are different. research seeks to further the verifiability and credibility of information. industry seeks to maximize profit. their priorities are fundamentally different.

                                                                  I completely agree that the goals are different - but again, that’s irrelevant for determining whether or not something is “human experimentation”. Doesn’t matter what the motive is, experimenting on humans is experimenting on humans.

                                                            2. 18

                                                              This project claims to be targeted at the open-source review process, and seems to be as close to human experimentation as pentesting (which, when you do social engineering, also involves interacting with humans, often without their notice or consent) - which I’ve never heard anyone claim is “human experimentation”.

                                                              I had a former colleague who once bragged about getting someone fired at his previous job during a pentesting exercise. He basically walked over to this frustrated employee at a bar and bribed him with a ton of money and a job offer in return for plugging a USB key into the network. He then reported it to senior management and the employee was fired. While that is an effective demonstration of a vulnerability in their organization, what he did was unethical under many moral frameworks.

                                                              1. 2

                                                                First, the researchers didn’t engage in any behavior remotely like this.

                                                                Second, while indeed an example of pentesting, most pentesting is not like this.

                                                                Third, the fact that it was “unethical under many moral frameworks” is irrelevant to what I’m arguing, which is that the study was not “human experimentation”. You can steal money from someone, which is also “unethical under many moral frameworks”, and yet still not be doing “human experimentation”.

                                                              2. 3

                                                                If there is a pentest contract, then there is consent, because consent is one of the pillars of contract law.

                                                                1. 1

                                                                  That’s not an argument that pentesting is human experimentation in the first place.

                                                            3. 42

                                                              The statement from the UMinn IRB is in line with what I heard from the IRB at the University of Chicago after they experimented on me, who said:

                                                              I asked about their use of any interactions, or use of information about any individuals, and they indicated that they have not and do not use any of the data from such reporting exchanges other than tallying (just reports in aggregate of total right vs. number wrong for any answers received through the public reporting–they said that much of the time there is no response as it is a public reporting system with no expectation of response) as they are not interested in studying responses, they just want to see if their tool works and then also provide feedback that they hope is helpful to developers. We also discussed that they have some future studies planned to specifically study individuals themselves, rather than the factual workings of a tool, that have or will have formal review.

                                                              Because they claim they’re studying the tool, it’s OK to secretly experiment on random strangers without disclosure. Somehow I doubt they test new drugs by secretly dosing people and observing their reactions, but UChicago’s IRB was 100% OK with doing so to programmers. I don’t think these IRBs literally consider programmers sub-human, but it would be very inconvenient to accept that experimenting on strangers is inappropriate, so they only want to do so in places they’ve been forced to by historical abuse. I’d guess this will continue for years until some random person is very seriously harmed by being experimented on (loss of job/schooling, pushing someone unstable into self-harm, targeting someone famous outside of programming) and then over the next decade IRBs will start taking it seriously.

                                                              One other approach that occurs to me is that the experimenters and IRBs claim they’re not experimenting on their subjects. That’s obviously bullshit because the point of the experiment is to see how the people respond to the treatment, but if we accept the lie it leaves an open question: what is the role played by the unwitting subject? Our responses are tallied, quoted, and otherwise incorporated into the results in the papers. I’m not especially familiar with academic publishing norms, but perhaps this makes us unacknowledged co-authors. So maybe another route to stopping experimentation like this would be things like claiming copyright over the papers, asking journals for the papers to be retracted until we’re credited, or asking the universities to open academic misconduct investigations over the theft of our work. I really don’t have the spare attention for this, but if other subjects wanted to start the ball rolling I’d be happy to sign on.

                                                              1. 23

                                                                I can kind of see where they’re coming from. If I want to research whether car mechanics can reliably detect some fault, then sending a prepared car to 50 garages is probably okay, or at least a lot less iffy. This kind of (informal) research is actually fairly commonly done by consumer advocacy groups and the like. The difference is that the car mechanics will get paid for their work, whereas the Linux devs and you didn’t.

                                                                I’m gonna guess the IRBs probably aren’t too familiar with the dynamics here, although the researchers definitely were and should have known better.

                                                                1. 18

                                                                  Here it’s more like keying someone’s car to see how quick it takes them to get an insurance claim.

                                                                  1. 4

                                                                    Am I misreading? I thought the MR was a patch designed to fix a potential problem, and the issue was

                                                                    1. pushcx thought it wasn’t a good fix (making it a waste of time)
                                                                    2. they didn’t disclose that it was an auto-generated PR.

                                                                    Those are legitimate complaints, cf. https://blog.regehr.org/archives/2037, but from the analogies employed (drugs, dehumanization, car-keying), I have to double-check that I haven’t missed an aspect of the interaction that makes it worse than it seemed to me.

                                                                    1. 2

                                                                      We were talking about Linux devs/maintainers too, I commented on that part.

                                                                      1. 1

                                                                        Gotcha. I missed that “here” was meant to refer to the Linux case, not the Lobsters case from the thread.

                                                                  2. 1

                                                                    Though there they are paying the mechanic.

                                                                  3. 18

                                                                    IRB is a regulatory board that is there to make sure that researchers follow the [Common Rule](https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html).

                                                                    In general, any work that receives federal funding needs to comply with the federal guidelines for human subject research. All work involving human subjects (usually defined as research activities that involve interaction with humans) needs to be reviewed and approved by the institution’s IRB. These approvals fall along a continuum, from a full IRB review (which involves the researcher going to a committee and explaining their work, and usually includes continued annual reviews) to a declaration of the work being exempt from IRB supervision (usually this happens when the work meets one of the 7 exemptions listed in the federal guidelines). The whole process is a little bit more involved; see for example [all the charts](https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/index.html) to figure this out.

                                                                    These rules do not cover research that doesn’t involve humans, such as research on technology tools. I think that there is currently a grey area where a researcher can claim that they are studying a tool and not the people interacting with the tool. It’s a lame excuse that probably circumvents the spirit of the regulations and is probably unethical from a research standpoint. Data aggregation or data anonymization is usually a requirement for an exempt status, not grounds for a non-human research status.

                                                                    The response that you received from IRB is not surprising, as they probably shouldn’t have approved the study as non-human research but now they are just protecting the institution from further harm rather than protecting you as a human subject in the research (which, by the way, is not their goal at this point).

                                                                    One thing that sticks out to me about your experience is that you weren’t asked to give consent to participate in the research. That usually requires a full IRB review as informed consent is a requirement for (most) human subject research. Exempt research still needs informed consent unless it’s secondary data analysis of existing data (which your specific example doesn’t seem to be).

                                                                    One way to quickly fix it is to contact the grant officer that oversees the federal program that is funding the research. A nice email stating that you were coerced to participate in the research study by simply doing your work (i.e., review a patch submitted to a project that you lead) without being given the opportunity to provide prospective consent and without receiving compensation for your participation and that the research team/university is refusing to remove your data even after you contacted them because they claim that the research doesn’t involve human subjects can go a long way to force change and hit the researchers/university where they care the most.

                                                                    1. 7

                                                                      Thanks for explaining more of the context and norms, I appreciate the introduction. Do you know how to find the grant officer or funding program?

                                                                      1. 7

                                                                        It depends on how “stalky” you want to be.

                                                                        If NSF was the funder, they have a public search here: https://nsf.gov/awardsearch/

                                                                        Most PIs also add a line about grants received to their CVs. You should be able to match the grant title to the research project.

                                                                        If they have published a paper from that work, it should probably include an award number.

                                                                        Once you have the award number, you can search the funder website for it and you should find a page with the funding information that includes the program officer/manager contact information.

                                                                        1. 3

                                                                          If they published a paper about it they likely included the grant ID number in the acknowledgements.

                                                                          1. 1

                                                                            You might have more luck reaching out to the sponsored programs office at their university, as opposed to first trying to contact an NSF program officer.

                                                                        2. 4

                                                                          How about something like a Computer Science External Review Board? Open source projects could sign up, and include a disclaimer that their project and community ban all research that hasn’t been approved. The approval process could be as simple as a GitHub issue the researcher has to open, and anyone in the community could review it.

                                                                          It wouldn’t stop the really bad actors, but any IRB would have to explain why they allowed an experiment on subjects that explicitly refused consent.

                                                                          [Edit] I felt sufficiently motivated, so I made a quick repo for the project. Suggestions welcome.

                                                                          1. 7

                                                                            I’m in favor of building our own review boards. It seems like an important step in our profession taking its responsibility seriously.

                                                                            The single most important thing I’d say is, be sure to get the scope of the review right. I’ve looked into this before and one of the more important limitations on IRBs is that they aren’t allowed to consider the societal consequences of the research succeeding. They’re only allowed to consider harm to experimental subjects. My best guess is that it’s like that because that’s where activists in the 20th-century peace movement ran out of steam, but it’s a wild guess.

                                                                            1. 4

                                                                              At least in security, there are a lot of different Hacker Codes of Ethics floating around, which pen testers are generally expected to adhere to… I don’t think any of them cover this specific scenario though.

                                                                              1. 2

                                                                                any so-called “hacker code of ethics” in use by any for-profit entity places protection of that entity first and foremost before any other ethical consideration (including human rights) and would likely not apply in a research scenario.

                                                                          2. 23

                                                                            They are bending the rules for non-human research. One of the exceptions for non-human research is research on organizations, which my IRB defines as “Information gathering about organizations, including information about operations, budgets, etc. from organizational spokespersons or data sources. Does not include identifiable private information about individual members, employees, or staff of the organization.” Within this exception, you can talk with people about how the organization merges patches, but not how they personally do that (for example). All the questions need to be about the organization and not the individual as part of the organization.

                                                                            On the other hand, research involving human subjects is defined as any research activity that involves an “individual who is or becomes a participant in research, either:

                                                                            • As a recipient of a test article (drug, biologic, or device); or
                                                                            • As a control.”

                                                                            So, this is how I interpret what they did.

                                                                            The researchers submitted an IRB application saying that they just downloaded the kernel maintainer mailing lists and analyzed the review process. This doesn’t meet the requirements for IRB supervision because it’s either (1) secondary data analysis using publicly available data or (2) research on organizational practices of the OSS community after all identifiable information is removed.

                                                                            Once they started emailing the list with bogus patches (as the maintainers allege), the research involved human subjects, as these people received a test article (in the form of an email) and the researchers interacted with them during the review process. The maintainers processing the patch did not do so to provide information about their organization’s processes; they did so in their own personal capacity (in other words, the researchers didn’t ask them how the OSS community processes a patch, they asked them to process a patch themselves). The participants should have given consent to participate in the research, and the risks of participating should have been disclosed, especially given the fact that missing a security bug and agreeing to merge it could be detrimental to someone’s reputation and future employability (that is, this would qualify as more than minimal risk for participants, requiring a full IRB review of the research design and process) with minimal benefits to them personally or to the organization as a whole (as is apparent from the maintainers’ reaction to a new patch submission).

                                                                            One way to design this experiment ethically would have been to email the maintainers and invite them to participate in a “lab-based” patch review process where the research team would present them with “good” and “bad” patches and ask them whether they would have accepted them or not. This is after they were informed about the study and exercised their right to informed consent. I really don’t see how emailing random stuff out and seeing how people interact with it (with their full name attached to it and in full view of their peers and employers) can qualify as research with less than minimal risks and that doesn’t involve human subjects.

                                                                            The other thing that rubs me the wrong way is that they sought (and supposedly received) retroactive IRB approval for this work. That wouldn’t fly with my IRB, as my IRB person would definitely rip me a new one for seeking retroactive IRB approval for work that is already done, data that was already collected, and a paper that is already written and submitted to a conference.

                                                                            1. 6

                                                                              You make excellent points.

                                                                              1. IRB review has to happen before the study is started. For NIH, the grant application has to include the IRB approval, even before a single experiment is funded, let alone actually done.
                                                                              2. I can see the value of doing a test “in the field” so as to get the natural state of the system. In a lab setting where the participants know they are being tested, various things will happen to skew results. The volunteer reviewers might be systematically different from the actual population of reviewers, the volunteers may be much more alert during the experiment and so on.

                                                                              The issue with this study is that no serious thought was given to what the ethical ramifications of it are.

                                                                              If the owner of the pen-tested system has not asked to be pen tested, then this is basically a criminal act. Otherwise all bank robbers could use the “I was just testing the security system” defense.

                                                                              1. 8

                                                                                The same requirement for prior IRB approval applies to NSF grants (which the authors seem to have received). From what they write in the paper and my interpretation of the circumstances, they self-certified as conducting non-human research at the time of submitting the grant and only asked their IRB for confirmation after they wrote the paper.

                                                                                Totally agree with the importance of “field experiment” work and that, sometimes, it is not possible to get prospective consent to participate in the research activities. However, the guidelines are clear on what activities fall within research activities that are exempt from prior consent. The only one that I think is applicable to this case is exception 3(ii):

                                                                                (ii) For the purpose of this provision, benign behavioral interventions are brief in duration, harmless, painless, not physically invasive, not likely to have a significant adverse lasting impact on the subjects, and the investigator has no reason to think the subjects will find the interventions offensive or embarrassing. Provided all such criteria are met, examples of such benign behavioral interventions would include having the subjects play an online game, having them solve puzzles under various noise conditions, or having them decide how to allocate a nominal amount of received cash between themselves and someone else.

                                                                                These usually cover “simple” psychology experiments involving mini-games, or economics games with money.

                                                                                In the case of this kernel-patching experiment, it clearly doesn’t meet this requirement, as participants found the intervention offensive or embarrassing, to the point that they are banning the researchers’ institution from pushing patches to the kernel. Also, I am not sure that reviewing a patch is a “benign game,” as this is most likely the reviewers’ job. Plus, the patch review could have an adverse lasting impact on the subjects if they get asked to stop reviewing patches for missing the security risk (e.g., being deemed incompetent).

                                                                                Moreover, there is this follow up stipulation:

                                                                                (iii) If the research involves deceiving the subjects regarding the nature or purposes of the research, this exemption is not applicable unless the subject authorizes the deception through a prospective agreement to participate in research in circumstances in which the subject is informed that he or she will be unaware of or misled regarding the nature or purposes of the research.

                                                                                As their patch submission process was deceptive in nature, as they outline in the paper, exemption 3(ii) cannot apply to this work unless they notify maintainers that they will be participating in a deceptive research study about kernel patching.

                                                                                That leaves the authors to either pursue full IRB review for their work (as a full IRB review can approve a deceptive research project if it deems it appropriate and the risk/benefit balance is in favor to the participants) or to self-certify as non-human subjects research and fix any problems later. They decided to go with the latter.

                                                                            2. 35

                                                                              We believe that an effective and immediate action would be to update the code of conduct of OSS, such as adding a term like “by submitting the patch, I agree to not intend to introduce bugs.”

                                                                              I copied this from that paper. This is not research; anyone who writes a sentence like this with a straight face is a complete moron and is just mucking about. I hope all of this gets reported to their university.

                                                                              1. 18

                                                                                It’s not human research because we don’t collect personal information

                                                                                I yelled bullshit so loud at this sentence that it woke up the neighbors’ dog.

                                                                                1. 2

                                                                                  Yeah, that came from the “clarifications,” which are garbage top to bottom. They should have apologized, accepted the consequences, and left it at that. Here’s another thing they came up with in that PDF:

                                                                                  Suggestions to improving the patching process

                                                                                  In the paper, we provide our suggestions to improve the patching process.

                                                                                  • OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”

                                                                                  i.e. people should say they won’t do exactly what we did.

                                                                                  They acted in bad faith, skirted the IRB through incompetence (let’s assume incompetence and not malice), and then acted surprised.

                                                                                2. 14

                                                                                  Apparently they didn’t ask the IRB about the ethics of the research until the paper was already written: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.pdf

                                                                                  Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. We apologize for the raised concerns. This is an important lesson we learned—Do not trust ourselves on determining human research; always refer to IRB whenever a study might be involving any human subjects in any form. We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.

                                                                                  1. 14

                                                                                    I don’t approve of researchers YOLOing IRB protocols, but I also want this research done. I’m sure many people here are cynical/realistic enough that the results of this study aren’t surprising. “Of course you can get malicious code in the kernel. What sweet summer child thought otherwise?” But the industry as a whole proceeds largely as if that’s not the case (or you could say that most actors have no ability to do anything about the problem). Heighten the contradictions!

                                                                                    There are some scary things in that thread. It sounds as if some of the malicious patches reached stable, which suggests that the author mostly failed by not being conservative enough in what they sent. Or for instance:

                                                                                    Right, my guess is that many maintainers failed in the trap when they saw respectful address @umn.edu together with commit message saying about “new static analyzer tool”.

                                                                                    1. 17

                                                                                      I agree, while this is totally unethical, it’s very important to know how good the review processes are. If one curious grad student at one university is trying it, you know every government intelligence department is trying it.

                                                                                      1. 8

                                                                                        I entirely agree that we need research on this topic. There are better ways of doing it, though. And if there aren’t better ways of doing it, then it’s the researchers’ job to invent them.

                                                                                      2. 7

                                                                                        It sounds as if some of the malicious patches reached stable

                                                                                        Some patches from this University reached stable, but it’s not clear to me that those patches also introduced (intentional) vulnerabilities; the paper explicitly mentions the steps that they’re taking to ensure those patches don’t reach stable (I omitted that part, but it’s just before the part I cited).

                                                                                        All umn.edu patches are being reverted, but at this point it’s mostly a matter of “we don’t trust these patches and they will need additional review” rather than “they introduced security vulnerabilities.” A number of patches already have replies from maintainers indicating they’re genuine and should not be reverted.

                                                                                        1. 5

                                                                                          Yes, whether actual security holes reached stable or not is not completely clear to me (or apparently to maintainers!). I got that impression from the thread, but it’s a little hard to say.

                                                                                          Since the supposed mechanism for keeping them from reaching stable is conscious effort on the part of the researchers to mitigate them, I think the point may still stand.

                                                                                          1. 1

                                                                                            It’s also hard to figure out what the actual situation is, since there is no clear answer as to what the commits were, or where they are.

                                                                                        2. 4

                                                                                          The Linux review process is so slow that it’s really common for downstream folks to grab under-review patches and run with them. It’s therefore incredibly irresponsible to put patches that you know introduce security vulnerabilities into this form. Saying ‘oh, well, we were going to tell people before they were deployed’ is not an excuse and I’d expect it to be a pretty clear-cut violation of the Computer Misuse Act here and equivalent local laws elsewhere. That’s ignoring the fact that they were running experiments on people without their consent.

                                                                                          I’m pretty appalled that Oakland accepted the paper for publication. I’ve seen papers rejected from there before because they didn’t have appropriate ethics review oversight.

                                                                                      1. 2

                                                                                        A great overview.

                                                                                        I also wouldn’t pick a new system part that isn’t written in Rust (or a similar safe & efficient language).

                                                                                        Existing bits and pieces of my tech stack are grandfathered in (“Bestandsschutz”), but if you try to sell me a replacement, it had better not be written in C/C++ if you don’t want to get laughed out of the room.

                                                                                        1. 16

                                                                                          I also wouldn’t pick a new system part that isn’t written in Rust (or a similar safe & efficient language).

                                                                                          This reads as very cargo-cult.

                                                                                          1. 4

                                                                                            I was thinking the same thing. @soc: are you really worried about your shell segfaulting? Or being attacked somehow? What attack vector would that be? You could easily write a shell just as insecure in Rust, you’d just have different vectors.

                                                                                            1. 2

                                                                                              I’d compare C/C++¹ with greenhouse emissions:

                                                                                              Every line of C/C++ that doesn’t get written is another C/C++ piece that doesn’t need to be decommissioned later.


                                                                                              ¹ I use “C/C++” as a catch-all phrase for the shared belief of their users that they can write “safe” C/C++ – despite 50 years of evidence to the contrary.

                                                                                            2. 1

                                                                                              I really don’t care. :-)

                                                                                          1. 21

                                                                                            Agree that CPU and disk (and maybe RAM) haven’t improved enough to warrant a new laptop, but a 3200x1800 screen really is an amazing upgrade that I don’t want to downgrade from.

                                                                                            1. 6

                                                                                              I love my new 4k screen for text stuff. Sadly, on Linux it seems to be a pain in the ass to scale this appropriately and correctly, even more so with different resolutions between screens. So far Windows does this quite well.

                                                                                              1. 4

                                                                                                Wayland can handle it ok, but Xorg doesn’t (and never will) have support for per-display DPI scaling.

                                                                                                1. 3

                                                                                                  I don’t see myself being able to afford a 4k screen for a few years but if you just scale everything up, what’s the advantage?

                                                                                                  1. 4

                                                                                                    The text looks much crisper, so you can use smaller font sizes without straining your eyes if you want more screen real estate. Or you can just enjoy the increased readability.

                                                                                                    Note: YMMV. Some people love it and report significantly reduced eye strain and increased legibility, some people don’t really notice a difference.

                                                                                                    1. 2

                                                                                                      I use a much nicer font on my terminals now, which I find clearer to read. And I stare at terminals, dunno, 50% of my days.

                                                                                                      This is a Tuxedo laptop (I think it’s the same whitelabel that System76 sells), which didn’t feel expensive to me.

                                                                                                      1. 1

                                                                                                        Which Tuxedo laptop has 4k?

                                                                                                        1. 1

                                                                                                          I can’t find them anymore either. They used to have an option for the high-res display. I got this one a bit over a year ago:

                                                                                                          1 x TUXEDO InfinityBook Pro 13 v4  1.099,00 EUR
                                                                                                           - QHD+ IPS matt | silber/silber | Intel Core
                                                                                                          i7-8565U
                                                                                                          ...
                                                                                                          Summe: 1.099,00 EUR
                                                                                                          
                                                                                                          1. 1

                                                                                                            How was your driver experience? I’ve had to send mine back twice due to problems with the CPU/GPU hybrid stack. Though mine is now 3(?) years old.

                                                                                                            1. 2

                                                                                                              Drivers are fine, it all simply works. Battery could last longer.

                                                                                                          2. 1

                                                                                                            Yeah, OK. I just ordered a Pulse 15. I also wanted a 4k display but didn’t see it anywhere. Thanks!

                                                                                                          3. 1

                                                                                                            Hah, I’m also using a Tuxedo one, but the font is far too tiny on that screen to work with every day.

                                                                                                          4. 1

                                                                                                            Well, you have a much sharper font and can go nearer if you want (like with books). I get eye strain over time from how pixelated text can appear to me in the evening. Also, you can watch higher-res videos, and all in all it looks really crisp. See also your smartphone: mine already uses a 2k screen, and you can see how clean text and the like looks.

                                                                                                            You may want to just get a 2k screen (and maybe 144 Hz?) as that may already be enough for you. I just took the gamble and wanted to test it. Note that I probably got a model with an inferior backlight, so it’s not uniform around the edges when I’m less than 50 cm away. I also took the IPS panel for its superior viewing angles, as I also use it for watching movies. YMMV.

                                                                                                            My RTX 2070 GPU can’t play games like Destiny at 4k 60 FPS without 100% GPU usage, and the FPS drops the moment I do more than just walk around. So I’ll definitely have to buy a new one if I want to use that.

                                                                                                          5. 1

                                                                                                            I also just got a new 4k monitor, and that’s bothering me too. It’s only a matter of time before I fix the glitch with a second 4k monitor… maybe after Christmas.

                                                                                                            1. 2

                                                                                                              I ended up doing that. It sucks, but Linux is just plain bad at HiDPI in a way Windows/macOS is not. I found a mixed DPI environment to be essentially impossible.

                                                                                                          6. 2

                                                                                                            This is where I’m at too. I’m not sure I could go back to a 1024x768 screen, or even a 1440x900 one. I have a 1920x1200 XPS 13 that I really enjoy, which is hooked up to a 3440x1440 ultrawide.

                                                                                                            Might not need all the CPU power, but the screens are so so nice!

                                                                                                            1. 2

                                                                                                              And the speakers.

                                                                                                              I love my x230, but I just bought an M1 Macbook Air, and god damn, are those speakers loud and crisp!

                                                                                                              1. 1

                                                                                                                For me it’s also screen size and brightness that are important. I just can’t read the text on a small, dim screen.

                                                                                                                1. 1

                                                                                                                  Oh I’d love to have a 4k laptop. I’m currently using a 12” Xiaomi laptop from 2017 with 4GB of RAM and a 2k display. After adding a Samsung 960 evo NVMe and increasing Linux swappiness this is more than enough for my needs - but a 4k display would just be terrific!

                                                                                                                1. 6

                                                                                                                  Fabulous hacking. Perfect lobste.rs article, A++++ would upvote again.

                                                                                                                  1. 3

                                                                                                                    As a company, we are looking to move from being tightly coupled to Amazon AWS to a more agnostic approach where we can deploy our platform to different cloud providers (this is not a technical requirement at first, but needed by the business).

                                                                                                                    The obvious approach for achieving such an outcome is to go with Kubernetes; for the past two weeks, I have been diving into the documentation of various tools including Kubernetes (+ Kustomize), Helm, ArgoCD, ingresses (Istio, Nginx), etc. I have found the amount of information to be overwhelming. We are pretty happy with our current pipeline, which deploys to three separate environments (Staging/QA/Production) in Amazon ECS; the move to Kubernetes and GitOps already sounds like a big endeavour, with a lot of decisions to be made on tooling and pipelines, and that’s frankly frightening.

                                                                                                                    1. 1

                                                                                                                      My company uses Kubernetes and has a similar business requirement to be cloud agnostic. We use all of the hosted cluster offerings, but there is still a crazy amount of complexity going on. Despite a dedicated team and some deep experience, we run into issues fairly often, especially when trying to spin up new services. Once a service is set up it’s fairly robust, but getting new things deployed is a massive pain.

                                                                                                                      All of this is to say: unless you really need it, I would try to avoid the complexity. I primarily work on the backend, so I don’t interact with the devops work super often, but every time I do it’s just layers upon layers of abstractions. Even the experts at our company have trouble.

                                                                                                                      You can be cloud agnostic without k8s and co., and there are alternatives like Nomad that I have heard good things about. But yeah, there is a crazy amount to learn, and even once you have things running there is a crazy amount to debug. Troubleshooting also becomes 2x harder.

                                                                                                                      1. 1

                                                                                                                        Thanks for your comment. It confirms my concerns regarding the complexity of a solution like Kubernetes for a small-sized company. My main concern at this stage is how to get started, since even the most basic setup seems to involve many different tools, and supporting multiple environments like we do today involves adding even more complexity.

                                                                                                                        I have also heard very good feedback on Nomad, but we need to think of future recruitment. There is no doubt that Kubernetes has won the container orchestration war, and the pool of potential knowledgeable/expert candidates would be significantly larger with Kubernetes vs Nomad (even if the latter is more suitable for our needs).

                                                                                                                        1. 1

                                                                                                                          You’re right, there are numerous tools. I think for getting started you can forgo things like Helm and Flux, and stick with raw k8s manifests. Helm is a pretty atrocious templating solution in my opinion, and we have run into a number of bugs in what should be a really simple program, so I’d argue you don’t ever need it. Even with just k8s manifests there is a lot to learn, but at least it’s just one tool rather than 5 or 6.

                                                                                                                          You will have to do what is best for your situation, so definitely take everything with a grain of salt. One argument I would make regarding recruitment is that the popular technology usually has a bigger pool of talent, but the average quality of that talent is lower. Personally I think startups should use niche but powerful tech rather than popular tech, since the applicant pool will self-filter. Hiring takes a long time, and a bad hire is 2x worse than missing out on a good hire at a small size.

                                                                                                                          Just food for thought! Wish you all the best in your endeavors.

                                                                                                                          1. 1

                                                                                                                            I agree with your comment on niche technologies unlocking a pool of experts; the counterpoint to this argument is that these people may cost a lot of money to acquire and retain, since they will be in demand. Having a large pool of candidates means that, indeed, you will have more junior candidates, but it’s also an opportunity for people to grow in your company and for building a diverse team that can grow with your organisation.

                                                                                                                            That being said, I will definitely have a look and build a small PoC with it.

                                                                                                                      2. 1

                                                                                                                        Author here. I wrote this other piece about this specific choice/challenge: https://zwischenzugs.com/2019/03/25/aws-vs-k8s-is-the-new-windows-vs-linux/

                                                                                                                        1. 1

                                                                                                                          Interesting read, thank you very much. The infographic at the end describes my feeling as a newcomer in the Kubernetes world; it feels like best practices are not yet fully established, so the ecosystem is super diverse and full of products of varying quality.

                                                                                                                          PS: I am one of those people who were playing with Linux in its early days! I remember (not very fondly) the kernel panics that followed plugging in a USB device (especially DSL modems; Linux loved those!).

                                                                                                                        2. 1

                                                                                                                          Disclaimer: I work for Google on what I would call a k8s “adjacent” product where we are heavily invested in the k8s ecosystem, but not part of it.

                                                                                                                          I think the k8s ecosystem is pretty Wild West, as there is so much out there and it’s impossible to figure out which tool is best in class. I think this is a common situation for “new” technologies. k8s is basically a cloud low-level operating system at this point, and there need to be layers on top. Some good abstractions for some use cases do exist now, e.g. GCP Cloud Run, but if you’re determined on being cloud agnostic, it’s going to be a hard road until each cloud has comparable products. I don’t spend time in AWS/Azure land as I have my own job to do, but I do not think they have a Cloud Run-esque solution yet.

                                                                                                                          Do you have to be cloud agnostic? If it’s for super-high 99.999% reliability then yeah, that’s your only realistic option. If it’s for having an escape ramp in case you want to switch to a different provider for some reason, then I think you could get away with just building your Docker images and having scaffolding around the single provider you’re invested in. Retooling for a new provider wouldn’t be simple, but it would be a matter of months, not years, in my estimation.

                                                                                                                          But I’ve never done this so don’t take my word for it.

                                                                                                                        1. 1

                                                                                                                          Bullet Journal was a life changer for me. I use the official Bullet Journal journal. It costs a bit more but it’s got reminders of how to use the system which is helpful. I tried a million different organizational methods. BuJo was the only one that stuck.

                                                                                                                          I use a Pilot Vanishing Point, which is like having a ballpoint pen with a fountain pen nib. I love it. It’s not a pen to baby: it gets scratched up and stuff. Mine certainly has a “patina”. It’s a workhorse, not an artifact.

                                                                                                                          I use Google Calendar, and Gmail, but things I need to do go in the journal. I also plan ahead and write down the meetings I have the next day in the journal anyway, so I feel a bit more prepared and less surprised by “oh I have that today?”

                                                                                                                          1. 4

                                                                                                                            I just switch depending on how much natural light I have. Lots of natural light: Solarized Light. Not a lot: Solarized Dark.

                                                                                                                            1. 10

                                                                                                                              I am 100% over versioning. I have never seen an implementation that doesn’t suck. It’s miserable. Something is fundamentally wrong with the whole model; whatever methodology you use for tagging won’t fix that.

                                                                                                                              There could be different ways:

                                                                                                                              1. Google runs decently well internally by building everything from HEAD. It’s not easy, and it requires a monorepo, but it does work. Could this work in the real world? Probably not. But what if you say “GitHub is a monorepo”? What if, when a dependency author uploads a breaking change, GitHub can say who they broke and how it broke, prompt the dependency author to document the remediation for that pattern of breakage, and just let people be broken until they upgrade? Maybe this is pushed to per-language registries like crates.io or the Go proxy.
                                                                                                                              2. Unison tries to sidestep versioning entirely at the language level.
                                                                                                                              3. Stop trying to meld dependencies together across packages. Every package and its dependencies are treated separately, and binaries just include every version that is depended on (see the sketch after this list). Hard drive size is a trivial concern, and as for binary size, when you’re building binaries into Docker containers the image size almost certainly dominates anyway.
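                                                                                                                              For what it’s worth, Go modules already do something like option 3 at the major-version level: a breaking release gets a new import path, and both majors can be linked into one binary. A minimal sketch, assuming a hypothetical module example.com/foo that published a breaking v2 (the paths and the Greet function are made up):

                                                                                                                                  package main

                                                                                                                                  import (
                                                                                                                                      "fmt"

                                                                                                                                      foo "example.com/foo"      // v0/v1: bare module path (hypothetical)
                                                                                                                                      foov2 "example.com/foo/v2" // v2+: breaking majors get a /vN suffix
                                                                                                                                  )

                                                                                                                                  func main() {
                                                                                                                                      // Both major versions compile into the same binary, so a dependency
                                                                                                                                      // still on v1 never has to be "melded" with one that moved to v2.
                                                                                                                                      fmt.Println(foo.Greet(), foov2.Greet())
                                                                                                                                  }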
                                                                                                                              1. 2

                                                                                                                                I can’t wait for some of the ideas from Unison to permeate into more mainstream ecosystems. Lots of great ideas (globally accessible CAS, AST storage etc.) stuck behind a Haskell-like syntax.

                                                                                                                                1. 1

                                                                                                                                  CAS

                                                                                                                                  Compare-And-Swap? Content-Aware Scaling? Close Air Support? Computer Algebra System? Content-Addressable Storage?

                                                                                                                                  1. 1

                                                                                                                                    Content-Addressable Storage. Check it out! https://www.unisonweb.org/

                                                                                                                                2. 2

                                                                                                                                  I sort of agree because I don’t think there’s a perfect versioning system, but I think semver2 may be as good as it gets.

                                                                                                                                I like it because it’s more functional than marketing-driven versioning (“versions don’t matter, we’ll just call it version 2.0 to sell more”), while all the alternatives sink too much time into perfecting versioning schemes for diminishing returns.

                                                                                                                                I use it just so we have something; it saves time deciding what to do, and it helps denote drafts or breaking changes. I use it even for stupid stuff like the “enterprise policy on bathroom breaks.” If it’s version 0.4.77 then it’s still in progress and could change any time. If it’s 1.1.16 then it’s probably approved by someone. If I’m on 1.1.16 and then see version 2.0, it probably means I should read it, because now I can only go to the bathroom on even hours or something else that disrupts or may disrupt me.
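                                                                                                                                If it helps, that “what does this bump mean for me” reading is mechanical enough to sketch in a few lines of Go. A toy version (hand-rolled: no pre-release tags, and it ignores SemVer’s “0.x means anything goes” rule):

                                                                                                                                    package main

                                                                                                                                    import (
                                                                                                                                        "fmt"
                                                                                                                                        "strconv"
                                                                                                                                        "strings"
                                                                                                                                    )

                                                                                                                                    // parse splits "MAJOR.MINOR.PATCH" into its three numeric parts.
                                                                                                                                    func parse(v string) [3]int {
                                                                                                                                        var out [3]int
                                                                                                                                        for i, p := range strings.SplitN(v, ".", 3) {
                                                                                                                                            n, _ := strconv.Atoi(p) // ignoring parse errors for brevity
                                                                                                                                            out[i] = n
                                                                                                                                        }
                                                                                                                                        return out
                                                                                                                                    }

                                                                                                                                    // bump says what a version change implies for a consumer.
                                                                                                                                    func bump(prev, next string) string {
                                                                                                                                        p, n := parse(prev), parse(next)
                                                                                                                                        switch {
                                                                                                                                        case n[0] != p[0]:
                                                                                                                                            return "major: something breaking changed, go read it"
                                                                                                                                        case n[1] != p[1]:
                                                                                                                                            return "minor: new stuff, existing behavior intact"
                                                                                                                                        case n[2] != p[2]:
                                                                                                                                            return "patch: fixes only"
                                                                                                                                        default:
                                                                                                                                            return "no change"
                                                                                                                                        }
                                                                                                                                    }

                                                                                                                                    func main() {
                                                                                                                                        fmt.Println(bump("0.4.77", "0.4.78")) // still a draft, changes freely
                                                                                                                                        fmt.Println(bump("1.1.16", "2.0.0"))  // better reread the bathroom policy
                                                                                                                                    }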

                                                                                                                                1. 3

                                                                                                                                  I had never heard of DOOM Emacs, but as a former Emacs user (but Vim since 2000), I would be quite curious to give it a shot. Also this article has quite a few good Vim plugins that I had not tried yet.

                                                                                                                                  Now the million dollar question, when are we going to see structured editors appear and be used for real?

                                                                                                                                  1. 3

                                                                                                                                    Doom Emacs is fine. I enjoyed using it as an out of the box experience.

                                                                                                                                    I eventually went back to Vim again with the 8-ish plugins I find to be indispensable. I know I sound like an old beardy, but there really is value in just knowing what is going on in the editing environment at all times, rather than dealing with oddities where you don’t know what is going on. The author of Doom Emacs is great and responsive on Discord, but it’s just kind of a bummer that you sometimes need to resort to that. That’s part and parcel of the out-of-the-box experience in non-paid editors, as far as I’ve experienced.

                                                                                                                                    I do think that LSPs and coc.nvim have been a huge productivity boost. You’re getting very close to VS Code levels of editor support but with full keyboard navigation.

                                                                                                                                  1. 2

                                                                                                                                    A neat analogy, but it seems the author is unaware of the Altor SAF lock.

                                                                                                                                    1. 4

                                                                                                                                      I have previously found that if thieves can’t get through the lock, they’ll just take the parts that aren’t locked or just damage your bike out of malice.

                                                                                                                                      1. 3

                                                                                                                                        Indeed. I’m simply pointing out that this arms race is always evolving.

                                                                                                                                      2. 3

                                                                                                                                        A massive 6.2kg $300 lock is probably a poor trade-off for many due to size, weight, and price.

                                                                                                                                        1. 2

                                                                                                                                          For most, but probably not for the owner of that $7k bike in the top comment on this post.

                                                                                                                                          1. 2

                                                                                                                                            Maybe; but adding 7kg to your touring bike is not insignificant, never mind the huge size of the thing. I certainly wouldn’t look forward to hauling that around (especially not when using it as a touring bike) and would probably prefer either getting insurance or accepting the increased risk of theft.

                                                                                                                                            For an expensive racing bike it’s even worse, as they usually weigh less than 10kg (even my €400 fixie was ~11kg) so you’re basically doubling your weight.

                                                                                                                                            It all depends on your personal situation, chance of theft (i.e. where you live), what you do with it, and so forth. Generally speaking, I find that the quality of my life is better if I’m not so paranoid about this kind of stuff and just accept that I lose a bike every few years. It sucks, but one bad event every few years is better than spending time/brain cycles on this kind of stuff every day. YMMV of course.

                                                                                                                                        2. 3

                                                                                                                                          Altor SAF

                                                                                                                                          https://www.youtube.com/watch?v=1HvMPh6JBBI

                                                                                                                                          That thing is comically large! But it looks like it does resist the typical angle grinder.

                                                                                                                                        1. 5

                                                                                                                                          Ever since ponying up for PragmataPro, I find it very difficult to switch back to wider fonts. The extra information I get per line without sacrificing readability is wonderful.

                                                                                                                                          1. 3

                                                                                                                                            I’m the opposite, I recently switched to a wider font (IBM Plex Mono) and I noticed I can reduce the font size by a couple of points (11 to 9), increasing the number of lines of code I can display compared to Iosevka. I’m still able to display 2 buffers side by side.

                                                                                                                                            1. 1

                                                                                                                                              +1 for wider fonts, Source Code Pro is king here.

                                                                                                                                            2. 1

                                                                                                                                              I had the same experience, though I came from Iosevka, which is a nice way to see whether you enjoy these kinds of fonts. PragmataPro is just very slightly nicer, but the incredible configurability and free license of Iosevka are definitely cool.

                                                                                                                                            1. 13

                                                                                                                                              As someone who helped design the Cloud SQL VM environment, I can confirm the OP’s impression that it is utterly boring :) I left that team some time ago, basically after we launched this architecture.

                                                                                                                                              When we wrote it, we really were just using VMs as originally provisioned by Google Compute Engine, shoving in MySQL as one Docker container and an API communication layer as another. We deliberately wanted it to be as boring as possible, partly for security reasons. The blast radius is pretty small when all you can do is compromise your own single-tenant VM.

                                                                                                                                              Obviously nowadays instead of single-tenant VMs you’d do this with Kubernetes. We were about two years too early.

                                                                                                                                              1. 18

                                                                                                                                                Nice setup.

                                                                                                                                                1. I really advise against side-by-side monitors. The problem is, you’re going to have your main app open in one monitor at a time, so you’re going to be turning your neck for hours at a time. I suggest either stacking them or going with a single large monitor. I got a Dell 43” 4k monitor for $700-ish. I previously had a single 32” ultrawide, which, as the author mentioned, is too short. Then a friend sold me his and I stacked them. That was OK but made my standing desk hard to use in standing mode.

                                                                                                                                                I like a single monitor with a window-management app. I’d love this setup even more if I could get it in a curved version with a higher resolution for sharper text, but otherwise it’s amazing.

                                                                                                                                                2. I’m always amazed that people are so hesitant to spend money on their work tools. They are tax-deductible, but more importantly, they are an investment in your long-term health and happiness. It’s one of the biggest advantages of working from home: you don’t have to use the cheap crap your employer provides.

                                                                                                                                                It’s doubly amazing because many in this situation are making $100k (possibly multiples of that). Also, many people have some crazy expensive bike, car, boat, guitar, or home theater that’s only used a few hours a week.

                                                                                                                                                I know it’s tempting to cheap out, but 30-, 40-, and 50-year-old you will thank you.

                                                                                                                                                That’s my PSA of the day.

                                                                                                                                                1. 3

                                                                                                                                                  Shouldn’t have read this. The night just got expensive.

                                                                                                                                                  1. 3

                                                                                                                                                    turning your neck for hours at a time. I suggest either stacking them or going with a single large monitor.

                                                                                                                                                    So you should be looking up for hours at a time?

                                                                                                                                                    1. 1

                                                                                                                                                      The distance between the centers of two widescreen monitors is much smaller when stacked than when side by side. And of course that’s not true of portrait or square monitors. Not ALL stacked monitors are ergonomically arranged, but you can reduce neck movement by stacking.
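                                                                                                                                                      A rough back-of-the-envelope check, assuming two bezel-less 27-inch 16:9 panels (the panel size is my assumption, not from the thread):

                                                                                                                                                          w = 27 \cdot \frac{16}{\sqrt{16^2 + 9^2}} \approx 23.5\,\text{in}, \qquad h = 27 \cdot \frac{9}{\sqrt{16^2 + 9^2}} \approx 13.2\,\text{in}

                                                                                                                                                      So the center-to-center distance is about 23.5 in side by side but only about 13.2 in stacked, i.e. stacking roughly halves how far your gaze has to travel.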

                                                                                                                                                      1. 4

                                                                                                                                                        I don’t know if it’s just about distance. I find the vertical angle matters much more than the horizontal angle. For example, I find laptops difficult to use for long periods because my neck gets sore looking down all the time, instead of looking straight ahead. However, I don’t have any problems with horizontal monitors.

                                                                                                                                                    2. 1

                                                                                                                                                      That’s a good point about the dual monitors. I’m considering having one facing flat forward, and another angled off to the side. I’d probably have to sit off to one side of my desk but that’s not too concerning.

                                                                                                                                                      I get your point about spending money on work tools, which might fall in the same category as what people say about beds & shoes. I do worry that this attitude, if adopted too enthusiastically, can dull judgement about whether a given tool is really necessary - for example, a gas-spring monitor stand instead of a basic one, or an Ergodox instead of a Goldtouch keyboard (although I admit being tempted by the Kinesis Advantage2 after seeing all the people who swear by it). With the way our society is set up, it is often very difficult to determine (even within our own heads) whether something expensive is a reasonable purchase that supports good craftsmanship, or just a flex.

                                                                                                                                                      1. 6

                                                                                                                                                        Consider rotating one of the screens. I sit straight down the middle for the landscape screen, then have the portrait screen to my right.

                                                                                                                                                        I’m pretty sensitive to shitty ergonomic setups, and this causes me no problems at all.

                                                                                                                                                        1. 2

                                                                                                                                                          This is my setup too. Looks dorky, works great.

                                                                                                                                                          1. 2

                                                                                                                                                            I do this too. The only problem is that 16:9 screens really don’t like being in portrait. I have a 24” 16:9 screen to the left of the primary screen, used mostly for web browsing, and it’s really common for websites to grow a combination of horizontal scroll bars and buttons whose text extends outside their bounds.

                                                                                                                                                            1. 1

                                                                                                                                                              Hah, yeah - I got the last 16:10 that Dell sold a few years ago and just picked up a partner for it, and having them stacked one above the other is great, but I would be loath to throw away 10% of that space.

                                                                                                                                                            2. 1

                                                                                                                                                              That’s a neat idea, I think I’ll try that!

                                                                                                                                                            3. 2

                                                                                                                                                              All decisions come with error bars. Fall on one side, you have a flex; fall on the other, you are performing worse at work than you could be.

                                                                                                                                                              I know which side I’m happier to land on.

                                                                                                                                                              1. 2

                                                                                                                                                                The main point is this: every single person I’ve discussed buying quality tools for work with who objected to spending the money also had some expensive hobby they were willing to splurge on. (I’m sure not everyone is like this; it just seemed the people with the strongest objections had other money sinks.) It’s just a matter of logical consistency. They might have $25k of bike equipment in the garage but get uppity about spending $500 on good equipment. That’s why this is one of my hot-button issues. A course of physical therapy is going to cost more than decent equipment.

                                                                                                                                                                My old equipment always finds its way to friends and family and tends to get years of useful life beyond me.

                                                                                                                                                                1. 2

                                                                                                                                                                  There’s nothing logically inconsistent about spending money in some places and saving it in others. “I spent a bunch of money on thing X, so I should also spend a lot of money on thing Y” sounds more like sales tactic psychology than logical reasoning. You can easily get good enough ergonomic equipment to keep the PT away without spending much money. A $20 used Microsoft Natural Ergonomic 4000 keyboard, a $25 Anker vertical mouse… even monitor stands can be replaced with a stack of old technical manuals. A good chair is really the only thing I’d say you need, and you can get a good-enough used Costco model for like $60.

                                                                                                                                                                  1. 1

                                                                                                                                                                    a stack of old technical manuals

                                                                                                                                                                    To be fair, these are harder and harder to find. Same goes for phone books…

                                                                                                                                                                    1. 1

                                                                                                                                                                      It is if a) this is the way you make your living and b) you are oddly cheap in this area but spend big money on things you use far less. That’s the point I’m trying to make, and I still find the behavior quite baffling.

                                                                                                                                                                      Invest in yourself and your health.

                                                                                                                                                                      I’m not trying to sell you a standing desk.

                                                                                                                                                                  2. 1

                                                                                                                                                                    That’s a good point about the dual monitors. I’m considering having one facing flat forward, and another angled off to the side. I’d probably have to sit off to one side of my desk but that’s not too concerning.

                                                                                                                                                                    At work, with a two-monitor set-up, I tended to have my main one flat in front of me and the other angled off to the left. Not being in the centre of the desk let me keep a notebook and pen to the left of the mouse for quick notes, and gave me a space away from the main screen for thinking, with reasonable room to use the notebook.

                                                                                                                                                                  3. 1

                                                                                                                                                                    Could not agree more with this! Many of my colleagues think I’m crazy for sticking to one monitor, but I find it not only saves my neck but also helps keep focus.

                                                                                                                                                                  1. 19

                                                                                                                                                                    I’m probably not the only one with the opinion that rewrites in Rust may generally be a good idea, but that Rust’s compile times are unacceptable. I know there are efforts to improve that, but Rust’s compile times are so abysmally slow that it really affects me as a Gentoo user. Another point is that Rust is not standardized and is a one-implementation language, which is the same thing that discourages me from looking deeper into Haskell and others. I’m not saying that I generally reject single-implementation languages, as this would disregard any new language, but a language implementation should be possible without too much work (say within two man-months). Neither Haskell nor Rust satisfies this condition, and contraptions like Cargo make it even worse, because implementing Rust would also mean more or less implementing the entire Cargo ecosystem.

                                                                                                                                                                    Contrary to that, C compiles really fast, is an industry standard, and has dozens of implementations. Another thing we should note is that the original C codebase is a mature one. While Rust’s great ownership and type system may save you from general memory-handling and type errors, it won’t save you from intrinsic logic errors. However, I don’t weigh that point very heavily, because it is an argument that could be made against any new codebase.

                                                                                                                                                                    What really matters to me is the increase in the diversity of git implementations, which is a really good thing.

                                                                                                                                                                    1. 22

                                                                                                                                                                      but a language implementation should be possible without too much work (say within two man-months)

                                                                                                                                                                      Why is that a requirement? I don’t understand your position: should we not have complex, interesting, or experimental languages only because a person couldn’t write an implementation by himself in two months? Should we discard all the advances Rust and Haskell provide because they require a complex compiler?

                                                                                                                                                                      1. 5

                                                                                                                                                                        I’m not saying that we should discard those advances, because there is no mutual exclusion. I’m pretty certain one could work up a pure functional programming language based on linear type theory that provides the same benefits and is possible to implement in a reasonable amount of time.

                                                                                                                                                                        A good comparison is the web: 10-15 years ago, it was possible for one person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all the new web standards, and you need an army of developers to keep up, which is why more and more groups are giving up on the endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on WebKit, which turns the web into a one-implementation domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                                                                                                                                                        The thing is the following: when you choose a language as a developer, you “invest” in its ecosystem, and if that ecosystem for some reason breaks apart, dies, or changes in a direction you don’t agree with, you are forced to put additional work into it.

                                                                                                                                                                        This additional work can be a lot if you’re talking about proprietary ecosystems, meaning more or less that you are forced to rewrite your programs. Rust satisfies the necessary condition of a qualified ecosystem, because it’s open source, but open-source systems can also shut you out when the ABI/API isn’t stable, and the danger is especially present with the “loose” crate system: it may provide high flexibility, but it also means a lot of technical debt when you have to continually push your code to the newest specs to be able to use your dependencies. However, this is again a question of the ecosystem, and I’d prefer to refer only to the Rust compiler here.

                                                                                                                                                                        Anyway, I think the Rust community needs to address this and work up a standard for the Rust language. For my part, I won’t be investing my time in this ecosystem until this is addressed in some way. Anything else is just building a castle on sand.

                                                                                                                                                                        1. 5

                                                                                                                                                                          A good comparison is the web: 10-15 years ago, it was possible for one person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all the new web standards, and you need an army of developers to keep up, which is why more and more groups are giving up on the endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on WebKit, which turns the web into a one-implementation domain. I’m glad Mozilla is holding up there, but who knows for how long?

                                                                                                                                                                          There is a good argument by Drew DeVault that it is impossible to reimplement a web browser for the modern web.

                                                                                                                                                                          1. 4

                                                                                                                                                                            I know Blink was forked from WebKit, but all these years later, don’t you think it’s a little reductive to treat them as the same? If I’m not mistaken, Blink sends nothing upstream to WebKit, and by now the codebases are fairly divergent.

                                                                                                                                                                        2. 8

                                                                                                                                                                          I feel ya - on OpenBSD, compile times are an order of magnitude slower than on Linux! For example, ncspot takes ~2 minutes to build on Linux and 37 minutes on OpenBSD (with most features disabled)!!

                                                                                                                                                                          1. 5

                                                                                                                                                                            37 minutes on OpenBSD

                                                                                                                                                                            For reals? This is terrifying.

                                                                                                                                                                            1. 1

                                                                                                                                                                              Excuse my ignorance – mind pointing me to some kind of article/document explaining why this is the case?

                                                                                                                                                                              1. 7

                                                                                                                                                                                There isn’t one. People (semarie@, who maintains the Rust port on OpenBSD, being one of them) have looked into it with things like the RUSTC_BOOTSTRAP=1 and RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' environment variables. These point to most of the time being spent in LLVM, but no one has fully tracked down the issue AFAIK.
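
                                                                                                                                                                                If anyone wants to try this themselves, a typical invocation (my example, not something from the port maintainer) would be RUSTC_BOOTSTRAP=1 RUSTFLAGS='-Ztime-passes' cargo build, where RUSTC_BOOTSTRAP=1 lets a stable rustc accept the nightly-only -Z flags and -Ztime-passes prints how long each compiler pass takes.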

                                                                                                                                                                            2. 6

                                                                                                                                                                              Another point is that Rust is not standardized and is a one-implementation language

                                                                                                                                                                              This is something that gives me pause when considering Rust. If the core Rust team does something that makes it impossible for me to continue using Rust (e.g. changes licenses to something incompatible with what I’m using it for), I don’t have anywhere to go and at best am stuck on an older version.

                                                                                                                                                                              One of the solutions to the above problem is a fork, but without a standard, the fork and the original can diverge, no one is “right”, and I lose the ability to write code portable between the two versions.

                                                                                                                                                                              Obviously, this isn’t a problem unique to Rust - most languages aren’t standardized, and having a plethora of implementations can cause its own problems too - but the fact that there are large parts of Rust that are undefined and unstandardized (the ABI, the aliasing rules, etc.) gives me pause about using it for mission-critical stuff.

                                                                                                                                                                              (I’m still learning Rust and I’m planning on using it for my next big thing if I get good enough at it in time, though given the time constraints it’s looking like I’ll be using C because my Rust won’t be good enough yet.)

                                                                                                                                                                              1. 2

                                                                                                                                                                                The fact that the trademark is still owned by the Mozilla Foundation and not the to-be-created Rust Foundation is also likely chilling any attempts at independent reimplementation.

                                                                                                                                                                              2. 1

                                                                                                                                                                                As much as I understand your point about the slowness of Rust compile times, I think it is only a matter of time before they shrink.

                                                                                                                                                                                On the standardization point, Haskell has a standard: Haskell 2010. GHC is the only implementation now, but it has a lot of compiler extensions that are not in the standard. The new Haskell 2020 standard is on its way. Implementing standard Haskell (without all the GHC add-ons) is doable, but the language will be much simpler and still have flaws.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  The thing is, as you said: you can’t compile a lot of real-world code by implementing Haskell 2010 (or 2020, for that matter) if you don’t also ship the “proprietary” extensions (GHC-only ones like GADTs or TypeFamilies).

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    It is the same when you abuse GCC or Clang extensions in your codebase. The main difference with Haskell is that you (almost) only have GHC available, and the community has put its efforts into it and created an ecosystem of extensions.

                                                                                                                                                                                    As for C, you could write standard-compliant code that a hypothetical other compiler could compile. I am pretty sure that if we had had only one main C compiler for as long as Haskell has had GHC, the situation would have been similar: lots of language extensions outside the standard, existing solely in that compiler.

                                                                                                                                                                                    1. 3

                                                                                                                                                                                      But this is exactly the case: there’s lots and lots of code out there that uses GNU extensions (from gcc). For a very long time, gcc was the only real compiler around, and it led to this problem. Some extensions are so persistent that clang had no other choice but to implement them.
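
                                                                                                                                                                                      To make that concrete, here is a small sketch of my own (not from the comment above) showing two long-standing GNU C extensions, statement expressions and typeof, that clang also accepts for gcc compatibility. Statement expressions are still non-standard; typeof only entered ISO C with C23.

                                                                                                                                                                                        /* gnu_ext.c - builds with: gcc -c gnu_ext.c (or clang -c gnu_ext.c) */

                                                                                                                                                                                        /* A classic "safe" MAX macro that evaluates each argument only once.
                                                                                                                                                                                           __typeof__ is the GNU typeof extension; the ({ ... }) block is a
                                                                                                                                                                                           statement expression whose last expression becomes its value. */
                                                                                                                                                                                        #define MAX(a, b)               \
                                                                                                                                                                                            ({                          \
                                                                                                                                                                                                __typeof__(a) _a = (a); \
                                                                                                                                                                                                __typeof__(b) _b = (b); \
                                                                                                                                                                                                _a > _b ? _a : _b;      \
                                                                                                                                                                                            })

                                                                                                                                                                                        int biggest(int x, int y)
                                                                                                                                                                                        {
                                                                                                                                                                                            return MAX(x, y);
                                                                                                                                                                                        }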

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        But did those extensions ever reach the standard? I ask candidly, as I don’t know that much about the evolution of C, its compilers, and the standard.

                                                                                                                                                                                        1. 4

                                                                                                                                                                                          There’s a list by GNU that describes the extensions. I really hate that you can’t enable a warning flag (a hypothetical -Wextensions, say) that warns you about every use of a GNU extension.

                                                                                                                                                                                          Still, it is not as bad as bashisms (i.e. extensions in GNU bash over Posix sh): many scripts declare a /bin/sh shebang at the top but are full of bashisms (things like [[ … ]] or arrays), because their authors incidentally have bash as the default shell. Most bashisms are just stupid, many people don’t know they are using them, and there’s no flag to enable warnings about them. Another bad offender is the set of GNU extensions to the Posix core utilities, especially GNU make, where 99% of all makefiles are actually GNU-only and don’t work with Posix make.

                                                                                                                                                                                          In general, this is one major reason I dislike GNU: they see themselves as the one and only choice for software (demanding that people call Linux “GNU/Linux”) while introducing tons of extensions that chain their users to their ecosystem.

                                                                                                                                                                                          1. 2

                                                                                                                                                                                            Here are some of the GNU C extensions that ended up in the C standard (see the sketch after the list):

                                                                                                                                                                                            • // comments
                                                                                                                                                                                            • inline functions
                                                                                                                                                                                            • Variable length arrays
                                                                                                                                                                                            • Hex floats
                                                                                                                                                                                            • Variadic macros
                                                                                                                                                                                            • alignof
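
                                                                                                                                                                                            To make the list concrete, here is a small sketch of my own (not from the comment) that uses the now-standard C99/C11 forms of all six:

                                                                                                                                                                                              /* std_ext.c - builds with: cc -std=c11 std_ext.c */
                                                                                                                                                                                              #include <stdalign.h> /* alignof macro (C11) */
                                                                                                                                                                                              #include <stdio.h>

                                                                                                                                                                                              #define LOG(fmt, ...) printf(fmt "\n", __VA_ARGS__) /* variadic macro (C99) */

                                                                                                                                                                                              static inline double half(double x) { return x / 2.0; } /* inline (C99) */

                                                                                                                                                                                              int main(void)
                                                                                                                                                                                              {
                                                                                                                                                                                                  // line comments (C99)
                                                                                                                                                                                                  double quarter = 0x1.0p-2; /* hex float: 1.0 * 2^-2 = 0.25 (C99) */
                                                                                                                                                                                                  int n = 3;
                                                                                                                                                                                                  int vla[n]; /* variable length array (C99) */
                                                                                                                                                                                                  for (int i = 0; i < n; i++)
                                                                                                                                                                                                      vla[i] = i;
                                                                                                                                                                                                  LOG("alignof(double)=%zu half(%g)=%g vla[2]=%d",
                                                                                                                                                                                                      alignof(double), quarter, half(quarter), vla[2]);
                                                                                                                                                                                                  return 0;
                                                                                                                                                                                              }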
                                                                                                                                                                                        2. 1

                                                                                                                                                                                          If I remember correctly, 10 years ago Hugs was still working, and maybe even nhc :)

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            Yep :) and yhc never landed after forking nhc, while UHC and JHC seem dead. My main point is that the existence of a standard does not guarantee a multiplication of implementations, or portability across compilers/interpreters/JITs/etc.; that is a simplification, and it really depends on the community around the language. Look at Common Lisp, with a set-in-stone standard and a lot of compilers, where you can easily pinpoint what is going to work and what is not. Or Scheme, with a fairly simple standard, where you quickly lose the ability to swap between interpreters if you rely on implementation-specific features.

                                                                                                                                                                                            After that, everyone has their own checklist of what a programming language must or must not provide for them to learn and use it.