1. 74
  1. 76

    Imagine you lived in a country with a strange tradition where, once a year, children go to strangers’ homes and are given candy. Then someone, in order to study the safety of this tradition, decides to give out candies laced with a mild and provably non-lethal toxin to children. This someone has a foolproof plan to inform the children’s parents before anyone gets hurt. Not all parents test candies for toxins, but enough do – since things like this can happen, and parents in this country take safety reasonably seriously. One parent detected the toxin in the children’s candies. All the parents were informed and said candies were thrown out. No harm, no foul?

    Imagine you lived in a country where no neighbors could be trusted. Imagine you worked in a low-trust environment. Imagine stopping the OSS model because none of the contributors can be trusted.

    That’s not the kind of world we want to operate in.

    1. 32

      I think this sums up why I felt a bit sick about that whole story. It undermines the community and is essentially antisocial behaviour disguised as research. Surely they could have found a way to prove their point in a more considerate way.

      1. 8

        Surely they could have found a way to prove their point in a more considerate way.

        Could you propose some alternative approaches? As the saying goes, POC || GTFO, so I suppose the best way to prove something’s vulnerability is a harmless attack against it.

        The kernel community appears to assume good faith in every patch they receive from random people across the Internet, and this time they get mad when the researchers from UMN prove this wishful assumption to be false. On the other hand, cURL goes to great lengths to prevent the injection of backdoors. The kernel is clearly more fundamental than any userland utilities, so either the cURL developers are unnecessarily cautious against supply chain attacks, or the kernel hackers are overly credulous.

        1. 16

          Another possible approach is to ask the lead maintainers if you can perform such an experiment. Linux has a large hierarchy and I think the top level maintainers pull huge patch sets as a bundle.

          If they had permission to use an unrelated e-mail address then it could be pretty much as good. Honestly, I would think a umn.edu address would give more credence to a patch, since it seems like it’s from someone at a reputable institution.

          Of course they might not agree, in which case you don’t have consent to do the research.

          1. 18

            This. You ask for permission. Talk to the kernel maintainers, explain your research and your methods, and ask if they want to participate. You can do things like promise a maximum number of bogus patches and a timeframe where they may occur, so people know they won’t get deluged with crap for the rest of time. You could even make a list of email addresses the patches will come from ahead of time and hand it to someone trustworthy involved in the kernel project who won’t be reviewing those patches directly, so once the experiment is over they can easily revert all the bad patches even if the researcher is hit by a bus in the meantime. It’s not that hard to conduct this sort of research ethically; these researchers just didn’t do it.

            1. 6

              That’s a fair point, but I want to point out that the non-lead reviewers would still unknowingly participate in the research, so that’s still not super ethical to them. Doing so merely shifts the moral pressure to the lead maintainers, who need to decide whether or not to “deceive” the rest of the community.

              But yeah, only lead reviewers can revert commits and have enough influence in the tech world, so getting their permission is probably good enough.

              1. 6

                A top comment in a cousin thread on HN suggests that, with proper procedure, AFAIU all reviewers could actually be informed. The trick seems to be to then wait long enough (e.g. weeks or more) and send the patches from diverse emails (collaborating with some submitters outside your university). There should also be some agreed-upon way of retracting the patches. The comment claims that this is how it’s done in industry, for pen testing or other “wargames”.

            2. 5

              In the subsystems that I’ve contributed to, I imagine that it would be possible to ask a maintainer for code review on a patchset, with phrasing like, “I am not suggesting that this be merged, but I will probably ask you to consider merging it in the future.” After the code review is given, then the deception can be revealed, along with a reiterated request to not merge the patches.

              This is still rude, though. I don’t know whether it’s possible to single-blind this sort of study against the software maintainers without being rudely deceptive.

              1. 2

                I think you could ask them if you can anonymously submit some patches sometime over the next few months and detail how some of them will contain errors that you will reveal before merging.

                They might say no, but if they say yes it’s a reasonably blind test, because the maintainer still won’t know which patches are part of the experiment and which are not.

                Another way to do it would be to present the study to participants under a misleading description, but do it in private and with compensation so that participants are not harmed. Say you just want to record day-in-the-life stuff or whatever, and present them with some patches.

                Finally, you could look at patches historically and re-review them. Some existing patches will have been malicious or buggy and you can see if a more detailed review catches things that were missed.

          2. 17

            This research was clearly unethical, but it did make it plain that the OSS development model is vulnerable to bad-faith commits. I no longer feel what was probably a false sense of security, running Linux. It now seems likely that Linux has some devastating back doors, inserted by people with more on their minds than their publication records.

            1. 15

              This is something every engineer, and every human, needs to be aware of at some point. Of course, given enough effort, you can fool another human into doing something wrong. You can send anthrax spores via mail, you can fool drivers into driving off a cliff with carefully planted road signs, you can fool a maintainer into accepting a patch with a backdoor. The reason it doesn’t happen all the time is that most people are not in fact dangerous sociopaths who have no problem causing real harm just to prove their point (whatever that is).

              The only societal mechanism we have for rare incidents such as this one is that they usually get uncovered eventually, either by overzealous reviewers or by having caused some amount of harm. That we’re even reading about patches being reverted is a sign that this imperfect mechanism has in fact worked in this case.

            2. 2

              This country’s tradition is insanely dangerous. The very fact that some parents already tested candy is evidence that there were attempts to poison children in the past — and we don’t know how many of these attempts actually succeeded.

              So, if we assumed that the public outcry from this event led to all parents testing all the candy, or to changing the tradition altogether, then doing something like this would result in more overall good than evil.

              1. 10

                Meanwhile in real life, poisoned Hallowe’en candy is merely an urban legend: According to Snopes, “Police have never documented actual cases of people randomly distributing poisoned goodies to children on Halloween.”

                The very fact that some parents already tested candy is evidence that there were attempts to poison children in the past

                Not really. Again in the real world, hospitals run candy testing services in response to people’s fears, not actual risks. From the same Snopes article: “Of several contacted, only Maryland Hospital Center reported discovering what seemed to be a real threat — a needle detected by X-ray in a candy bar in 1988. … In the ten years the National Confectioners Association has run its Halloween Hot Line, the group has yet to verify an instance of tampering”.

            3. 38

              I’ve only been following the situation, but, while this is a reasonable act of contrition, I still want to see what UMN reports officially, because there’s clearly something up with their ethics policies if this slipped under the radar.

              “We made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission”

              That quote is exactly the root of it. You knew better, and you knew this wouldn’t fly. So you did it anyway. This seems pretty ‘reckless’, as per the legal term.

              1. 19

                Of course they could ask permission. They could have written a letter to Greg K-H:

                “Here are our credentials, here is what we propose to do, here is how to identify commits that should not be merged; we will keep track of them here and remind you at the end of each merge window; we will review progress with you on this data and this date; the project will end on this date.

                Assuming you are generally agreeable, what have we overlooked and what changes should we make in this protocol?”

                Doing that would make the professor and his grad students white-hats.

                Not doing that makes them grey-hats at best, and in practice black hats because they didn’t actually clean up at the end.

                People who have betrayed basic ethics: the professor, his grad students, the IRB that accepted that this wouldn’t affect humans, and the IEEE people who reviewed and accepted the first paper.

                1. 9

                  I’m not going to speak too much out of place, but academia does have an honesty and integrity problem, in addition to an arrogance one. There is much debate these days within the philosophy of science about how our metric-fuelled obsession can be corrected.

                  1. 7

                    we knew we could not ask the maintainers of Linux for permission

                    Their story continues to change. In the paper, they clearly state that they thought their research didn’t involve human subjects and that they therefore didn’t have to obtain consent from their participants. Then the story was that they asked for IRB review and the work was deemed exempt from IRB guidelines, and now they are apologizing for not obtaining consent and saying that they knew asking for consent would have put their research at risk? Informed consent is one of the cornerstones of modern (as in, since the 60s) ethical research guidelines.

                    There are ways around informed consent (and deceptive research practices, see here for an example) but those require strict IRB supervision and the research team needs to show that the benefits to participants and/or the community outweigh the risks. Plus, participants need to be informed about the research project after their participation and they still need to consent to participate in it.

                    In before “they didn’t know”: all federal funding agencies (including the NSF, which is funding this work) require all PIs and research personnel (e.g., research assistants) to complete training on IRB guidelines every 3 years, and all students are required to complete a 12-hour course on human subject research (at least at my institution; I imagine that UMN has similar guidelines).

                    1. 5

                      But they are only students. It’s the people who supervised them who are to blame. I hope for a positive outcome for all parties.

                      1. 24

                        PhD students are functioning adults, akin to employees, not K-12 students (juveniles). Their supervisor and university deserve a share of the blame, but so do they. If they had been employees, their company and manager would deserve a share of the blame, and so would they.

                        1. 16

                          FWIW, the author of the email is the professor running the study and supervising the students who pushed patches out.

                          I too am waiting to see if UMN does anything, either at the department level or higher.

                          1. 3

                            They applied for (and received) grants. They’re scholars, not students.

                          2. [Comment removed by author]

                            1. 3

                              Exactly so. I saw this, and I want to see how the department resolves things; it’s (probably) not the end of it, but what this inquiry discovers will be significant.

                          3. 24

                            I am very familiar with this smell from my $megacorp days and it smells of something brown and sticky and I don’t mean a stick.

                            First of all, this, and the whole letter, attempts to present a justification for why the experiment was carried out in this manner in terms of “this is the only way in which it could have been efficiently carried out”. Even if that were the case, and there really is no other way to study the efficiency of a review process with 100% accuracy (which is extremely doubtful IMHO), that is no justification for carrying it out in the first place. Some experiments simply cannot be carried out in an ethical manner. Tough luck, but there are always going to be other things to write papers about.

                            But the justification for not kickstarting any kind of discussion and not getting consent is pretty dubious:

                            we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.

                            Deliberately sending buggy patches is not the only way to study the review process. You can do an “offline” study and look at regressions, for example, where the harm is already done. Or you can carry out the study “in vitro”, where consenting maintainers review different patch streams, some of which do include “hypocrite commits”. They will be on the lookout for bad patches but you will have a baseline based on the false positive figures from the maintainers who only got good patches. Either the false positive figures will be good enough to draw some conclusions about the efficiency of the review process, or they will indeed show that knowledge about the possible presence of bad patches influences the review process to the point where the experiment is useless. But in that case you at least have a plausible justification for a “blinder” study, and you can discuss better methods with the community.

                            IMHO, given the large volume of patches in some subsystems, and the fact that maintainership is hierarchical to some degree, I also suspect that it’s possible to co-opt the maintainers into this experiment, if you do it over a long enough timeframe. But suppose this really isn’t the case – this argument would’ve seemed plausible if other avenues had been investigated, but this apology letter doesn’t mention any such investigation.

                            It’s also not just about the ethics of this particular experiment; and it’s not just about the ethics at all, but about the results, too.

                            Linux sees a lot of participation from so many different parties with so many conflicting interests (to cite just one example, many countries have various bans in place on Huawei’s telecom gear, but are otherwise using telecom gear that runs Linux, which receives plenty of patches from Huawei, some of them quite, uh, controversial). So research into these processes is definitely relevant.

                            But without scrutiny and without good barriers in place, all you get is half-assed, under-the-radar attempts like this one, which are not only going to get a big backlash from the community, but are also not going to give very relevant results, because you can’t get the funding and set up a relevant, sufficiently large experiment while keeping it quiet. At least not in a public setting, where you have to publish or perish.

                            1. 6

                              An IRB that evaluates studies using human subjects will sometimes consider requests for use of deception or lack of informed consent. See, for example, this IRB’s guidelines on research involving deception. However, it’s quite difficult to get approval for use of deception or waiver of informed consent, and the researcher would have to make a very good case that either the harm to the subjects is minimal or that the research couldn’t be carried out otherwise. A good IRB would question whether alternative solutions, like the one you describe, could be feasible and still accomplish the goals of the research project. In the “hypocrite commits” study, as I understand it, they didn’t even seek IRB approval.

                              1. 2

                                I’m not too familiar with federal legislation (I’m, uh, across the pond) but IIRC consent is required even in research that involves deceptive methodology. You’re still required to get consent, and while it’s not required, it’s considered good practice to include a “some details to follow later” note on the consent form.

                                I am fairly sure (but see the note about being across the pond above…) that this study wouldn’t have gone past the IRB in its current form, because one of the key requirements for approval of deceptive methodology is proof that the study cannot be carried out without the waiver. It’s possible that the experiment could not have been carried out otherwise, but you’re required to show that. The paper doesn’t even discuss this aspect – it looks like they didn’t even bother to seriously consider it.

                                1. 1

                                  A waiver or alteration of informed consent is possible in the US, you can see the rules at (f)(3) here. Among the requirements are that the research involves no more than minimal risk and that it can’t be carried out without the waiver.

                              2. 3

                                Even if that were the case, and there really is no other way to study the efficiency of a review process with 100% accuracy

                                Which isn’t even a requirement! Your research can be 95% accurate, and you can still get results from it and come up with new things to research based on that. Instead, they made it all-or-nothing. In which case, you’re right, the choice should have been nothing, not all.

                                1. 2

                                  You can do an “offline” study and look at regressions, for example, where the harm is already done.

                                  You can run a small retrospective observational study like this first, but you’re going to need to run a controlled experiment to have more confidence that the results aren’t just statistical flukes and biases. Experimenting on humans without consent is obviously really bad. The arrogant and tone-deaf response from the PI is just something else.

                                  1. 3

                                    You can run a small retrospective observational study like this first, but you’re going to need to run a controlled experiment to have more confidence that the results aren’t just statistical flukes and biases.

                                    Of course, but it’s a lot easier to get community support if you lead with “this is what we tried, and this is what we’d like to try in order to get better results” rather than “lol hello lab rat” and end with a letter of apology that can be summed up as sorry we got caught.

                                    I don’t think the Linux kernel development community lacks awareness of just how finicky the review process is. It would’ve been very easy to get them on board with a study conducted in good faith. The fact that these guys didn’t even try isn’t even disappointing, it’s maddening.

                                2. 12

                                  This was so brazen, I’m almost tempted to wonder if there was a serious attempt to slip a real vulnerability in for malicious purposes and this was the escape hatch.

                                  1. 11

                                    Presumably the work will continue under personal email accounts.

                                    There was an interesting discussion previously about this, but a key question is: Were these commits accepted with less review because they came from an academic account?

                                    1. 8

                                      Presumably the work will continue under personal email accounts.

                                      It was already done under “personal email accounts”.

                                      https://lore.kernel.org/linux-nfs/821177ec-dba0-e411-3818-546225511a00@grundis.de/

                                      1. 3

                                        Nope, some got caught and some didn’t. And since the institution allowed this, they are now, rightfully so, being made an example of.

                                        They got caught too many times and that triggered people to ban them.

                                      2. 10

                                        I’m on the side of the people who consider this a misdirected overreaction out of spite. I’d like to see a mass revert… of the actual bogus commits. The would-be researchers state in their methodology description that they consciously chose to submit their patches under non-UMN email addresses, so using @umn.edu as a revert criterion has a 100% miss rate for identifying bogus commits, and using no other criterion means a 0% inclusion rate. So I don’t know if this is a case of “something must be done; this is something” or something else, but I can’t find a way to frame it as a rational reaction.

                                        (Of the would-be researchers and what they did, I won’t even speak.)

                                        1. 12

                                          I think blatant violations of ethics in research deserve to be dealt with harshly, and making it the university’s problem is absolutely the right move, because it is the university’s job to police their researchers. Reputation is a very hard currency for universities: it drives grants, it drives student applications, it drives where prestigious researchers want to work. Making a big dramatic stink about this threatens UMN’s reputation in the computer science field, which helps ensure the problem gets taken seriously.

                                          1. 3

                                            Greg is what we call a high-performer. Publicly shaming a university for their “research” was probably a couple hours of work at most for him. And if you can do it, why not? It will likely prevent future “experimentation” from other “research groups dedicated to improving the linux kernel”.

                                          2. 17

                                            The NSA has been working on injecting backdoors, in all kinds of software, for years. Just sayin’

                                            1. 5

                                              [citation needed]? I wouldn’t be surprised if they were and I think it’s a good idea to assume that they are, but I’m not aware of many actual public examples, with the possible exception of some RSA software (see DUAL_EC_DRBG).

                                              Edit: maybe also Skype, IIRC?

                                              1. 21

                                                Lotus Notes

                                                In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a “workload reduction factor” for the NSA.

                                                And don’t forget CLIPPER, which was explicitly built around key escrow.

                                            2. 9

                                              The tone of the letter makes it sound like a typical PR apology written by people not because they think what they did was wrong but because they had to do it due to the public opinion. Reads like a typical apology letter from a large corporation.

                                              1. 6

                                                Response from Greg KH: https://lore.kernel.org/lkml/YIV+pLR0nt94q0xQ@kroah.com/

                                                Thank you for your response.

                                                As you know, the Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your University outlining the specific actions which need to happen in order for your group, and your University, to be able to work to regain the trust of the Linux kernel community.

                                                Until those actions are taken, we do not have anything further to discuss about this issue.

                                                1. 9

                                                  I’ve been somewhat jokingly telling people that software is a gigantic duct-taped together Jenga tower about to fall down, and this whole situation made me even more sick to my stomach. It’s already hard to build good software, and it’s one thing for people to have architectural or style differences, but intentionally submitting vulnerabilities under the guise of research? What’s next? Purposely making vulnerable libraries for package managers like Cargo, npm or pip for “research”?

                                                  1. 9

                                                    Yeah, it is scary that a university was able to make malicious commits. If they can do it, think of how easy it might be for a well funded organization like the NSA to do something similar.

                                                    Although this research was immoral, it did succeed in highlighting how much we take security for granted.

                                                    1. 11

                                                      Purposely making vulnerable libraries for package managers like Cargo, npm or pip for “research”

                                                      It’s called “move fast and break things” and I will have you know it’s a proven software development technique that builds extraordinary value for the shareholders, not mere “research”!

                                                      1. 3

                                                        I think I worded that statement wrong. My point is that someone might try purposely putting vulnerabilities into existing packages for “research”, assuming that hasn’t already happened.

                                                        1. 5

                                                          No, no, you worded that statement correctly, it’s my attempt at sarcasm that misfired :-).

                                                          1. 2

                                                            Well, yes. You can assume that attackers will do this. It’s called a supply chain attack.

                                                      2. 5

                                                        Everyone seems so alarmed about this that it totally confuses me: am I the only one who thinks this changes nothing?

                                                        Some of the malicious commits were caught by the reviewers; the code is there in the open for people to find out exactly who got certain bad code into the repository and how, and to revert precisely the bad code and nothing else.

                                                        This is the power of OSS, not its weakness! Bad actors are always, and I say always, going to be there, and we should trust nobody 100%. This is why transparency, not hiding the development, is the key to trust – hiding it creates bad AND undetectable code!

                                                        1. 1

                                                          It’s because they’re performing research on people who have not consented to be research subjects, while also submitting patches in bad faith. This isn’t a case of a research group that turned out to be bad at kernel programming and submitted their bad patches in good faith.

                                                          1. 2

                                                            Yeah sure but the main point is that they introduced bugs and told absolutely nobody in the project about it.

                                                          1. 4

                                                            I’m not familiar with the development of the Linux kernel, but shouldn’t all commits be reviewed by a human contributor before entering the source tree? I mean, if the culprits from UMN hadn’t published that paper, would these invalid commits have gone unnoticed for good? In that case, any malicious user could sign up for an email account and inject garbage or even a backdoor into the kernel, which sounds like a big problem in the review process.

                                                            1. 17

                                                              I’m a former kernel hacker. Some malicious commits were found by human review. Humans are not perfect at finding bugs.

                                                              As I understand it, the vast majority of kernel memory bugs are found by automated testing techniques. This isn’t going to change as long as the kernel is written mostly without automatic memory safety.

                                                              1. 6

                                                                Thanks for the input, but I was not talking about detecting bugs in kernel code written with good faith. What surprises me is that the kernel maintainers seem to assume every patch to be helpful, and merge them without going through much human review. The result is dozens of low-effort garbage patches easily sneaked into the kernel (until the paper’s acceptance into some conference caught attention). Software engineers typically don’t trust user input, and a component as fundamental as the kernel deserves even more caution, so the kernel community’s review process sounds a little sloppy to me :/

                                                                1. 5

                                                                  the kernel maintainers seem to assume every patch to be helpful, and merge them without going through much human review.

                                                                  You seem here to be assuming up-front the conclusion you want to draw.

                                                                  1. 1

                                                                    If the patches were carefully reviewed by some human on submission, why didn’t the reviewer reject them? Well, maybe there are some ad-hoc human reviews, just not effective enough. These bogus commits went unnoticed until the publication of the paper, so it’s not like the kernel community is able to reject useless/harmful contributions by itself.

                                                                    1. 3

                                                                      If the patches were carefully reviewed by some human on submission, why didn’t the reviewer reject them?

                                                                      Because there are any number of factors which explain why a bug might not be caught, especially in a language which has an infamous reputation for making it easy to write, and hard to catch, memory-safety bugs. Assuming one and only one factor as the only possible explanation is not a good practice.

                                                                      1. 1

                                                                        As others have said, review is not easy. This is especially true when done under time pressure, as is essentially always the case in FOSS development. Pre-merge code review is a first line of defense against bad commits; no one expects it to catch everything.

                                                              2. 5

                                                                A reminder that the term “open source” was coined by the “anarcho”-capitalist Eric Steven Raymond to replace “free software”, because the term was – and is – unpalatable to capitalists. I made a deliberate choice to stop saying “open source” a year or two ago, because calling it “open source” is an invitation to exploitation.

                                                                First, we had conglomerates strip-mining it like yet another resource. And now, we have unethical researchers experimenting on random unsuspecting volunteers without even a fig leaf of consent. Enough is enough.

                                                                1. 8

                                                                  This is only a bad thing if you hold the political position that anarcho-capitalism, or capitalism broadly speaking, is bad. If you don’t hold this political position, then this is not a reason to avoid using the term “open source”. ESR himself has written on several occasions about how and why the term “open source” was coined and initially popularized by him and several colleagues - he describes the decision as one of “marketing” and “branding”, that is, part of the art of convincing large numbers of other humans to do something, which is a pre-capitalist concept.

                                                                  In any case, public universities are not capitalist institutions - they are funded by the state out of taxpayer funds based on the political decision that running university-like institutions is broadly good for citizens. Academic researchers working for such institutions, whether they act ethically or unethically with respect to any ethical system, are not acting as capitalists while doing their research.

                                                                  Bringing up capitalism with respect to this event actually strikes me as particularly irrelevant - the core issue here is that some security researchers tried to submit malicious patches to a particular important open-source project, in order to test how good the open-source development process of the Linux kernel was at detecting attempts to submit malicious code. People might try to get malicious code into the Linux kernel for any number of reasons - because they are capitalists who wish to use malicious code in the kernel to harm other capitalists using Linux; or harm the idea of free software; or because they are government agents or NGOs or informal political actors trying to accomplish some act of sabotage; or perhaps because they have a personal grudge against another kernel maintainer. I personally was amused by the idea that this incident was itself a “bona fide” attempt to put malicious code into the kernel, and claiming to be computer security researchers doing an unauthorized pentest was itself a cover story.

                                                                  1. 5

                                                                    The relevance of capitalism to this particular case is indirect rather than direct. My point is that rebranding to open-source deemphasized other values, like community or user freedom.

                                                                    And the end result has been exploitation, be it from companies looking to strip-mine a software commons, HR departments treating github as a CV, unethical researchers treating Linux developers as lab rats, and so on. What these things all have in common is exploitation, rather than capitalism per se. My point is that the rebranding and the exploitation are linked.

                                                                    1. 5

                                                                      They’re really not linked though - nothing about this incident would’ve worked any differently if people called the Linux kernel “free software” rather than “open source software”. In fact, these terms are basically identical in meaning and people do routinely use both terms to talk about the Linux kernel all the time. For that matter, researchers performing an unauthorized social engineering attack on the Linux kernel maintainers is a failure mode that really has little to do with the other software-related social phenomena you mention, whose status as problems is dependent on the political ideology of the person judging them.

                                                                  2. 7

                                                                    And a reminder that ESR did not “coin” the term “open source” (Christine Peterson did), and that the God-Emperor of Free Software – Richard Stallman – is a self-described capitalist who encourages software developers being able to profit from their work.

                                                                    1. 2

                                                                      One could certainly be forgiven for concluding that Raymond invented it after reading this and similar. There is no mention of Christine Peterson on that page.

                                                                      and that the God-Emperor of Free Software – Richard Stallman – is a self-described capitalist

                                                                      No gods, no masters! At the end of the day, I don’t give a rusted counterfeit nickel about what RMS or a lot of other people think; I make up my own mind.

                                                                  3. [Comment removed by moderator pushcx: We've established pretty well at this point that bringing up Nazis never improves a conversation.]

                                                                    1. 1

                                                                      A proper apology admitting to their wrong doings. Refreshing to see in this day and age.

                                                                      1. 1

                                                                        Are hypocrite commits flying on Mars?

                                                                        I was going to say that the worst thing was not disclosing the problems with their patches soon enough to prevent them from going anywhere. Doing what you can to prevent or undo harm is pretty fundamental – more important than asking for permission, I would say, regardless of whether what you are doing is research. But according to their own apology letter, they did:

                                                                        The three incorrect patches were discussed and stopped during exchanges
                                                                        in a Linux message board, and never committed to the code. We reported
                                                                        the findings and our conclusions (excluding the incorrect patches) of
                                                                        the work to the Linux community before paper submission
                                                                        

                                                                        Of course, there could be more to it, for all we know, and it seems the kernel community didn’t feel too safe about it, since they felt it best to distrust the whole university (both in terms of commit reversion and banning of future commits). Was this more than just a communication problem and wasted time?