1. 196
  1. 51

    The paper has this to say (page 9):

    Regarding potential human research concerns. This experiment studies issues with the patching process instead of individual behaviors, and we do not collect any personal information. We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

    [..]

    Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

    I’m not familiar with the generally accepted standards on these kinds of things, but this sounds rather iffy to me. I’m very far removed from academia, but I’ve participated in a few studies over the years, which were always just questionnaires or interviews, and even for those I had to sign a consent waiver. “It’s not human research because we don’t collect personal information” seems a bit strange.

    Especially since the wording “we will have to report this, AGAIN, to your university” implies that this isn’t the first time this has happened, and that the kernel folks have explicitly objected to being subject to this research before this patch.

    And trying to pass off these patches as being done in good faith with words like “slander” is an even worse look.

    1. 79

      They are experimenting on humans, involving these people in their research without notice or consent. As someone who is familiar with the generally accepted standards on these kinds of things, it’s pretty clear-cut abuse.

      1. 18

        I would agree. Consent is absolutely essential but just one of many ethical concerns when doing research. I’ve seen simple usability studies be rejected due to lesser issues.

        It’s pretty clear this is abuse. The kernel team and maintainers feel strongly enough to ban the whole institution.

        1. 10

          Yeah, agreed. My guess is they misrepresented the research to the IRB.

          1. 3

            They are experimenting on humans

            This project claims to be targeted at the open-source review process, and seems to be as close to human experimentation as pentesting (which, when you do social engineering, also involves interacting with humans, often without their notice or consent) - which I’ve never heard anyone claim is “human experimentation”.

            1. 19

              A normal penetration testing gig is not academic research though. You need to separate between the two, and also hold one of them to a higher standard.

              1. 0

                A normal penetration testing gig is not academic research though. You need to separate between the two, and also hold one of them to a higher standard.

                This statement is so vague as to be almost meaningless. In what relevant ways is a professional penetration testing contract (or, more relevantly, the associated process) different from this particular research project? Which of the two should be held to a higher standard? Why? What does “held to a higher standard” even mean?

                Moreover, that claim doesn’t actually have anything to do with the comment I was replying to, which was claiming that this project was “experimenting on humans”. It doesn’t matter whether or not something is “research” or “industry” for the purposes of whether or not it’s “human experimentation” - either it is, or it isn’t.

                1. 18

                  Resident pentester and ex-academia sysadmin checking in. I totally agree with @Foxboron and their statement is not vague nor meaningless. Generally in a penetration test I am following basic NIST 800-115 guidance for scoping and target selection, and then supplementing it with contractual expectations from my clients. I can absolutely tell you that the methodologies that are used by academia should be held to a higher standard in pretty much every regard I could possibly come up with. A penetration test does not create a custom methodology attempting to deal with outputting scientific and repeatable data.

                  Let’s put it in real terms: I am hired to do a security assessment of a very fixed, highly focused set of targets explicitly defined in contract by my client, on an extremely fixed timeline (often very short… like 2 weeks maximum and a 5 day average). Guess what happens if social engineering is not in my contract? I don’t do it.

                  1. 2

                    Resident pentester and ex-academia sysadmin checking in.

                    Note: this is worded like an appeal to authority, although you probably don’t mean it that way, so I’m not going to treat it as one.

                    I totally agree with @Foxboron and their statement is not vague nor meaningless.

                    Those are two completely separate things, and neither is implied by the other.

                    their statement is not vague nor meaningless.

                    Not true - their statement contained none of the information you just provided, nor any other sort of concrete or actionable information. The statement “hold to a higher standard” is both vague and meaningless by itself… and by itself is how it stood in that comment (there were other words, obviously, but none of them relevant) - there was no other information.

                    the methodologies that are used by academia should be held to a higher standard

                    Now you’re mixing definitions of “higher standard” - GP and I were talking about human experimentation and ethics, while you seem to be discussing the rigor and reproducibility of experiments (although it’s not clear, because “A penetration test does not create a custom methodology attempting to deal with outputting scientific and repeatable data” is slightly ambiguous).

                    None of the above is relevant to the question of “was this a human experiment”, or the closely-related one, “is penetration testing a human experiment”. Evidence suggests “no”: the term does not appear in that document, I have never heard of a pentest being reviewed by an ethics review board, I have never heard “human experimentation” mentioned in the security community (including when gray-hat and black-hat hackers and the associated social engineering, e.g. Kevin Mitnick, come up), and other processes far closer to actually experimenting on people (e.g. A/B testing) have never been considered such - up until this specific case.

                  2. 5

                    if you’re an employee in an industry, you’re either informed of penetration testing activity, or you’ve at the very least tacitly agreed to it along with many other things that exist in employee handbooks as a condition of your employment.

                    if a company did this to their employees without any warning, they’d be shitty too, but the possibility that this kind of underhanded behavior in research could taint the results and render the whole exercise unscientific is nonzero.

                    either way, the goals are different. research seeks to further the verifiability and credibility of information. industry seeks to maximize profit. their priorities are fundamentally different.

                    1. 1

                      you’ve at the very least tacitly agreed to it along with many other things that exist in employee handbooks as a condition of your employment

                      By this logic, you’ve also agreed to everything else in a massive, hundred-page-long EULA that you click “I agree” on, consented to be tracked by continuing to use a site that says so in a banner at the bottom, and consented to Google/companies using your data for whatever they want and/or selling it to whoever will buy.

                      …and that’s ignoring whether or not companies that have pentesting done on them actually explicitly include that specific warning in your contract - “implicit” is not good enough, as then anyone can claim that, as a Linux kernel patch reviewer, you’re “implicitly agreeing that you may be exposed to the risk of social engineering for the purpose of getting bad code into the kernel”.

                      the possibility that this kind of underhanded behavior in research could taint the results and render the whole exercise unscientific

                      Like others, you’re mixing up the issue of whether the experiment was properly-designed with the issue of whether it was human experimentation. I’m not making any attempt to argue the former (because I know very little about how to do good science aside from “double-blind experiments yes, p-hacking no”), so I don’t know why you’re arguing against it in a reply to me.

                      either way, the goals are different. research seeks to further the verifiability and credibility of information. industry seeks to maximize profit. their priorities are fundamentally different.

                      I completely agree that the goals are different - but again, that’s irrelevant for determining whether or not something is “human experimentation”. Doesn’t matter what the motive is, experimenting on humans is experimenting on humans.

                2. 18

                  This project claims to be targeted at the open-source review process, and seems to be as close to human experimentation as pentesting (which, when you do social engineering, also involves interacting with humans, often without their notice or consent) - which I’ve never heard anyone claim is “human experimentation”.

                  I had a former colleague who once bragged about getting someone fired at his previous job during a pentesting exercise. He basically walked over to this frustrated employee at a bar, bribed him with a ton of money and a job offer in return for plugging a USB key into the network. He then reported it to senior management and the employee was fired. While that is an effective demonstration of a vulnerability in their organization, what he did was unethical under many moral frameworks.

                  1. 2

                    First, the researchers didn’t engage in any behavior remotely like this.

                    Second, while indeed an example of pentesting, most pentesting is not like this.

                    Third, the fact that it was “unethical under many moral frameworks” is irrelevant to what I’m arguing, which is that the study was not “human experimentation”. You can steal money from someone, which is also “unethical under many moral frameworks”, and yet still not be doing “human experimentation”.

                  2. 3

                    If there is a pentest contract, then there is consent, because consent is one of the pillars of contract law.

                    1. 1

                      That’s not an argument that pentesting is human experimentation in the first place.

                3. 42

                  The statement from the UMinn IRB is in line with what I heard from the IRB at the University of Chicago after they experimented on me, who said:

                  I asked about their use of any interactions, or use of information about any individuals, and they indicated that they have not and do not use any of the data from such reporting exchanges other than tallying (just reports in aggregate of total right vs. number wrong for any answers received through the public reporting–they said that much of the time there is no response as it is a public reporting system with no expectation of response) as they are not interested in studying responses, they just want to see if their tool works and then also provide feedback that they hope is helpful to developers. We also discussed that they have some future studies planned to specifically study individuals themselves, rather than the factual workings of a tool, that have or will have formal review.

                  Because they claim they’re studying the tool, it’s apparently OK to secretly experiment on random strangers without disclosure. Somehow I doubt they test new drugs by secretly dosing people and observing their reactions, but UChicago’s IRB was 100% OK with doing so to programmers. I don’t think these IRBs literally consider programmers sub-human, but it would be very inconvenient to accept that experimenting on strangers is inappropriate, so they only recognize that in the areas where historical abuse has forced them to. I’d guess this will continue for years until some random person is very seriously harmed by being experimented on (loss of job/schooling, pushing someone unstable into self-harm, targeting someone famous outside of programming) and then over the next decade IRBs will start taking it seriously.

                  One other angle occurs to me: the experimenters and IRBs claim they’re not experimenting on their subjects. That’s obviously bullshit because the point of the experiment is to see how the people respond to the treatment, but if we accept the lie it leaves an open question: what is the role played by the unwitting subject? Our responses are tallied, quoted, and otherwise incorporated into the results in the papers. I’m not especially familiar with academic publishing norms, but perhaps this makes us unacknowledged co-authors. So maybe another route to stopping experimentation like this would be things like claiming copyright over the papers, asking journals for the papers to be retracted until we’re credited, or asking the universities to open academic misconduct investigations over the theft of our work. I really don’t have the spare attention for this, but if other subjects wanted to start the ball rolling I’d be happy to sign on.

                  1. 23

                    I can kind of see where they’re coming from. If I want to research whether car mechanics can reliably detect some fault, then sending a prepared car to 50 garages is probably okay, or at least a lot less iffy. This kind of (informal) research is actually fairly commonly done by consumer advocacy groups and the like. The difference is that the car mechanics will get paid for their work, whereas the Linux devs and you didn’t.

                    I’m gonna guess the IRBs probably aren’t too familiar with the dynamics here, although the researchers definitely were and should have known better.

                    1. 18

                      Here it’s more like keying someone’s car to see how long it takes them to file an insurance claim.

                      1. 4

                        Am I misreading? I thought the MR was a patch designed to fix a potential problem, and the issue was

                        1. pushcx thought it wasn’t a good fix (making it a waste of time)
                        2. they didn’t disclose that it was an auto-generated PR.

                        Those are legitimate complaints, cf. https://blog.regehr.org/archives/2037, but from the analogies employed (drugs, dehumanization, car-keying), I have to double-check that I haven’t missed an aspect of the interaction that makes it worse than it seemed to me.

                        1. 2

                          We were talking about Linux devs/maintainers too, I commented on that part.

                          1. 1

                            Gotcha. I missed that “here” was meant to refer to the Linux case, not the Lobsters case from the thread.

                      2. 1

                        Though there they are paying the mechanic.

                      3. 18

                        IRB is a regulatory board that is there to make sure that researchers follow the [Common Rule](https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html).

                        In general, any work that receives federal funding needs to comply with the federal guidelines for human subject research. All work involving human subjects (usually defined as research activities that involve interaction with humans) needs to be reviewed and approved by the institution’s IRB. These approvals fall within a continuum, from a full IRB review (which involves the researcher going to a committee and explaining their work, and usually includes continued annual reviews) to a declaration of the work being exempt from IRB supervision (usually this happens when the work meets one of the 7 exemptions listed in the federal guidelines). The whole process is a little bit more involved; see for example [all the charts](https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/index.html) to figure this out.

                        These rules do not cover research that doesn’t involve humans, such as research on technology tools. I think that there is currently a grey area where a researcher can claim that they are studying a tool and not the people interacting with the tool. It’s a lame excuse that probably gets around the spirit of the regulations and is probably unethical from a research standpoint. Data aggregation or data anonymization is usually a requirement for exempt status, not for non-human research status.

                        The response that you received from the IRB is not surprising: they probably shouldn’t have approved the study as non-human research, and now they are just protecting the institution from further harm rather than protecting you as a human subject in the research (which, by the way, is not their goal at this point).

                        One thing that sticks out to me about your experience is that you weren’t asked to give consent to participate in the research. That usually requires a full IRB review as informed consent is a requirement for (most) human subject research. Exempt research still needs informed consent unless it’s secondary data analysis of existing data (which your specific example doesn’t seem to be).

                        One way to quickly fix it is to contact the grant officer that oversees the federal program that is funding the research. A nice email stating that you were coerced to participate in the research study by simply doing your work (i.e., review a patch submitted to a project that you lead) without being given the opportunity to provide prospective consent and without receiving compensation for your participation and that the research team/university is refusing to remove your data even after you contacted them because they claim that the research doesn’t involve human subjects can go a long way to force change and hit the researchers/university where they care the most.

                        1. 7

                          Thanks for explaining more of the context and norms, I appreciate the introduction. Do you know how to find the grant officer or funding program?

                          1. 7

                            It depends on how “stalky” you want to be.

                            If NSF was the funder, they have a public search here: https://nsf.gov/awardsearch/

                            Most PIs also add a line about grants received to their CVs. You should be able to match the grant title to the research project.

                            If they have published a paper from that work, it should probably include an award number.

                            Once you have the award number, you can search the funder website for it and you should find a page with the funding information that includes the program officer/manager contact information.
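
                            A rough sketch of that last step, assuming NSF award IDs are the usual seven-digit numbers and reusing the showAward URL format from the links further down this thread:

                            ```python
                            # Rough sketch: pull candidate seven-digit NSF award IDs out of a paper's
                            # acknowledgements text and print the corresponding award pages.
                            # The sample text is illustrative; the two IDs are the awards cited
                            # further down in this thread.
                            import re

                            acknowledgements = """
                            This work was supported in part by NSF awards CNS-1931208 and CNS-1815621.
                            """

                            for award_id in sorted(set(re.findall(r"\b\d{7}\b", acknowledgements))):
                                print(f"https://nsf.gov/awardsearch/showAward?AWD_ID={award_id}")
                            ```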

                            1. 3

                              If they published a paper about it they likely included the grant ID number in the acknowledgements.

                              1. 1

                                You might have more luck reaching out to the sponsored programs office at their university, as opposed to first trying to contact an NSF program officer.

                            2. 4

                              How about something like a Computer Science External Review Board? Open source projects could sign up and include a disclaimer that their project and community ban all research that hasn’t been approved. The approval process could be as simple as a GitHub issue the researcher has to open, and anyone in the community could review it.

                              It wouldn’t stop the really bad actors, but any IRB would have to explain why they allowed an experiment on subjects that explicitly refused consent.

                              [Edit] I felt sufficiently motivated, so I made a quick repo for the project. Suggestions welcome.

                              1. 7

                                I’m in favor of building our own review boards. It seems like an important step in our profession taking its responsibility seriously.

                                The single most important thing I’d say is, be sure to get the scope of the review right. I’ve looked into this before and one of the more important limitations on IRBs is that they aren’t allowed to consider the societal consequences of the research succeeding. They’re only allowed to consider harm to experimental subjects. My best guess is that it’s like that because that’s where activists in the 20th-century peace movement ran out of steam, but it’s a wild guess.

                                1. 4

                                  At least in security, there are a lot of different Hacker Codes of Ethics floating around, which pen testers are generally expected to adhere to… I don’t think any of them cover this specific scenario though.

                                  1. 2

                                    any so-called “hacker code of ethics” in use by any for-profit entity places protection of that entity first and foremost before any other ethical consideration (including human rights) and would likely not apply in a research scenario.

                              2. 23

                                They are bending the rules for non-human research. One of the exceptions for non-human research is research on organizations, which my IRB defines as “Information gathering about organizations, including information about operations, budgets, etc. from organizational spokespersons or data sources. Does not include identifiable private information about individual members, employees, or staff of the organization.” Within this exception, you can talk with people about how the organization merges patches but not how they personally do that (for example). All the questions need to be about the organization and not the individual as part of the organization.

                                On the other hand, research involving human subjects is defined as any research activity that involves an “individual who is or becomes a participant in research, either:

                                • As a recipient of a test article (drug, biologic, or device); or
                                • As a control.”

                                So, this is how I interpret what they did.

                                The researchers submitted an IRB application saying that they just downloaded the kernel maintainer mailing lists and analyzed the review process. This doesn’t meet the requirements for IRB supervision because it’s either (1) secondary data analysis using publicly available data or (2) research on organizational practices of the OSS community after all identifiable information is removed.

                                Once they started emailing the list with bogus patches (as the maintainers allege), the research involved human subjects as these people received a test article (in the form of an email) and the researchers interacted with them during the review process. The maintainers processing the patch did not do so to provide information about their organization’s processes and did so in their own personal capacity (In other words, they didn’t ask them how does the OSS community processes this patch but asked them to process a patch themselves). The participants should have given consent to participate in the research and the risks of participating in it should have been disclosed, especially given the fact that missing a security bug and agreeing to merge it could be detrimental to someone’s reputation and future employability (that is, this would qualify for more than minimal risk for participants, requiring a full IRB review of the research design and process) with minimal benefits to them personally or to the organization as a whole (as it seems from the maintainers’ reaction to a new patch submission).

                                One way to design this experiment ethically would have been to email the maintainers and invite them to participate in a “lab-based” patch review process where the research team would present them with “good” and “bad” patches and ask them whether they would have accepted them or not. This would be after they were informed about the study and exercised their right to informed consent. I really don’t see how emailing random stuff out and seeing how people interact with it (with their full name attached to it and in full view of their peers and employers) can qualify as research with less than minimal risk that doesn’t involve human subjects.

                                The other thing that rubs me the wrong way is that they sought (and supposedly received) retroactive IRB approval for this work. That wouldn’t fly with my IRB, as my IRB person would definitely rip me a new one for seeking retroactive IRB approval for work that is already done, data that was already collected, and a paper that is already written and submitted to a conference.

                                1. 6

                                  You make excellent points.

                                  1. IRB review has to happen before the study is started. For NIH, the grant application has to have the IRB approval - even before a single experiment is even funded to be done, let alone actually done.
                                  2. I can see the value of doing a test “in the field” so as to get the natural state of the system. In a lab setting where the participants know they are being tested, various things will happen to skew results. The volunteer reviewers might be systematically different from the actual population of reviewers, the volunteers may be much more alert during the experiment and so on.

                                  The issue with this study is that there was no serious thought given to what the ethical ramifications of this are.

                                  If the pen tested system has not asked to be pen tested then this is basically a criminal act. Otherwise all bank robbers could use the “I was just testing the security system” defense.

                                  1. 8

                                    The same requirement for prior IRB approval applies to NSF grants (which the authors seem to have received). Based on what they write in the paper and my interpretation of the circumstances, they self-certified as conducting non-human research at the time of submitting the grant and only asked their IRB for confirmation after they wrote the paper.

                                    Totally agree with the importance of “field experiment” work and that, sometimes, it is not possible to get prospective consent to participate in the research activities. However, the guidelines are clear on what activities fall within research activities that are exempt from prior consent. The only one that I think is applicable to this case is exception 3(ii):

                                    (ii) For the purpose of this provision, benign behavioral interventions are brief in duration, harmless, painless, not physically invasive, not likely to have a significant adverse lasting impact on the subjects, and the investigator has no reason to think the subjects will find the interventions offensive or embarrassing. Provided all such criteria are met, examples of such benign behavioral interventions would include having the subjects play an online game, having them solve puzzles under various noise conditions, or having them decide how to allocate a nominal amount of received cash between themselves and someone else.

                                    These usually cover “simple” psychology experiments involving mini games or economics games involving money.

                                    In the case of this kernel patching experiment, it is clear that it doesn’t meet this requirement, as participants have found the intervention offensive or embarrassing, to the point that they are banning the researchers’ institution from pushing patches to the kernel. Also, I am not sure that reviewing a patch is a “benign game”, as this is most likely the reviewers’ job. Plus, the patch review could have an adverse lasting impact on the subjects if they are asked to stop reviewing patches because they didn’t catch the security issue (e.g., being deemed incompetent).

                                    Moreover, there is this follow up stipulation:

                                    (iii) If the research involves deceiving the subjects regarding the nature or purposes of the research, this exemption is not applicable unless the subject authorizes the deception through a prospective agreement to participate in research in circumstances in which the subject is informed that he or she will be unaware of or misled regarding the nature or purposes of the research.

                                    As their patch submission process was deceptive in nature, as they outline in the paper, exemption 3(ii) cannot apply to this work unless they notify maintainers that they will be participating in a deceptive research study about kernel patching.

                                    That leaves the authors to either pursue full IRB review for their work (as a full IRB review can approve a deceptive research project if it deems it appropriate and the risk/benefit balance is in favor of the participants) or to self-certify as non-human subjects research and fix any problems later. They decided to go with the latter.

                                2. 35

                                  We believe that an effective and immediate action would be to update the code of conduct of OSS, such as adding a term like “by submitting the patch, I agree to not intend to introduce bugs.”

                                  I copied this from that paper. This is not research; anyone who writes a sentence like this with a straight face is a complete moron and is just mucking about. I hope all of this will be reported to their university.

                                  1. 18

                                    It’s not human research because we don’t collect personal information

                                    I yelled bullshit so loud at this sentence that it woke up the neighbors’ dog.

                                    1. 2

                                      Yeah, that came from the “clarifications”, which is garbage top to bottom. They should have apologized, accepted the consequences and left it at that. Here’s another thing they came up with in that PDF:

                                      Suggestions to improving the patching process

                                      In the paper, we provide our suggestions to improve the patching process.

                                      • OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”

                                      i.e. people should say they won’t do exactly what we did.

                                      They acted in bad faith, skirted the IRB through incompetence (let’s assume incompetence and not malice) and then acted surprised.

                                    2. 14

                                      Apparently they didn’t ask the IRB about the ethics of the research until the paper was already written: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.pdf

                                      Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. We apologize for the raised concerns. This is an important lesson we learned—Do not trust ourselves on determining human research; always refer to IRB whenever a study might be involving any human subjects in any form. We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.

                                      1. 14

                                        I don’t approve of researchers YOLOing IRB protocols, but I also want this research done. I’m sure many people here are cynical/realistic enough that the results of this study aren’t surprising. “Of course you can get malicious code in the kernel. What sweet summer child thought otherwise?” But the industry as a whole proceeds largely as if that’s not the case (or you could say that most actors have no ability to do anything about the problem). Heighten the contradictions!

                                        There are some scary things in that thread. It sounds as if some of the malicious patches reached stable, which suggests that the author mostly failed by not being conservative enough in what they sent. Or for instance:

                                        Right, my guess is that many maintainers failed in the trap when they saw respectful address @umn.edu together with commit message saying about “new static analyzer tool”.

                                        1. 17

                                          I agree, while this is totally unethical, it’s very important to know how good the review processes are. If one curious grad student at one university is trying it, you know every government intelligence department is trying it.

                                          1. 8

                                            I entirely agree that we need research on this topic. There’s better ways of doing it though. If there aren’t better ways of doing it, then it’s the researcher’s job to invent them.

                                          2. 7

                                            It sounds as if some of the malicious patches reached stable

                                            Some patches from this University reached stable, but it’s not clear to me that those patches also introduced (intentional) vulnerabilities; the paper explicitly mentions the steps that they’re taking to ensure those patches don’t reach stable (I omitted that part, but it’s just before the part I cited).

                                            All umn.edu patches are being reverted, but at this point it’s mostly a matter of “we don’t trust these patches and will need additional review” rather than “they introduced security vulnerabilities”. A number of patches already have replies from maintainers indicating they’re genuine and should not be reverted.

                                            1. 5

                                              Yes, whether actual security holes reached stable or not is not completely clear to me (or apparently to maintainers!). I got that impression from the thread, but it’s a little hard to say.

                                              Since the supposed mechanism for keeping them from reaching stable is conscious effort on the part of the researchers to mitigate them, I think the point may still stand.

                                              1. 1

                                                It’s also hard to figure out what the case is, since there is no clear answer as to what the commits were, and where they are.

                                            2. 4

                                              The Linux review process is so slow that it’s really common for downstream folks to grab under-review patches and run with them. It’s therefore incredibly irresponsible to put patches that you know introduce security vulnerabilities into this form. Saying ‘oh, well, we were going to tell people before they were deployed’ is not an excuse and I’d expect it to be a pretty clear-cut violation of the Computer Misuse Act here and equivalent local laws elsewhere. That’s ignoring the fact that they were running experiments on people without their consent.

                                              I’m pretty appalled that Oakland accepted the paper for publication. I’ve seen papers rejected from there before because they didn’t have appropriate ethics review oversight.

                                            1. 20

                                              Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.

                                              That’s anyone at the University of Minnesota then banned from making Linux contributions..

                                              1. 23

                                                It’s perhaps ineffective at stopping these authors, but an excellent message to the university. It’s not going to stop the authors changing email addresses, but it will end up in the media, at which point university administrators will be concerned about the bad press.

                                                The patches coming from a group at a university probably lent them some minimal initial credibility, it’s not uncommon for CS research to build new tools and apply them to the Linux kernel. It’s unfortunate that future submissions will have to be treated with heightened suspicion.

                                              2. 17

                                                The University has responded here.

                                                1. 10

                                                  “We are investigating ourselves”

                                                  I remember years ago, a French Linux community (linuxfr.org) received a cease and desist letter from a lawyer. Long story short, some company had posted a job ad in this community’s forum, and later hired lawyers to clean up the company’s reputation across the internet. The law firm found a reply to the job ad where one user criticised it, saying something along the lines of “you’re hiring a senior web dev, but your website looks like it was written by an intern”, and then went on to point out everything that was technically wrong with it, in the user’s opinion.

                                                  In the end, community members pointed out to the site’s administrator that the letter was illegal, since it asked for money as “damages” in the cease and desist letter, which in France can only be demanded at trial. Since this kind of practice is illegal, the community administrator reported the lawyer to the French national bar association, of which all licensed lawyers in France have to be members.

                                                  The answer was the same “thank you for the report, we will investigate ourselves”. And nothing came out of it…

                                                  What I’ve seen of professions investigating their own is that it leads the investigators to feel that they’re part of some kind of clan, and that they need to protect the pack as much as possible. Heavy wrongdoing will just end up with a slap on the wrist.

                                                  If I were part of such a “software engineer practice board”, I think I would hold my peers to a very high standard since I wouldn’t want them to ruin the reputation of my profession. But apparently I would be part of a tiny minority. Or maybe this is just a lie I tell myself, and would just join the pack…

                                                2. 8

                                                  Now, that’s pretty shitty.

                                                  HOWEVER.

                                                  I kinda sorta somewhat a little bit see the point in this kind of research/test. Maybe I’m wrong (although the fact that this even happened kinda suggests I’m not), but it seems like their premise was correct: the Linux kernel review process, and lots of similar opensource projects, ARE vulnerable to malicious agents introducing the so called hypocrite patches.

                                                  Now, the way those people tried to test it was absolutely unethical, I think there’s barely a discussion there. But could there be an ethical way of testing these processes? Maybe by seeking the consent of some maintainers, kinda like a pentest? Does anyone see any other kind of way?

                                                  1. 19

                                                    One way would be to get people to consent that at some point there may be a “test patch” (or “hypocrite patch”) like this. This could possibly be months or even a year later. I suspect that many maintainers will agree to this, and when done right I suspect many will even consider it helpful and useful; no one wants to accidentally approve bad patches and we can all learn from this. Takes a bit of time and effort, but it’s really not that hard, and won’t influence the study results too much.

                                                    In the end, it’s the difference between asking “can I borrow your bike for an hour?” vs. just taking it for an hour. I will almost certainly say yes if you just ask, but I will be quite cross with you if you would just take it.

                                                    1. 4

                                                      the Linux kernel review process, and lots of similar opensource projects, ARE vulnerable to malicious agents introducing the so called hypocrite patches.

                                                      All code is vulnerable to malicious agents, it’s a human process and humans make mistakes. They also generally assume good intent.

                                                      You have to assume corporate/state agents are embedded at all major companies, tech companies included.

                                                      1. 2

                                                        Proprietary code usually has access control, so good intent is only assumed of the people who have access to the code, who have been hired and, consequently, been through some vetting process.

                                                        Also, I feel like the world relies more on open source than on proprietary code? Like, there might be more proprietary code out there, but there are more things depending on single pieces of open-source code than on single pieces of proprietary code.

                                                      2. 1

                                                        Maybe I’m wrong

                                                        No, you’re not wrong at all. The fact that many people have accidentally submitted patches that introduce vulnerabilities implies that it’s feasible to do so intentionally - and the results of this experiment show that it’s not only feasible, but has happened, and without detection, too.

                                                        I don’t think that what the researchers did was unethical, though, at least in theory - they said that they would immediately notify the reviewers after their vulnerable patches were accepted, which, if done consistently, and the reviewers were paying attention, would mean that no vulnerability would actually make it into a stable tree.

                                                        Obviously, they failed at that - but that’s a failure of implementation, not ethics, just as a large company being breached and leaking user data is not a failure of ethics (they clearly don’t try to leak that data - it’s valuable to them) but of implementation.

                                                      3. 7

                                                        I am curious: is this illegal in some way? They are effectively introducing bugs or security holes on purpose into a ton of computer systems, including ones run by various government agencies, and they openly admit to doing it.

                                                        1. 7

                                                          Probably not illegal, but there is no evidence of ethics approval. Chances are they can’t get ethics approval for it.

                                                          I’ve spoken to a couple of academics about this case and they can’t quite believe someone is trying to pull this in the name of research.

                                                          Also, looking at the funding sources they cite, they seem pretty out of bounds on that front:

                                                          https://nsf.gov/awardsearch/showAward?AWD_ID=1931208 https://nsf.gov/awardsearch/showAward?AWD_ID=1815621

                                                          1. 5

                                                            I think it’s borderline. Pen-testing is legal, and it’s generally done “on the sly” but with management’s approval.

                                                            1. 18

                                                              I don’t think this is pen-testing, their code reached the stable trees supposedly. Once that happens they actually introduced bugs and security issues and potentially compromised various systems. This is not pen-testing anymore.

                                                              https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3NcPu=17qyVyEEtVMVR_g51Ma6Q@mail.gmail.com/

                                                              1. 1

                                                                Whether their code reached stable trees is irrelevant to whether or not it’s pen-testing - you can just as easily imagine a pen-tester accidentally leaving a back-door in a system after their contract has expired. Criminal negligence? Yes. Evidence of an unethical practice in the first place? Not in the slightest.

                                                                Similarly, the researchers said that, as soon as one of their patches was accepted, they would immediately notify the tree maintainer. If they did that, and the maintainer was paying attention, the patch would never make it to a stable tree.

                                                                Whether something is ethical or not is completely unrelated to its outcome.

                                                              2. 2

                                                                Pentesting comes with contracts and project plans signed by both the tester(s) and the company main stakeholder(s). So, no it’s not at all the same.

                                                              3. 4

                                                                Probably not; open source is “no warranty” all the way down.

                                                                1. 1

                                                                  Almost certainly… For instance the following seems appropriate.

                                                                  18 U.S. Code § 2154 - Production of defective war material, war premises, or war utilities

                                                                  Whoever, when the United States is at war, or in times of national emergency as declared by the President or by the Congress, […] with reason to believe that his act may injure, interfere with, or obstruct the United States or any associate nation in preparing for or carrying on the war or defense activities, willfully makes, constructs, or causes to be made or constructed in a defective manner, or attempts to make, construct, or cause to be made or constructed in a defective manner any war material, war premises or war utilities, or any tool, implement, machine, utensil, or receptacle used or employed in making, producing, manufacturing, or repairing any such war material, war premises or war utilities, shall be fined under this title or imprisoned not more than thirty years, or both

                                                                  Probably also various crimes relating to fraud…

                                                                  1. 8

                                                                    when the United States is at war,

                                                                    Except it’s not, so this is not appropriate at all.

                                                                    There’s no contract, no relationship, no agreement at all between an open-source contributor and the project they contribute to. At most there’s some sort of contributor agreement, which is usually there only for handling patents. When someone submits a patch they’re making absolutely no legal promises as to the quality of said patch, and this propagates all the way to whoever uses the software. The licenses don’t say THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND for nothing. Sure, the US army or whatever might use Linux, but they do it at their own peril.

                                                                    Now, they might get in trouble for being sketchy about the ethics approval and such, but at most that’s professional trouble, like losing their jobs.

                                                                    1. 3

                                                                      You missed the second half of the disjunction

                                                                      or in times of national emergency as declared by the President or by the Congress,

                                                                      This clause is true… many times over: https://en.m.wikipedia.org/wiki/List_of_national_emergencies_in_the_United_States

                                                                      Edit: The US army does not do it at their own peril against actively malicious activities. Civil contracts do not override statutory law, rather the other way around.

                                                                      1. 2

                                                                        Hmm, yeah, I stand corrected (partially, at least).

                                                                        However, the law you’re quoting says war stuff or stuff used to make war stuff. I’m not even sure software would qualify as stuff, as described in there. But yeah, I’m less sure they are not screwed now. Also, from the names, they might not be US citizens, which could make things worse.

                                                                        That said, I’m somewhat skeptical anyone would pursue this kind of legal action.

                                                                        1. 7

                                                                          The definition of what’s protected here is really broad. Is the Linux kernel used as a tool to help operate the telecommunications infrastructure for a company making uniforms for the military? If so, it’s protected.

                                                                          It’s almost like it was written for actual times of war, not this nonsense of a constant 30 national emergencies going on. Blame congress.

                                                                          I agree it’s unlikely to be prosecuted, unless there is significant damage attributable to the act of sabotage (someone deploys some ransomware to a hospital that exploits something they did, for instance), or someone in power decides that the act of sabotage’s main purpose was actually sabotage rather than getting papers published… If it is prosecuted, I also think it’s likely that they’ll find some more minor fraud-related crime to actually charge… I just found this one by googling “sabotage, US law”.

                                                                          1. 3

                                                                            There’s what the law says (or can be construed to say) and what a court will actually accept. I think a lawyer would have a hard time convincing a jury that a silly research paper was war sabotage.

                                                                            1. 3

                                                                              I wish I had your faith in the system. I think a lot of this stuff depends on whether prosecutors choose to make an example of the person. I can’t see that happening here; I very much doubt that the US federal government sees its own power threatened by this irresponsible research. However, if you look at the history, there are examples that I find similarly absurd which did lead to convictions. The differentiating factor seems to not be any genuine legal distinction, but simply whether prosecutors want to go all-out.

                                                                              Furthermore, the ones the public knows about are the ones that happened in regular courts. Decisions by FISA courts or by military tribunals do not receive the same scrutiny, and thus we must assume the injustice is even greater in those venues.

                                                                              1. 1

                                                                                I don’t deny that unjust laws are often enforced, despite jury trial, I just think that in this case it would be pretty unlikely for that to happen.

                                                                                I think the state/ruling class is more likely to abuse its power when it is threatened, embarrassed (journalists, whistleblowers (Wikileaks), minor hackers) or when there is the opportunity to harm an out-group or political opponent (e.g. non-dominant ethnic groups, leftist movements, sometimes extreme right-wing groups); and I don’t think any of those really apply here.

                                                                                1. 2

                                                                                  I apologize for the belated reply. I do agree with all of that.

                                                                              2. 1

                                                                                Feels like a case ripe for independent reinvention of jury nullification.

                                                                  2. 5

                                                                    Let’s add to the question “what is the quality of the code review process in Linux?” another one: “what is the quality of the ethical review process at universities?”.

                                                                    I think there should be a real world experiment to test it.

                                                                    1. 5

                                                                      Here is some kind of a follow up from those people: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.pdf

                                                                      If you have further concerns, please email us at kjlu@umn.edu

                                                                      Well guess what, I did. And I encourage everyone to do the same.

                                                                      1. 3

                                                                        That’s interesting: apparently they “apologized” half a year ago for what they’d done, and yet here we are in 2021, with them trying it again.

                                                                        1. 1

                                                                          Are you sure this is from half a year ago? The document isn’t dated, and the timestamp in the directory listing is 2021-04-21 (yesterday).

                                                                          (aside: always date your documents, people, no matter what they are or where they’re published).

                                                                          1. 3

                                                                            It’s at the top of the document @boreq linked:

                                                                            December 15, 2020

                                                                            1. 2

                                                                              Oh, it looks like the Firefox PDF viewer doesn’t render the metadata; but if I click “view page info” it does say “Modified: 15 December 2020, 22:59:54 GMT+8”.

                                                                      2. 7

                                                                        I’ll soon be deciding where to apply for grad school. U of M is no longer on my shortlist.

                                                                        1. 4

                                                                          This is simply not something that should be done. Ever.

                                                                          They claimed they only made 3 patches, but in reality they probably made a lot more, and as a result all of the contributions from this university were reverted - here’s Greg’s post about reverting almost 200 patches.

                                                                          3 patches, was it?

                                                                          1. 4

                                                                            They targeted everything from those university addresses, dating back as far as 2018 if I read it correctly. Also, just 15 minutes after the 190-patch batch, there were already nearly 10 kernel people saying that some individual patches were fine, so I guess:

                                                                            a) a lot will be unreverted, and b) not all of those 190 were done by /these people/, just by people from the same uni - so I suppose the “3” figure is very much too low, but the 190 is very much too high.

                                                                          2. 4

                                                                            Will they update their paper with an appendix about what happens when you get caught?

                                                                            1. 4

                                                                              It’s controversial work, but I do see the value in field-testing this. Requesting consent would probably have skewed the findings, as people would have been more on guard.

                                                                              Truly malicious actors would not be so kind as to give advance warning, and that is exactly what this research is trying to emulate.

                                                                              Having said that, I think the right thing would have been to inform the maintainers after the experiment was done, telling them exactly which patches introduced which vulnerabilities - and to do so before publishing the paper, so that the bugs could be fixed.

                                                                              1. 3

                                                                                If researchers didn’t have to obtain consent, they could do lots of interesting, unskewed research - and also a lot of damage. I’m not an expert, but I don’t know whether you have to call out exactly what you are trying to learn when getting consent – it seems to me they could have gone to the community and said: “we’re trying to learn a few things about the flow of patches into stable releases. We’d like permission to send some patches as part of this test, and we will disclose at a later date which patches were part of the study.”

                                                                                Would it skew results? Maybe. Or maybe they’d be denied permission altogether.

                                                                                I seem to recall a similar experiment performed on editors of academic journals, submitting garbage papers to see if they’d be published. As far as I know, they didn’t inform people ahead of time, and I don’t recall if they were called out for ethics violations.

                                                                              2. 4

                                                                                This research sounds to me like: “We sprayed graffiti on various buildings and waited to see if anyone noticed or tried to clean it up.” There’s nothing insightful about the fact that it’s easy to cause damage. Here’s another analogy: “We noticed your doors aren’t locked so we came inside and trashed the place so you’ll learn to lock your doors.”

                                                                                1. 1

                                                                                  There’s nothing insightful about the fact that it’s easy to cause damage.

                                                                                  The open-source community continually claims that its software is significantly higher-quality than proprietary software because the open review process makes it harder to get bugs (and malicious code) in - so it actually doesn’t agree with you that it’s “easy to cause damage”.

                                                                                  A better analogy would be spray-painting a building coated with a special ultra-slick coating designed to keep graffiti off, having the paint stick anyway (they got patches past the reviewers), and then removing it when you’re done (they claimed they would notify the kernel developers whenever a patch got past review - which they failed to do, but that changes the nature of the problem from “you were trying to do a bad thing” to “you were either lying or really bad at trying to do a good thing”).

                                                                                2. 1

                                                                                  Can anyone link the 3 patches mentioned in the paper that were submitted to LKML?

                                                                                  I had trouble finding them from the little information in the paper. The fact that the search on lore.kernel.org doesn’t seem to have a full-text index makes this harder.

                                                                                  1. 2

                                                                                    We don’t know where those patches are, and they were not submitted from umn.edu addresses. What follows is only speculation.

                                                                                    https://lore.kernel.org/linux-nfs/YIEqt8iAPVq8sG+t@sol.localdomain/

                                                                                    I think that (two of?) the accounts they used were James Bond jameslouisebond@gmail.com (https://lore.kernel.org/lkml/?q=jameslouisebond%40gmail.com) and George Acosta acostag.ubuntu@gmail.com (https://lore.kernel.org/lkml/?q=acostag.ubuntu%40gmail.com). Most of their patches match up very closely with commits they described in their paper:

                                                                                    Figure 9 = https://lore.kernel.org/lkml/20200809221453.10235-1-jameslouisebond@gmail.com/

                                                                                    Figure 10 = https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.ubuntu@gmail.com/

                                                                                    Figure 11 = https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.ubuntu@gmail.com/

                                                                                    1. 1

                                                                                      umn.edu

                                                                                      I guess they must be in there? These are commits from the main author. https://github.com/torvalds/linux/commits?author=QiushiWu
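
                                                                                      If you’d rather check a local tree than the GitHub web UI, a minimal sketch might look like this (assuming a clone of torvalds/linux in ./linux and git on your PATH; the @umn.edu pattern is only an example):

                                                                                      ```python
                                                                                      # Minimal sketch: list kernel commits whose author matches a pattern.
                                                                                      # Assumes a local clone of torvalds/linux and git available on PATH.
                                                                                      import subprocess

                                                                                      def commits_by_author(repo_path, pattern):
                                                                                          # `git log --author=<pattern>` matches against the commit author name/email.
                                                                                          result = subprocess.run(
                                                                                              ["git", "-C", repo_path, "log", "--author=" + pattern,
                                                                                               "--format=%h %ae %s"],
                                                                                              capture_output=True, text=True, check=True,
                                                                                          )
                                                                                          return result.stdout.splitlines()

                                                                                      if __name__ == "__main__":
                                                                                          # Illustrative pattern: commits authored from umn.edu addresses.
                                                                                          for line in commits_by_author("linux", "@umn.edu"):
                                                                                              print(line)
                                                                                      ```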

                                                                                    2. [Comment removed by author]

                                                                                      1. 15

                                                                                        What a funny message to receive over a BSD socket.

                                                                                        You know, there is more than one kind of elitism.

                                                                                        1. 13

                                                                                          Yes, the contributions to GNU, BSD, CMU Mach, LLVM, etc. from academe are minimal.

                                                                                          1. 6

                                                                                            Because a lot of good work comes out of academia? LLVM started life as an academic research project, a lot of Haskell features over the years have been built by academics, etc.

                                                                                            1. 2

                                                                                              Not to mention all the incredible work on finding and reducing compiler bugs, like Csmith and C-Reduce. And industry+academic collaborations like Project Everest.

                                                                                            2. 5

                                                                                              This is pretty divisive and short-sighted, especially seeing the huge amount of good universities do for OSS on the whole. :/

                                                                                              1. 4

                                                                                                Good thing Linus Torvalds wasn’t a student when he started Linux … oh, wait.

                                                                                                1. 3

                                                                                                  I fail to see how that’s relevant? Students are not academia.

                                                                                                  1. 4

                                                                                                    Who wrote and submitted the Linux kernel patches currently under discussion?

                                                                                                    Please never download anything from OSUOSL ever again. I don’t care if their mirrors are useful to you; they’re administered by students. I recall working there while I attended Oregon State. Consider also avoiding all of the other academic-maintained mirrors.

                                                                                                2. 3

                                                                                                  Because not all universities are like that.