1. 6

Abstract: Biases against women in the workplace have been documented in a variety of studies. This paper presents the largest study to date on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women’s contributions tend to be accepted more often than men’s. However, when a woman’s gender is identifiable, they are rejected more often. Our results suggest that although women on GitHub may be more competent overall, bias against them exists nonetheless.

  1.  

  2. 3

    Looking at the data, it seems possible that we are looking at noise and not necessarily observing a bias. With insiders, there is hardly any difference between men and women; in fact, insider women seem to be favored when their gender is known. With outsiders, the tables turn. But even there, the difference between male and female acceptance is about 1-2%, which is not enough to suggest a prevalent bias, IMO.
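    Whether a 1-2% gap is real or just noise depends entirely on the sample sizes behind it, which isn't stated here. A minimal sketch (hypothetical counts, not the paper's actual data) of a pooled two-proportion z-test:

```python
from math import sqrt

def two_proportion_z(acc_a, n_a, acc_b, n_b):
    # Pooled two-proportion z-test: is the gap between two acceptance
    # rates larger than sampling noise alone would produce?
    p_a, p_b = acc_a / n_a, acc_b / n_b
    pooled = (acc_a + acc_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# The same 2-point gap (62% vs. 64%) is within noise at 1,000 PRs per
# group (|z| < 1.96) but far outside it at 100,000 PRs per group.
z_small = two_proportion_z(620, 1000, 640, 1000)
z_large = two_proportion_z(62000, 100000, 64000, 100000)
```

    With samples as large as this study's, even a small gap can be statistically significant; the argument is really about whether it's practically significant.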

    Also, insider men seem to be favored more than insider women when their gender is not known, which seems to disprove the idea that women are producing better code. It seems that code quality is actually about equal.

    EDIT: Just realized the paper covers my last point.

    1. 2

      Yeah, the paper breaks it down in a lot of different and interesting ways. The headlining outsider-gendered bias is one of the trickier graphs (fig 5). Just looking at the error bars, it’s close (though, as the article mentions, the gap is bigger for women). I wonder why gender-identifiable accounts do worse for both genders? And I wonder what the total sample size of each of the four bars is, for example, what percentage of women vs. men chose gender-neutral accounts.

    2. [Comment from banned user removed]

      1. 3

        That’s an interesting idea. Relevant stuff from the paper:

        • They found that women made fewer pull requests that referenced existing issues (potentially “less needed”), by a small amount. Maybe some of those were pronoun changes?
        • They found that women made larger changes in PRs than men (both in absolute terms and in lines-added-minus-lines-removed), by a surprisingly (to me) big margin, which does not sound like pronoun changes. The distribution on this would be interesting, but they don’t include it.

        They also attempted to eliminate bots with huge numbers of commits (like those that crawl and auto-PR pronoun changes) from skewing the results, according to the Threats section at the end.

        I would highly recommend reading the whole paper – I found it very accessible and very interesting!

        edit: they also tried controlling for changes with a majority number of lines-changed being in Turing-complete languages, which did not affect the outcomes.

        1. [Comment from banned user removed]

          1. 2

            They detected changes to Turing-complete languages by filename extension, so this would include things like adding more docstrings. Editing a pronoun to be gender-neutral would add one line and remove one line, so it would be excluded in their second analysis, where they took the difference.
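            A toy sketch of the two checks described above (the extension list and example data are made up, not the paper’s actual tooling):

```python
# Hypothetical extension list standing in for "Turing-complete language".
CODE_EXTS = {".py", ".c", ".cpp", ".js", ".java", ".go", ".rb"}

def mostly_code(filenames):
    # Extension-based detection: a docstring-only edit to a .py file
    # still counts as "code", as noted above.
    code = sum(1 for f in filenames if any(f.endswith(e) for e in CODE_EXTS))
    return code > len(filenames) / 2

def net_lines(added, removed):
    # A pronoun swap is typically +1/-1, so its net change is 0 and it
    # contributes nothing to a lines-added-minus-lines-removed comparison.
    return added - removed
```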

            Against? To me it looks like the bias is for them.

            When the accounts were gender-neutral (i.e., the reviewers likely did not know that the submitter was a woman), women’s PRs had a higher acceptance rate than men’s. Assuming the reviewers did not know the gender of the submitter, this is not a gendered bias for women.

            When the accounts were not gender-neutral (i.e., automated and some manual analysis could identify the gender of the user), women’s PRs had a lower acceptance rate than men’s. Assuming the reviewers could identify the submitter’s gender, one explanation for the lower acceptance rate is a gendered bias against women.

            As I wrote in my other comment, this aspect of the paper is interesting, but I wish they explored it further and broke it down in different ways, like comparing the rate of choosing gender-neutral accounts for men vs. women. There is a ton more very interesting other analysis in the paper though :)

            edits: typos

      2. 2

        If we’re going to get into it, might as well link the original study.

        EDIT:

        They propose three different explanations:

        • Women are actually functioning in the face of a bias, so the ones who remain are disproportionately strong.
        • Women are not taking as many risks, and so are more likely to have their stuff accepted.
        • Women are more competent than men.

        The authors discuss each in turn, and whether or not one thinks they present a good argument, it’s at least nice to see an attempt.

        1. 1

          It’s possible that the type of female developer who has a gendered profile is also worse at programming, but the only way to really know would be to have a group of men use GitHub as women and study the results.

          1. 4

            In the study, men identifiable as men also have lower acceptance than neutral. The effect is smaller, but there’s really only one conclusion to draw:

            Coders with GitHub profile pictures are less competent.

            1. 1

              Whoops, is there a way to combine these stories? I swear I searched for the title but didn’t see anything come up. ><

            2. [Comment removed by author]

              1. 1

                Yeah, as I said, the article title is misleading and, I think, a little clickbaity.