1. 3
  1. 7

    I would consider, in order, pull requests, feature requests, bug reports, fetches, clones, and downloads as metrics of engagement. Stars and forks are largely hype-driven reactions.

    1. 1

      Thanks for the feedback.

      I agree with the metrics you would use; however, I don’t think that the forks-to-stars ratio would be easy to fake (via marketing). That’s the main thing I was trying to accomplish with this: easily weed out those repositories that have seen a lot of traffic but not a lot of engagement. While forks are hype-driven too, stars are much more so.

      Furthermore, I would be willing to bet that forks and pull requests are linearly correlated with one another most of the time.
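      For what it’s worth, that bet is checkable. As a minimal sketch (the per-repo fork and PR counts below are made up purely for illustration, not real GitHub data), a Pearson correlation would look like:

      ```python
      # Hypothetical sketch: do fork counts and pull-request counts rise
      # together across repositories? The numbers below are made-up
      # illustrative figures, not real GitHub data.
      from statistics import correlation

      forks = [10, 40, 80, 160, 320]   # forks per repo (hypothetical)
      prs   = [1, 5, 10, 20, 40]       # PRs opened per repo (hypothetical)

      # Pearson's r: a value close to +1 indicates a strong linear relationship.
      r = correlation(forks, prs)
      print(round(r, 2))
      ```

      With real data one would pull `forks_count` and PR counts per repository from the GitHub API and run the same calculation.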

      1. 1

        All of these would be easy to fake, right? GitHub accounts are free, and so if there were some benefit to accumulating stars and forks, you would expect to see people with bot farms selling stars (the same way people sell Twitter or Instagram followers).

        1. 2

          We crack down on this kind of thing pretty hard; our anti-spam team is very proactive.

          1. 1

            Oh, that’s interesting! I wouldn’t have guessed that there were enough incentives for bot farming on GitHub for that to really be a thing. If I might ask: what are people trying to do with their bot herds?

            But either way, I think my point still stands: if GitHub is good at spam/bot detection, then stars and forks shouldn’t be any more or less forgeable than any of the things that kghose mentioned, right?

            1. 2

              > what are people trying to do with their bot herds?

              Honestly, I don’t even know; I do systems engineering over on one side of the org, whereas our spam team very much feels like “over there somewhere”. But I see their posts on the internal blog about what they do and trends in spammy activity, and it’s like, whoa, that’s some sophisticated stuff and some very neat tools they’ve put together to help in combating it.

              > stars and forks shouldn’t be any more or less forgeable than any of the things that kghose mentioned, right?

              Right. The only difference would be that spammy PRs/issues are very noticeable and often reported by third parties who see something weird going on, whereas I imagine it’d be less likely for stars/forks.

        2. 1

          I would argue that forks are actually not often related to pull requests.

          I sometimes fork a repo just to “collect” it in a list and then, when I have time for it, look into it and maybe contribute.

      2. 8

        > Every budding Software Engineer longs for the day that one of their GitHub projects hits 100 stars. I am proud to say that I recently hit this coveted milestone

        Congratulations, and your points are valid, but I disagree with the “every budding Software Engineer” bit. I don’t write and release software longing for 100 stars. I write it for me, and offer it to people in case they might benefit from it, or find joy in it.

        1. 1

          Thanks for the feedback. I agree with what you’re saying.

          I didn’t mean for it to sound like I work on open source projects for the sole purpose of getting more stars (or, more generally, recognition). But in order to grow as a developer, it helps to have other developers use and critique my work. The more people that view and recognize my work (i.e. the more stars I get), the more developers will actually use my code. This is not necessarily true, of course, but it is the goal.

        2. 4

          I… don’t think engagement is a reasonable ranking metric. There are incredibly useful repositories out there that see little to no engagement, because there’s simply no need for any. They do their thing, and that’s it. They may not have many stars, or even many forks, let alone pull requests. Yet, they may be at the root of your dependency chain, making all of it possible (or at least easier).

          What I’m trying to say is that trying to reduce repositories to numbers, to determine some kind of “rank” that signals whether one is worth more than another, is ultimately a mistake. It’s never going to be fair. It’s never going to be accurate (far from it). Instead, both of these metrics should be hidden, to discourage such abuse.

          I’ll show an example: the Model01 Firmware has - as of this writing - 101 stars, 207 forks, 47 PRs opened total, and 21 issues. In comparison, the Kaleidoscope repo has 330 stars, 91 forks, 285 PRs, 199 issues in total.

          The first is meant to be a starting point upon which you build your own layout. Forking is encouraged, but most forks are for personal changes; there’s no reason any of those should be contributed back. A very small percentage of forks were made to contribute something back. Based on your algorithm, it would have a rank of 4.

          The second is the firmware itself, where local changes make much less sense, so sending them back upstream is encouraged. Your ranking system ranks it at 3.48.

          The repository that has seen much less engagement is ranked higher, pretty much defeating your algorithm.
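          As an aside, the exact formula behind those 4 and 3.48 figures isn’t quoted in this thread. A naive forks-to-stars ratio, used here purely as a stand-in assumption for the article’s algorithm, already shows the same inversion:

          ```python
          # Naive forks-to-stars "engagement" ratio. This is NOT the article's
          # actual formula (which isn't quoted in this thread); it's a minimal
          # stand-in to illustrate the comparison being made.

          def engagement_rank(stars: int, forks: int) -> float:
              """Higher values suggest more hands-on engagement (forks)
              relative to passive attention (stars)."""
              if stars == 0:
                  return float(forks)  # avoid division by zero
              return forks / stars

          # Figures quoted above, as of that comment's writing:
          model01_firmware = engagement_rank(stars=101, forks=207)
          kaleidoscope = engagement_rank(stars=330, forks=91)

          print(round(model01_firmware, 2), round(kaleidoscope, 2))  # 2.05 0.28
          ```

          Even this crude ratio scores the starter-layout repo far above the firmware repo, despite the latter seeing much more contribution activity.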

          However, from a user’s point of view, this makes sense, because the Model01-Firmware repo is much more useful for an end user. Not quite so from a developer’s point of view (which your ranking system seems to want to support). This just strengthens my point that ranking repos is a mistake: you need to take intent into account as well. And once you do, you can no longer reduce a repo’s worth to a single number. And that’s good. We shouldn’t rank repos this way anyway. It’s not healthy, and it does not foster a healthy ecosystem either.

          1. 1

            I’m not sure I agree with most of this, but I do thank you for your feedback.

            I think that much of your argument boils down to this statement: “No matter what, the Rank Algorithm won’t be perfect.” But this is inherently true of life. We don’t discourage GitHub’s Trending page because its algorithm isn’t perfect (it’s based almost entirely on stars and is thus very limited). The Rank Algorithm I proposed is no different. Whereas GitHub’s Trending page rewards the most “popular” repositories, the Rank Algorithm rewards the most “useful” repositories. Both efforts are destined to fail if we demand perfection.

            As far as your example goes, I would say that the Model01-Firmware repository is the one that tricks the system, whereas the other repository’s rank is more or less accurate (or as accurate as can be expected when using my admittedly juvenile algorithm).

            As far as the morals of it all and whether or not a ranking system fosters a “healthy ecosystem”, I think the argument will quickly devolve into the likes of whether or not our children should all get trophies. I’m going to side-step that one because I don’t see us resolving that century-old debate today. I will note, however, that it is nearly impossible to improve upon a skill without first having some way of measuring that skill. A ranking algorithm is just one more attempt at measuring progress, and I don’t see anything wrong with it.

            1. 2

              My main issue is not with ranking per se, but with attaching “worthiness” to rank. You can rank them whatever way you want, and that can be useful, but it’s best done with the ranking in the hands of the one who’s looking for something. If I’m looking for an active project, I’d use a different ranking than when I’m looking for something tried and true (and usually considerably less active). There’s no single ranking algorithm that would yield the desired results in both cases. Therefore, attaching worthiness to any ranking system is flawed, and the practice should be discouraged.

              > argument will quickly devolve into the likes of whether or not our children should all get trophies or not

              Except you usually give trophies at the end of a competition. Open source development is not a competition. Ideally, it would be cooperation.

              > I will note, however, that it is nearly impossible to improve upon a skill without first having some way of measuring that skill.

              Measuring skill is one thing. Reducing one’s worth to a set of skills is another. I have a problem with the latter.

              1. 2

                Okay. I understand where you’re coming from now. It seems like it was just an issue of semantics. I struggled to find the right words for this article; “worth” is one I probably should have left out.