This is happening way too often.
And it could easily be avoided (in this case, at least) with 2FA enabled on the account.
And it’s going to accelerate, in all package registries. Every time one of these is discovered and publicized, it will give ideas to new attackers.
I wonder if good old GPG signatures, verified by the registry on each release, couldn’t help blunt these account hijackings a bit?
So, suppose you require each account to associate at least one key. But then someone gets access to your account and either attaches another key, or replaces your key (because you need to have some sort of mechanism for replacing/rotating keys). And now you’re back to square one, except with a worse false sense of security because the malicious package will pass signature verification – it’s signed by a key associated with the correct account.
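The flaw can be made concrete with a toy sketch (hypothetical names, not any real registry’s API): if verification only checks “signed by *a* key currently associated with the uploading account”, it adds nothing once the account itself is compromised.

```python
# Toy model of account-associated signing keys. The registry can only
# check set membership; it has no way to know *who* registered a key.
from dataclasses import dataclass, field

@dataclass
class Account:
    keys: set = field(default_factory=set)  # fingerprints registered on the account

def verify(account: Account, sig_fingerprint: str) -> bool:
    # "Valid" just means: some key on the account produced the signature.
    return sig_fingerprint in account.keys

maintainer = Account(keys={"KEY-ALICE"})
assert verify(maintainer, "KEY-ALICE")       # legitimate release passes

# Attacker guesses the password and registers their own key:
maintainer.keys.add("KEY-MALLORY")
assert verify(maintainer, "KEY-MALLORY")     # malicious release *also* passes
```

Both releases verify identically, which is exactly the “worse false sense of security” problem: the signature check is downstream of the very account control that was lost.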
So the short answer is that “just use PGP signatures” is not a solution. The long answer digs into the fact that package signing in, say, Linux distros only works because those distros have a relatively small number of people authorized to produce the packages and that group can more easily be thoroughly vetted; language-specific package indexes like RubyGems, npm, PyPI, etc. are wide open to anyone who wants to publish packages, and intensely vetting the trustworthiness of every package author in those systems is not feasible and likely never will be.
What happens when a dev loses their key?
It would seem to me that it’s still rather harder to lose a key than a password. Of course, maybe I’m wrong… As far as I know, the Java Maven Central Nexus has this requirement.
It might be harder, but it happens. Consider not just “losing control of the key” but the simpler “literally losing the key”, like your hard drive dies and it didn’t get backed up due to neglect, misconfiguration, bad timing, or a billion other things.
How should that case be handled? If the user account can change the key that gems need to be signed by, we’re back to square one – once Attacker has guessed the password they just need to change the key before uploading the malicious version.
If the signing key can’t be changed, the gem dies?
as far as I know, the java maven central nexus has this requirement
Maven Central requires that the package be signed with a public key visible on some public keyserver, yes. But nothing stops an attacker from generating a key for some plausible-looking name and uploading that public key to the MIT PGP Public Key Server.
You need to verify that the key that signed the package actually belongs to the person you expect it to. That’s the hard part, and most packages in the Maven repo don’t have keys cross-signed by various parties, so you can’t build any real level of trust that the signing key belongs to who you expect. The signing is in most cases merely theatre: you’re not trusting much more than that the package was submitted by some unknown party who uploaded a key somewhere, and who may or may not be who you expect.
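That gap between “cryptographically valid” and “made by who I think” can be sketched as a toy web-of-trust check (all key names here are hypothetical; this is not real PGP, just the trust logic): a key only becomes trustworthy once enough keys you *already* trust have cross-signed it.

```python
# Toy web-of-trust sketch. A fresh key uploaded to a keyserver can
# produce perfectly valid signatures; what it lacks is vouching.

cross_signatures = {
    # key -> keys that have cross-signed it
    "KEY-REAL-DEV": {"KEY-COWORKER", "KEY-CONFERENCE-KSP"},
    "KEY-IMPOSTOR": set(),   # freshly generated, then pushed to a keyserver
}

my_trusted_keys = {"KEY-COWORKER", "KEY-CONFERENCE-KSP"}

def key_is_trusted(key: str, quorum: int = 1) -> bool:
    # Count how many keys I already trust have vouched for this one.
    vouchers = cross_signatures.get(key, set()) & my_trusted_keys
    return len(vouchers) >= quorum

assert key_is_trusted("KEY-REAL-DEV")        # vouched for: trusted
assert not key_is_trusted("KEY-IMPOSTOR")    # valid signatures, zero trust
```

Both keys would pass a bare `gpg --verify`; only the vouched-for one can be tied back to a person. Raising `quorum` is the “multiple people bless the key” idea mentioned below.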
It then makes a lot of noise and generally sucks, until the new key has reached some designated level of trust.
Or someone else with commit privileges (ideally multiple people) blesses the key.
I feel like the issue with GPG-signing packages is less that we don’t know how to do it, and more that it’s just a pain in the ass and makes other best practices more difficult.
Many linux distributions seem to have solved this problem.
One entity (the distro) attaches source URLs to signatures. All the software makers need to do is maintain the URL (which can be anything from an ftp host to a GitHub release bucket).
This is nice for everyone… If you want to trade off some safety, you can go get the software from the URL yourself (oversimplified — you will want a recipe describing dependencies as well).
Most linux distributions have ‘core’ packages, which are signed and verified by trusted individuals; ‘community’ packages, where the requirements are less strict; and ‘user’ packages (no requirements, no signatures, or bring your own).
It’s common knowledge in these communities that installing ‘user’ packages is at your own risk. But, even so, the security of most user repositories is better than your npms or pips.
Note some details: the registry has to pull in packages that it wants to maintain (developers aren’t generally given keys). The maintainers and developers are free to work out whatever arrangement they want if they want to speed up the process a bit.
The people holding keys at the registry are also free to come up with any intense security schemes they like, including key rotation, revocation, in-person verification, whatever, without inconveniencing the developers.
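The structural difference from the per-account model can be sketched in a few lines (key names hypothetical): users verify packages against a small, fixed keyring of *registry maintainer* keys, so compromising an upstream developer’s account changes nothing about what users will accept.

```python
# Minimal sketch of the distro trust model. The keyring ships with the
# OS; rotation and revocation happen on the registry side without
# inconveniencing upstream developers at all.

REGISTRY_KEYRING = {"KEY-DISTRO-MAINTAINER-1", "KEY-DISTRO-MAINTAINER-2"}

def user_accepts(package_sig_key: str) -> bool:
    # Only packages re-signed by a registry maintainer are installable.
    return package_sig_key in REGISTRY_KEYRING

assert user_accepts("KEY-DISTRO-MAINTAINER-1")

# A package signed by any other key -- even the upstream dev's own --
# is rejected until a maintainer vets and re-signs it:
assert not user_accepts("KEY-UPSTREAM-DEV")
assert not user_accepts("KEY-ATTACKER")
```

The trusted set is small and vettable, which is exactly what the open language registries can’t replicate: anyone can publish, so the “keyring” would have to contain everyone.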
This works so well that most linux distributions will package libraries for languages which have their own package managers… nix and guix are particularly awesome in this respect, and are so comprehensive that ‘a whole distribution running the software described’ is just another build target. (Since you can capture dependencies across languages, like a database dependency, which is something that none of the language-specific tools can do.)
We need infrastructure to establish a better correlation between the source and the build output.
This won’t prevent the GitHub accounts from being hacked, but at least the change will be more visible.
It is very sad that this whole story started with a so-called “security researcher”, who invented a method any script kiddie can use, wrote 75 pages of a bachelor’s thesis about making his method as efficient as possible, and bragged about his discovery so that every criminal would know about it. https://incolumitas.com/2016/06/08/typosquatting-package-managers/ (Yes, between the publication of his bachelor’s thesis on March 17th, 2016 and his blog post on June 8th, 2016, nobody noticed that he had created a close-to-deadly weapon.) There are so many ways to analyse existing criminal behavior that I have no idea why one would think it necessary to invent new criminal behavior.
The idea here is to get package repositories (and the larger community) to take the problem seriously by demonstrating how easy it is. I guess it didn’t work in this case, but you can look back at many examples of things that did help, despite also having malicious potential. For instance, Firesheep was a Firefox extension that allowed you to log in to the Facebook account of anyone on the same Wifi as you who was browsing Facebook; it got a huge amount of press, and I would argue it was the final push that made Facebook (and then other websites) finally pick up SSL for their entire site, not just the login page. Upside-down-ternet was a wifi router mod that would screw with the web traffic of any unauthorized users (e.g. turning graphics on websites upside-down), and made individuals understand the danger of using random wifi access points they stumbled across.
In this case, I guess it didn’t work so well; package repositories have been very slow to address these issues, and almost entirely reactive rather than proactive. 🤷