Dylan Beattie suggested something like this for NuGet in his “Open Source, Open Mind: The cost of free software” talk, where he presents it as an easy way to make sure personal users can still get stuff for free, but corporations can’t unduly profit from open-source work.
I think the hope there was that the folks collecting the money would give some of it to the developers creating the software. I suspect this situation is about docker trying to get paid for their hosting services.
Remember docker previously tried to “sunset” free image hosting for FOSS projects, I think because of the costs. After outcry, that decision was reverted, but I think there is still a process in place for FOSS projects to publish.
Now they will be charging the users of the images instead; at least that doesn’t put the burden on those producing the images.
The previous sunset attempt came just when I needed to publish docker images. So I set up a registry myself instead and have been worry-free since. I also now use fully qualified paths for docker hub images, starting with docker.io/, to not give any special treatment to images on docker hub.
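For example (alpine and the tag are just placeholders here; any image works the same way):
$ docker pull docker.io/library/alpine:3.19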
For anyone whose employer runs things in AWS, it’s probably worth just using AWS for this too.
And they have a public mirror of basically everything Docker Hub has (at least, I’ve never found anything missing), and their free/unauthenticated tier quotas have always been a lot more generous. Generally you can just replace FROM some_image with FROM public.ecr.aws/docker/library/some_image and it works.
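For instance, to pull an official-library image through the mirror (alpine again as an arbitrary stand-in):
$ docker pull public.ecr.aws/docker/library/alpine:3.19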
> I also now use fully qualified paths for docker hub images, starting with docker.io/, to not give any special treatment to images on docker hub.
I use Podman for personal projects since I don’t have to justify my weird hippie choices to anyone and that actually forces this on you, which is kind of a nice way to build the habit.
Not quite forced: you can set unqualified-search-registries in the config, but it comes with a big warning about how falling back on a search path (rather than always using fully-qualified names) is a crazy risk.
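For reference, the knob lives in /etc/containers/registries.conf and looks something like this (docker.io here is just an example entry):
unqualified-search-registries = ["docker.io"]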
Yes, the distribution was different but the mechanism was largely the same. I’m not saying this is a good solution, or that I support it, but perhaps it’s a step in the right direction.
We switched from Dockerhub to GitHub after they started demanding open source projects pay them $420 a year. The switch was annoying, but the experience is comparable and it doesn’t have these limits or unexpected fees.
… yet. I wonder how long it will take for GitHub to start limiting downloads. Fortunately it’s easy to migrate images.
Unlike the Docker company, Microsoft has lots of ways to make money from these services that aren’t charging for bandwidth. It’s a much safer bet.
It actually has limits; they’re just not very visible, as they apply to certain images/accounts (I’m guessing due to high pull counts).
Kids these days, complaining they can’t reinstall an OS more than 10 times per hour.
In my days, if I wanted a fresh Linux install, I had to go and buy a CD-ROM and spend two days installing it.
Kind reminder that you should not depend on external resources and should run your own registry (for any serious use case).
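A minimal sketch with the open-source Distribution registry (no TLS or auth here, which a serious deployment wants; myapp is a made-up image name):
$ docker run -d -p 5000:5000 --name registry registry:2   # port 5000 is the image default
$ docker tag myapp:1.0 localhost:5000/myapp:1.0
$ docker push localhost:5000/myapp:1.0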
> Unauthenticated users
> 10 per IPv4 address or IPv6 /64 subnet
This seems to be geared more towards user tracking than towards reducing their server load to me. 10 pulls per single IPv4 is ridiculous, especially when someone’s behind CGNAT (and CGNAT with no IPv6 for consumers is quite popular, at least in Poland), as that allowance will be used up in no time.
UP; sorry, my internet is flaky and somehow I managed to post the same comment three times
UPC / Play, eh? I think they’re the biggest CGNAT culprit here.
EDIT2: if you use podman, consider using Google’s mirror https://github.com/containers/podman/blob/1e7f810f714240f5d68f92baa1ab39ee53a249f5/test/registries.conf#L17
EDIT: nope, I was wrong
> A pull for a normal image makes one pull for a single manifest.
I think this is a per-layer limit, so one more complicated image could drain the limit by itself. Very nice of DH; makes me dislike MSFT and IBM just a little less.
This seems like it is going to break a lot of CI pipelines.
TBH, they should probably be looking at caching these images regardless. Not only does it put excessive strain on the likes of DH, but it’s brittle. Some kind of caching proxy is just good sense.
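For what it’s worth, the same open-source registry image can run as a pull-through cache; a rough sketch (again no TLS/auth, and the port is arbitrary):
$ docker run -d -p 5000:5000 -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2   # caches Docker Hub pulls
CI machines then pull via that host (or set it as a mirror in the daemon config) instead of hitting Docker Hub directly.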
Layers are cached, but normally CI would want to pull the actually-latest :latest.
I wouldn’t recommend running anything on :latest; picking a specific version seems like the far better practice and would also cache better.
Wonder if this is going to lead to yet another concentration on GitHub, as Actions has (AFAIK) pre-negotiated limit exemptions with DockerHub.
Ironically, one of the projects docker built back in the day, linuxkit, uses dockerhub as its package repository (with each linux package as a separate repository)… I feel like this change is liable to break any serious user of that tool if they don’t switch registries, or pay for increased limits.
$ linuxkit build ./examples/docker-for-mac.yml
Process init image: docker.io/linuxkit/vpnkit-expose-port:b30e8456ac128b2ac360329898368b309ea6e477
Process init image: docker.io/linuxkit/init:3c0baa0abe9b513538b1feee36f01667161f17dd
Process init image: docker.io/linuxkit/ca-certificates:7b32a26ca9c275d3ef32b11fe2a83dbd2aee2fdb
...
23 images total for that toy example.
If you’re willing to bring Google into the loop, you can mitigate this by setting up the GCR Dockerhub mirror.
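For the Docker daemon that’s roughly one setting in /etc/docker/daemon.json (the Linux default path; adjust elsewhere), plus a daemon restart:
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}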
Docs have been updated to the correct date of April 1st. Might be worth updating the title as well.
But I guess I cannot edit it now.