While this is a handy way to build ad hoc packages, it’s not the “Debian way”. The Debian Packaging Tutorial has good background on the various options.
One of the things I appreciate about many “older” repository schemes is they can usually be hosted as just a directory full of files and a web server. DEB, RPM, pip… RPM at least can even do without the web server, supporting file:// schemes.
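For the RPM case, that really is all there is to it: point the package manager at a directory that `createrepo` has run over, no daemon involved. A sketch of the `.repo` file (path and repo name are made up):

```ini
# /etc/yum.repos.d/local.repo -- hypothetical local mirror, no web server needed
[local-mirror]
name=Local mirror
baseurl=file:///srv/repo/rpms/
enabled=1
gpgcheck=0
```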
I find it somewhat annoying when a repository format requires that you run some custom server specific to that format. I’ve dealt with this most frequently with Docker registries, but iirc NPM and Ruby gem repos also have this requirement.
My main criticism of the apt repository format is that it’s difficult to update atomically: to add or update a package, you have to update both Packages (or Sources, for a source package) and the Release file. If you try to do this in place, there’s going to be at least a split second where the checksums in the Release file don’t match the contents of the Packages file, and if a client hits the repo during that window it’s going to complain.
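On the publisher side, one workaround is to build each new version of the tree in a fresh directory and flip a symlink, so any single request sees either the old tree or the new one, never a half-written mix. A sketch (the snapshot layout is hypothetical, and real index generation is elided; `mktemp` is used here just so the sketch runs anywhere):

```shell
#!/bin/sh
set -eu
# Hypothetical layout: $REPO/snapshots/<timestamp> holds each generated tree,
# and $REPO/dists is a symlink to whichever one is live.
REPO="${REPO:-$(mktemp -d)}"
SNAP="$REPO/snapshots/$(date +%Y%m%d%H%M%S)"
mkdir -p "$SNAP"
# ... generate Packages, Sources, and a matching Release into "$SNAP" here ...
echo "Origin: example" > "$SNAP/Release"   # stand-in for real index generation
# Swap in one rename: mv -T (GNU coreutils) replaces the old symlink itself
# rather than moving the new link inside the directory it points to.
ln -s "$SNAP" "$REPO/dists.new"
mv -T "$REPO/dists.new" "$REPO/dists"
```

Even then, a client that fetches Release and then Packages *across* the swap can still see a mismatch; as I understand it, that cross-request race is what apt’s by-hash support is meant to close (the Release file advertises `Acquire-By-Hash`, clients fetch indices under `by-hash/SHA256/<digest>` paths, and the publisher keeps old digests around for a while).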
The checksums in the Release file also make it difficult to generate repositories on the fly: before you can serve the Release file, you must generate all the Packages and Sources files and checksum them, and whatever you serve once the client requests the package indices dang well better match what you promised in the Release file. That pretty much makes generating a repo on the fly a non-starter; to serve the first request you need to do an expensive iteration over your entire database of packages, then basically cache the results and do no further thinking. I’ve worked on various internal build systems that tried to introduce some determinism into apt-get, and you’re more or less stuck with using snapshot.debian.org, or rolling your own moral equivalent for anything that isn’t on there already.
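To make the constraint concrete, here’s roughly what the tail of a Release file looks like (values illustrative; that digest and size happen to be those of an empty Packages file) — every index is pinned by size and hash, so the indices have to exist in final form before Release can even be written:

```
Suite: stable
Date: Sat, 01 Jan 2024 00:00:00 UTC
SHA256:
 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855        0 main/binary-amd64/Packages
```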
(And yeah, you can avoid some of that by using “flat” repos, but they’re semi-deprecated, it’s a crapshoot whether tools other than apt-get that purport to grok repos understand them at all, and they give you a lot less to hang apt pinning rules off of than the full-blown repos…)
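For anyone who hasn’t seen one: a flat repo is just indices and .debs in a single directory, selected by ending the sources.list entry with a relative path instead of a suite name (URL here is a made-up example):

```
# Trailing "./" = flat repo: Packages/Release sit right next to the .debs,
# with no dists/<suite>/<component> hierarchy.
deb http://example.com/debs ./
```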
Totally, npm was an utter disaster when I was looking to run my own registry. The CouchDB requirement tells me it was a prototype that was retroactively made the standard rather than something that was well designed ahead of time. npm really does reflect its ecosystem: immature and overly complex.
I run my own RubyGems repo, served by nothing but Apache. Bundler did add further requirements, but I get away with pregenerating the responses for the “dynamic” calls.