I feel like we’re just skipping over the elephant in the Makefile here. Why the hell is a Makefile running apt install in the first place?
Why not? It’s ensuring the dependencies are in place.
It’s rather unconventional for the build step to be doing system-wide changes. Having an additional recipe, e.g. make deps, which handled installing the dependencies and could optionally be run would be reasonable, but historically and conventionally make by itself handles the build step for the application, and just the build step.
In this particular case, a lot of the “build dependencies” aren’t even that: they’re cross-build dependencies, e.g. if you want to create an ARM binary on your amd64 machine. You probably don’t want to do that when you’re compiling it for just yourself though.
I’m not sure any of those dependencies are actually needed: it installs the SQLite3 headers, but the go-sqlite3 package already bundles these, so they shouldn’t be needed. The only thing you should need is a C compiler (which I assume is included in the build-essential package?)
None of this may be immediately obvious if you’re not very familiar with Go and/or compiling things in general; it’s adding a lot of needless friction.
That Makefile does a number of other weird things: it runs the clean target on the build, for example, which deletes far more than you might expect, such as the DB, config, log files, and a number of other things. That, much more than the apt-get usage, seems a big 😬 to me. It can really destroy people’s data.
Having an additional recipe, e.g. make deps, which handled installing the dependencies and could optionally be run would be reasonable
That’s what I meant: it’s a reasonable way of running apt within make. I didn’t mean as the default procedure when running make.
EDIT: I know that historically and conventionally make doesn’t do that, but, you know, it’s two lines, and it’s about getting dependencies required for building… I don’t think it’s that much of a flex.
Oh yeah, definitely. If it’s there, just not the default, that’s great and I’d totally +1 it. It’s handy! Just please no system-wide changes by running just make :(
It only does that on one particular flavor of Linux. Even if we ignore BSDs, not everyone is Debian-derived.
Is it typically the job of a Makefile to evaluate dependencies (maybe) and install them (maybe not typically)?
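For illustration, here is a minimal Makefile sketch of what’s being suggested: dependency installation lives in its own opt-in target, and plain make only builds. The target, binary, and package names are hypothetical, not taken from the project under discussion.

```makefile
# Hypothetical sketch: `make` only builds; `make deps` is an explicit opt-in
# that touches the system package manager. (Recipe lines are tab-indented.)
BINARY := monitor

.PHONY: all deps clean

all: $(BINARY)

$(BINARY):
	go build -o $(BINARY) .

# Run only if you actually want apt to install the build dependencies.
deps:
	sudo apt-get install -y build-essential

# Cleans the build artifact only, not databases, configs, or logs.
clean:
	rm -f $(BINARY)
```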
No, that is not a monoculture; this is you using a niche OS in the world of another “niche” OS.
As a sane person, I fetched the code from GitHub using fetch, extracted the tarball and ran make. Nothing happens. Let’s see the Makefile, shall we?
Just because you’re trying to get other people’s code running on your specific environment doesn’t mean you’ll be greeted with the most straightforward path possible. You should always read such code before executing it, or better yet the documentation, instead of navigating purely on assumptions about how everything is supposed to be. We don’t live in 2005 anymore.
Everyone’s time is limited, and if you want FreeBSD to host more applications, then take care of that yourself. There is no reason others should have to take care of it beyond being friendly, and I could even imagine some people would still disregard such patches because they don’t know the “true” way one supports applications on FreeBSD, since the devs have never used FreeBSD in their lives.
Docker is an amazing and valid way of distributing and building code; maybe instead of dumping on other developers, try supporting Docker on FreeBSD.
Having the same or a similar development environment as all other contributors will rule out a great number of issues, allowing you to actually develop features and improve stability. A good example is Steam’s Linux support: there has been a far greater extent of bugs on Linux hosts, even though fewer people were running the code, because everyone’s *nix has to be slightly different, which makes development much harder at times.
First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.
For what reason? /app is a convention in the world of container images; it avoids typing long paths and will usually also contain other code. What you’re asking for is usually /opt/... rather than /usr/local/bin/. Also, why bother with /usr/local/bin if all you’re doing is distributing a single, self-contained executable in a container image?
[…] please put yourself in my shoes. You’ve been looking around for a simple monitoring solution […]
No, you should put yourself into other people’s shoes. You come across as rude, disrespectful and narrow-minded with blog posts like this. Writing this way will only make people avoid FreeBSD even more. If you want to use something that supports fewer tools, but perhaps supports them better, then stick with the defaults as others do (Nagios?).
How about you actually open an issue before submitting a patch or, even worse, writing a rant? There hasn’t been a single request in Gatus about *BSDs, and the same goes for Statping until now.
For the record, I (the author) opened that issue in Statping and the other projects for adding BSD support and I’ve patched all of them in the last 10 days :)
Yes, I know, sorry that I haven’t made that clear in my comment. I appreciate your work, though I stand by my comment that such rants cast FreeBSD in a rather negative light. Your issue was much friendlier, though it gave less context.
I can understand people assuming apt exists on the system because:
Most of the time they’ll be correct
People who don’t have apt will probably know how to find the equivalent packages in their equally bad Linux package manager of choice.
I can understand, too, people using a SQLite implementation for Go that doesn’t depend on CGo, because CGo has its own issues.
Everything is hot garbage, whether you’re on Ubuntu or not. Don’t expect a gazillion scripts that install all the dependencies in every package manager imaginable; none of them is good enough to deserve that much attention, and it won’t happen. At least apt is popular.
That’s one reason Docker is so popular: it’s easier to work on top of an image with a specific package manager that will not change between machines. The distribution and its equally bad Linux package manager of choice don’t matter, as long as you are on Linux and have Docker. And Dockerfiles end up being a great resource for learning all the quirks required to make the code work.
And finally:
First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.
Never. Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?
Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?
No, it’s about writing software that nicely integrates with the rest of the chosen deployment method/system, not sticking random files all over the user’s machine.
In this example /app is being used inside the Docker image. It is most definitely not sticking random files all over the user’s machine.
This hits a slightly wider point with the article - half the things the author is complaining about aren’t actually to do with Docker, despite the title.
The ones that are part of the Docker build… don’t necessarily matter? Because their effects are limited to the Docker image, which you can simply delete or rebuild without affecting any other part of your system.
I understand the author’s frustration - I’ve been through trying to compile things on systems the software wasn’t tested against; it’s a pain, and it would be nice if everything was portable and cross-compatible. But I think it’s always been a reality of software maintenance that this is a difficult problem.
In fact, Docker could be seen as an attempt to solve the exact problem the author is complaining about, in a way which works for a reasonable number of people. It won’t work for everyone and there are reasons not to like it or use it, but I’d prefer to give people the benefit of the doubt instead of ranting.
Speaking of ranting, this comment’s getting long - but despite not really liking the tone of the article, credit to the author for raising issues and doing the productive work as well. That’s always appreciated.
OP here. Aww, thank you! Yes, as noted in the disclaimer at the top, I was very frustrated! Hopefully I ported it all; now I’m trying to clean up some code so I can make patches.
According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which, honestly, I still don’t understand. Can someone explain that to me?
As an open-source maintainer, I like only two types of issues: 1) here’s a bug, 2) here’s a feature request and how to implement it. But if someone made an issue saying “Your code is not running on the latest version of QNX”, I would rather see them say “Here’s a patch that makes the code run on QNX”.
Regardless, I tried an experiment and opened a “discussion issue” in one of the tools, hoping for the best.
According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which, honestly, I still don’t understand. Can someone explain that to me?
Receiving patches without prior discussion of the scope/goals can be frustrating, since basic communication easily avoids unnecessary extra work for both maintainers and contributors. Maybe a feature is already being worked on? Maybe they’ve had prior conversations on the topic that you couldn’t have seen? Maybe they simply don’t have the time to review things at the moment? Or maybe they won’t be able to maintain a certain contribution?
Also, for end-users, patches without a linked issue can be a source of frustration. Usually the MR contains the discussion of the code/implementation details, and the issue holds the conversation around the goals of the implementation.
Of course, that always depends; if you’re only contributing minor improvements/changes, a discussion is often not needed.
Or in other words, as posted on ycombinator news:
Sending patches directly is a non-collaborative approach to open source. Issues are made for discussion, and PRs for resolutions; as a matter of fact, some projects state this explicitly, in order not to waste maintainers’ time with unproductive PRs.
Exactly.
This is only a mediocre example, because with Go there should only be one binary (or a handful of them) - but yes, if you put your software in a container, I am very happy if everything is in /app, and if I want to have a look I don’t have to dissect /etc/, /usr/local and maybe /var. If the sole point of the container is packaging one app, I see no downside to ignoring the FHS and putting it all together. There’s a reason most people do that for custom-built software (as opposed to “this container just does apt-get install postgresql-server”, where I would expect the files to be where the distro puts them).
It’s like running Linux in the 90s when everyone was assumed to be running Solaris.
Summary: The author has a lot of expectations that the mainstream no longer cares about.
But it is true that because Unix in practice is a PITA to manage - especially the gnarly userspace, with its tendency to support every niche variant, even ones from 30 years ago - and since storage is so cheap, the mainstream industry turned to packaging each app together with its userspace.
Hardcore Unixers on BSD are not having a great time, and I can sympathize. The whole world is moving in a direction that makes them less relevant. Each Linux-specific extension embraced by the mainstream, like cgroups/containers, systemd, etc., is a barrier to the interoperability they relied on.
Okay, fine, each application now comes with its own copies of shared libraries … and the point of having shared libraries is … ? If the point of a shared library is to, you know, share it among processes, then what the hell is gained with Docker when you bundle all the shared libraries an application uses with it? There’s no sharing going on, which (in my opinion) defeats the actual purpose of shared libraries.
what the hell is gained with Docker when you bundle all the shared libraries an application uses with it?
Reliable and (somewhat) reproducible build, deployment and behavior at runtime.
defeats the actual purpose of shared libraries.
Yes. No one cares. Computers have gigabytes of RAM and terabytes of storage. And in industrial applications, people don’t even work on one system anyway. One part of the system gets 5 computers, the other 10, and so on.
Shared libraries were always and still are a PITA, and are simply not a worthwhile tradeoff anymore. People are running even the most trivial applications built on stuff like Electron, throwing away gigabytes of memory upfront. Why would anyone care about a couple of megabytes?
It depends a bit on how it’s implemented, but this can still be possible with a container-based workflow, in three ways:
Containers may include multiple individual programs. These can trivially share mappings to libraries that are common to them.
Containers are made up of layers. If two containers are built on top of the same base layer then any mapping of pages in this layer (e.g. libc) will be shared. If they use different base layers then they don’t get sharing, but this is a graceful fallback for when they have different libraries.
In a lot of server deployments, you’re running one container per VM, so you’re relying on page deduplication for sharing. In Linux, you can turn on KSM even if you’re running multiple containers in a single VM, so you can share identical pages even if they’re from different base layers.
The shared base layer bit also helps with the ‘reduced distribution size’ benefit of shared libraries.
In addition, you get deterministic builds and non-interference between containers. This is the same underlying idea as PC-BSD’s old PBI format, which would ship all dependencies bundled with an app and use hard links to deduplicate them. If two containers contain the same thing, there are multiple ways in which they can share on-disk and in-memory resources, but when they want different versions then everything gracefully falls back to the non-shared case.
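As a rough illustration of the shared-base-layer point (the base tag, binary, and paths below are hypothetical): any other image built FROM the same base reuses that layer on disk, and file-backed pages from it, such as libc, can be shared in memory across containers.

```dockerfile
# Hypothetical service image. A second service built FROM the same base
# stores the debian layer only once; its libraries can be mapped by both.
FROM debian:bookworm-slim
COPY service-a /usr/local/bin/service-a
ENTRYPOINT ["service-a"]
```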
First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.
It’s not putting binaries in /app. It’s a multistage docker build, and the first stage is putting the source in /app, and the second stage is taking that compiled binary and putting it in the default workdir of the scratch image, which I guess is /, which is still not ideal, I suppose.
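For reference, a minimal sketch of the multistage pattern being described; the Go image tag, binary name, and paths are illustrative rather than the project’s actual Dockerfile.

```dockerfile
# Stage 1: build. /app exists only inside this throwaway builder stage.
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
# Disable CGO so the resulting binary is static and can run in scratch.
RUN CGO_ENABLED=0 go build -o /monitor .

# Stage 2: the shipped image contains nothing but the compiled binary.
FROM scratch
COPY --from=builder /monitor /monitor
ENTRYPOINT ["/monitor"]
```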
I get that it’s annoying when things don’t work on your system out of the box, but… should we really expect that for non-mainstream systems? Should I expect things to just work on FreeBSD? Dragonfly? Debian/Hurd? Haiku? What’s the cutoff point for ranting that maintainers don’t test/support your system?
The situation is better these days, but a few years ago it was hard to even get any BSD running in CI. And then you need to actually learn how to properly manage it.
I’m glad the author will submit patches to make things work. As a library author I accepted patches like that, but it doesn’t change the fact I don’t run that system and won’t know if something breaks in the future. Most things will only be supported on BSDs if people who run them volunteer their time for maintenance.
I’m glad the author will submit patches to make things work. As a library author I accepted patches like that, but it doesn’t change the fact I don’t run that system and won’t know if something breaks in the future. Most things will only be supported on BSDs if people who run them volunteer their time for maintenance.
I guess the point of the rant is that this is being made nigh-impossible by relying on (strictly speaking unnecessary) components that aren’t portable and highly complex. By requiring Docker for the official build, it’s going to be extremely difficult to even get it to work on BSD in the first place, and once it’s made to work, because it’s not using the official build strategy, things are more likely to fall apart over time. A simple patch here or there would in olden days be applied and then require not that much effort to keep it working (or unbreak it when things do break) as development continues.
None of those things requires Docker. Statping also has an official statically compiled tarball release for Linux. In Statping’s case a single make target could be changed to deal with BSD differently. (Or even just made conditional on “NO_INSTALL_DEPS” or something…)
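A sketch of that suggestion, using the commenter’s hypothetical NO_INSTALL_DEPS variable; the target and package names are made up rather than taken from Statping’s actual Makefile.

```makefile
# Hypothetical: let users opt out of the apt-get step entirely,
# e.g. on FreeBSD where dependencies come from pkg instead.
# Usage: make dev-deps NO_INSTALL_DEPS=1
dev-deps:
ifndef NO_INSTALL_DEPS
	sudo apt-get install -y build-essential
else
	@echo "NO_INSTALL_DEPS set; assuming build dependencies are already installed"
endif
```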
I am a Linux user and I generally steer clear of software which uses Docker as its main means of distribution/installation.
I don’t quite get this rant. For starters, all the software he is trying to use falls, for me, in the category of bloated, borderline useless stuff that could be replaced with something built in an afternoon or so.
But judgements aside… Surely the authors of those projects made assumptions that fit their target audience well. It’s not like the author is their boss.
Clone the repos and tweak them to your needs if it’s worthy of your time? What is the point of ranting if the code is available?
Never give in. BSDs are the torch bearer of Unix on servers. Linux is for phones and tablets.
Don’t get me wrong, why can’t you run Docker on FreeBSD? :-)
Great article by the way, and you pointed out lots of bad practices in the repositories, but somehow I have the impression that the world is really focusing on 3 OSes and that’s it. In this sense, my first question applies: why can’t you run Docker on FreeBSD? Wouldn’t this be the best solution? (https://reviews.freebsd.org/D21570)
macOS is not Linux; it runs a VM that provides a Docker environment, and it just works.
Whine much?