The biggest issue, in my opinion, is the shared version resolution. The main project and all tools must share a single version of each dependency.
These are more generally called build-time (as opposed to run-time) dependencies. While tools are the most common use case, another example of a build-time dependency would be a plugin for the build system. The main reason we need to distinguish between the two types of dependencies is that we may need to build them with different configurations (for example, optimized for build-time and debug for run-time) or even for different targets in the case of cross-compilation.
I can see why all the tools would share a dependency, but forcing the run-time case to use the same version as well is strange. Simplicity of implementation seems like it would be the only reason. Strictly speaking, even tools (but not build system plugins) don’t need to share the same dependency versions, since they each run in a separate process.
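For reference, a sketch of what this looks like in a go.mod (module path and version are illustrative): the tool directive pulls the tool into the same module graph, so its library dependencies are resolved together with the main module’s.

```
module example.com/app

go 1.24

// The tool and the main module share one resolved version of
// golang.org/x/tools (and of any other common dependency);
// there is no separate per-tool lockfile.
tool golang.org/x/tools/cmd/stringer

require golang.org/x/tools v0.29.0
```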
No, these are not build-time dependencies. When you run go build, these tools do not run. They are development-time dependencies. The programmer simply has these tools at their disposal to run when they are developing code. It’s not a build-time thing.

Thanks for the clarification, I missed the part that the tools have to be run manually.
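Concretely, the workflow being described (Go 1.24’s tool dependencies; the tool chosen here is just an example) looks like this:

```
# Record the tool in go.mod (adds a "tool" directive plus require lines):
go get -tool golang.org/x/tools/cmd/stringer

# Nothing runs it automatically; the developer invokes it by hand:
go tool stringer -type=Color
```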
I feel conceptually development-time (I would call it development-only) is orthogonal to build/run-time: you can theoretically have a development-only run-time dependency (for example, some instrumentation support library which is only useful during development). So this Go tool thing would be more accurately called manually-run build-time dependency (which is necessarily development-only). It would still be built for host rather than target when cross-compiling, right?
Yes, they would be built for the host.
I think this misses that the tool run time (and number of runs) varies drastically from the build time and number of builds. E.g. generating code for a protocol, which might have happened once and then been rebuilt on hundreds of different commits. I’m not sure calling this build-time is all that useful.
What Go has is a special case of the following more general arrangement: a build-time dependency with its output stored in the repository (BTW, a practice which is frowned upon by many for valid reasons). In this setup the tool will only be executed (by the build system) when the input changes. Otherwise, the build will use the pre-generated output, even for a freshly cloned repository.
In the Go case, the “update output when input changes” part is handled manually, for security reasons I assume (or maybe for simplicity). But I still think it’s helpful to use the more general terminology.
While I appreciate it, I think your focus on terminology is making this more complicated than it has to be. go get -tool is essentially a package manager, nothing more than that. These tools might not be dependencies at all, or might never be run on any machine, development or otherwise.

Rather tangentially:
A little further and we have macros! Though a lisper, I miss macros in Go when I’d like to precompute things, load data into the binary etc. As is, it sadly demands a separate prebuild step, computing and placing such information in .go files, which can then be compiled.
Does the embed package not work for that? It doesn’t involve a separate prebuild step and exposes the data quite nicely through the standard fs package. I’ve used that to embed, e.g., templates and other external data.

Embed only takes in, e.g., file contents, but doesn’t let you preprocess (e.g. label, sort, etc.) during compilation.
So do I get this straight that a build now runs arbitrary code, when it didn’t before?
I don’t think there’s any real difference here.
Doing a go get or go build by itself won’t implicitly invoke go tool and execute arbitrary code, as far as I can tell.

It seems to me like “go tool goimports” now is hardly different from having a makefile with a target of “go run golang.org/x/tools/cmd/goimports”… it just version-controls the thing actually executed there explicitly in the go.mod alongside other Go dependencies.

Ah, somehow I misremembered or misunderstood some previous post about this, where it basically said “You don’t need to put, e.g., stringer into your makefile anymore”, which made it sound like go build might now do something unexpected.

Yes, but that differentiation is important. Like with every command, you should be sure what it does and doesn’t do. Calling make means I need to know what the Makefile does, vs. go build, where the expectation very well might be that it will only compile the code with the tools intended for it, and not execute any third-party code. The resulting binary can then be run in an isolated manner.

Depending on your definition of arbitrary.
“As intended by the developers of the repo” - no.
“Deterministic as of the last commit, when only the software changed, or the dependencies, not the tool” - yes
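The earlier makefile comparison can be made concrete; the tool and version below are illustrative, and only where the version is pinned differs.

```
# Old style: the version is pinned in the makefile (or floats implicitly):
imports:
	go run golang.org/x/tools/cmd/goimports@v0.24.0 -w .

# New style: the version comes from the tool directive in go.mod:
imports:
	go tool goimports -w .
```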
I mean arbitrary as in: if I run go build, will code run that isn’t the Go compiler (incl. its toolchain)?

So in other words, is running go build safe without knowing the project? Not talking about the produced binaries or anything else here. That’s a separate topic, of course.
I get your point, and I am not sure if I like this; I’m surely not defending it.

But I mostly find the difference arbitrary, as some projects run tests (a.k.a. code in the repo) automatically, and some don’t. I’ve not used Go in a while, so I couldn’t tell you if go build would just compile or also run tests.
Why defending? I merely asked a question and then clarified my definition of it.
I wouldn’t call differences between what commands do arbitrary. Especially when it’s about whether or not they might execute any kind of code or not.
I agree running tests as part of go build would indeed change the situation.