Threads for aarroyoc

  1. 2

    Has anyone here tried Plastic SCM?

    1. 3

      Yes, it was built in a startup in the city I live in. It’s probably the best VCS out there if you’re a GUI person or work with big binary files; however, the setup was a bit complicated at the time (it is both a distributed and a centralized VCS, but to accomplish that, you have to run several components at the same time). They’re now focused on videogame use cases and never wanted to replicate the famous free and social aspects of GitHub or GitLab on their hosted services.

      1. 4

        Perforce has been relegated to the gamedev niche too. Surprised it’s big enough for multiple players, and that LFS hasn’t eaten it yet.

        1. 2

          LFS (and Mercurial’s equivalent) are painful compared to Perforce, Plastic, or even Subversion: the gap between what’s in the repository, and what the repository means, is a problem unique to that style of “fix”. Narrow and shallow cloning combined get pretty close to matching the quality of Plastic/Perforce/Subversion/etc., but it’s still half-baked when present in the FOSS DVCSes.

    1. 3

      BSD make is great for small projects which don’t have a lot of files and don’t have any compile-time options. For larger projects in which you want to enable/disable options at compilation time, you might have to use a more complete build system.

      Here’s the problem: every large project was once a small project. The FreeBSD build system, which is built on top of bmake, is an absolute nightmare to use. It is slow, impossible to modify, and when it breaks, trying to work out why is completely bewildering.

      For small projects, a CMake build system is typically 4-5 lines of CMake, so bmake isn’t really a win here, but CMake can grow a lot bigger before it becomes an unmaintainable mess and it’s improving all of the time. Oh, and it can also generate the compile_commands.json that your LSP implementation (clangd or whatever) uses to do syntax highlighting. I have never managed to make this work with bmake (@MaskRay published a script to do it but it never worked for me).
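
      For concreteness, the 4-5 lines of CMake mentioned here might look something like this (project and file names are made up for illustration):

      ```cmake
      # Minimal build for a small project; also emit compile_commands.json
      # for clangd and other LSP implementations.
      cmake_minimum_required(VERSION 3.15)
      project(hello LANGUAGES CXX)
      set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
      add_executable(hello main.cc)
      ```

      Configuring with cmake -B build then leaves compile_commands.json in the build directory.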

      1. 17

        The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

        Some of the “modern” cmake stuff is slightly less horrible. Maybe if the cmake community had moved on to using targets, things would’ve been a little better. But most of the time, you’re still stuck with ${FOO_INCLUDE_DIRS} and ${FOO_LIBRARIES}. And the absolutely terrible syntax and stringly typed nature won’t ever change.
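
        To make the contrast concrete (Foo here is a stand-in package name, not a real library): the variable style splices strings into your flags, while target-based usage attaches everything to an imported target:

        ```cmake
        find_package(Foo REQUIRED)

        # Variable-based style: stringly typed and easy to get wrong.
        include_directories(${FOO_INCLUDE_DIRS})
        target_link_libraries(my_app ${FOO_LIBRARIES})

        # Target-based style: the imported target carries its include
        # directories, compile options, and transitive dependencies with it.
        target_link_libraries(my_app PRIVATE Foo::Foo)
        ```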

        Give me literally any build system – including an ad-hoc shell script – over cmake.

        1. 6

          Agreed. Personally, I also detest meson/ninja in the same way. The only things that I can tolerate writing AND using are BSD makefiles, POSIX makefiles, and plan9’s mkfiles.

          1. 2

            You are going to have a very fun time dealing with portability. Shared libraries, anyone?

            1. 2

              Not really a problem, pkg-config tells your makefile what cflags and ldflags/ldlibs to add.
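
              A minimal sketch of that pattern, with zlib as a stand-in dependency:

              ```make
              # '!=' runs a shell command at parse time; it works in both
              # bmake and GNU make 4.0+.
              PKG_CFLAGS != pkg-config --cflags zlib
              PKG_LIBS   != pkg-config --libs zlib

              CFLAGS += ${PKG_CFLAGS}

              prog: prog.o
              	${CC} ${LDFLAGS} -o prog prog.o ${PKG_LIBS}
              ```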

              1. 2

                Using it is less the problem; creating shared libraries is much harder. Every linker is weird and special, even with ccld. As someone dealing with AIX in a day job…

          2. 5

            The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

            Yes. The last time I seriously used cmake for cross compiles (trying to build third-party non-Android code to integrate into an Android app), I ended up knee-deep in strace to figure out which of the hundreds of thousands of lines of cmake scripts were being included from the system cmake directory, and then using gdb on a debug build of cmake to figure out where it was constructing the incorrect strings, because I had given up on being able to understand the cmake scripts themselves and why they were double-concatenating the path prefix.

            Using make for the cross compile was merely quite unpleasant.

            Can we improve on make? Absolutely. But cmake is not that improvement.

            1. 2

              What were you trying to build? I have cross-compiled hundreds of CMake things and I don’t think I’ve ever needed to do anything other than give it a cross-compile toolchain file on the command line. Oh, and that was cross-compiling for an experimental CPU, so no off-the-shelf support from anything, yet CMake only required me to write a 10-line text file and pass it on the command line.
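
              A toolchain file of that size is mostly variable assignments; a sketch for a hypothetical target (names and paths invented) might be:

              ```cmake
              # Passed as: cmake -DCMAKE_TOOLCHAIN_FILE=mycpu.cmake ...
              set(CMAKE_SYSTEM_NAME Generic)
              set(CMAKE_SYSTEM_PROCESSOR mycpu)
              set(CMAKE_C_COMPILER /opt/mycpu/bin/clang)
              set(CMAKE_CXX_COMPILER /opt/mycpu/bin/clang++)
              set(CMAKE_FIND_ROOT_PATH /opt/mycpu/sysroot)
              # Look for headers and libraries only in the target sysroot,
              # but for programs only on the host.
              set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
              set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
              set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
              ```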

              1. 2

                This was in 2019-ish, so I don’t remember which of the ported packages it was. It may have been some differential equation packages, opencv, or some other packages. There was some odd interaction between their cmake files and the android toolchain’s cmake helpers that led to duplicated build directory prefixes like:

                 /home/ori/android/ndk//home/ori/android/ndk/$filepath
                

                which was nearly impossible to debug. The fix was easy once I found the mis-expanded variable, but tracking it down was insanely painful. The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                1. 2

                  The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                  The sad path with bmake is far sadder. I spent half a day trying to convince a bmake-based build system to compile the output from yacc as C++ instead of C before giving up. There was some magic somewhere but I have no idea where and a non-trivial bmake build system spans dozens of include files with syntax that looks like line noise. I’ll take add_target_option over ${M:asdfasdfgkjnerihna} any day.

                  1. 3

                    You’re describing the happy path.

                    Cmake ships with just over 112,000 lines of modules, and it seems any non-trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that. And if anything goes wrong in there, you need to get in and debug that code. In my experience, it often does.

                    With make, it’s usually easier to debug because there just isn’t as much crap pulled in. And even when there is, I can hack around it with a specific, ad-hoc target. With cmake, if something goes wrong deep inside it, I expect to spend a week getting it to work. And because I only touch cmake if I have to, I usually don’t have the choice of giving up – I just have to deal with it.

                    I’m very happy that, these last couple of years, I’ve spent much of my paid time writing Go and not dealing with other people’s broken build systems.

                    1. 1

                      Cmake ships with just over 112,000 lines of modules, and it seems any non-trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that.

                      The core bmake files are over 10KLoC, which doesn’t include the built-in rules, and do far less than the CMake standard library (which includes cross compilation, finding dependencies using various tools, and so on). They are not namespaced, because bmake does not have any notion of scopes for variables, and so any one of them may define some variable that another consumes, with no isolation between them.

                      With make, it’s usually easier to debug because there just isn’t as much crap pulled in.

                      That is not my experience with any large project that I’ve worked on with a bmake or GNU make build system. They build some half-arsed analogue of a load of the CMake modules and, because there’s no notion of variable scope in these systems, everything depends on some variable that is set somewhere in a file that’s included at three levels of indirection by the thing that includes the Makefile for the component that you’re currently looking at. Everything is spooky action at a distance. You can’t find the thing that’s setting the variable, because it’s constructing the variable name by applying some complex pattern to the string. When I do find it, instead of functions with human-readable names, I discover that it’s a line like _LDADD_FROM_DPADD= ${DPADD:R:T:C;^lib(.*)$;-l\1;g} (actual line from a bmake project, far from the worst I’ve seen, just the first one that jumped out opening a random .mk file), which is far less readable than anything I’ve ever read in any non-Perl language.
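
                      For anyone who doesn’t read bmake modifiers, that expression can be unpacked roughly as follows (annotation mine, not from the project in question):

                      ```make
                      #   ${DPADD}   list of dependency paths, e.g. /usr/lib/libfoo.a
                      #   :R         strip the file suffix       -> /usr/lib/libfoo
                      #   :T         take the basename ("tail")  -> libfoo
                      #   :C;^lib(.*)$;-l\1;g
                      #              regex-substitute lib<name> -> -l<name>, i.e. -lfoo
                      _LDADD_FROM_DPADD= ${DPADD:R:T:C;^lib(.*)$;-l\1;g}
                      ```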

                      In contrast, modern CMake has properties on targets and the core modules work with this kind of abstraction. There are a few places where some global variables still apply, but these are easy to find with grep. Everything else is scoped. If a target is doing something wrong, then I need to look at how that target is constructed. It may be as a result of some included modules, but finding the relevant part is usually easy.

                      The largest project that I’ve worked on with a CMake build system is LLVM, which has about 7KLoC of custom CMake modules. It’s not wonderful, but it’s far easier to modify the build system than I’ve found for make-based projects a tenth the size. The total time that I’ve wasted on CMake hacking for it over the last 15 years is less than a day. The time I’ve wasted failing to get Make-based (GNU Make or bmake) projects to do what I want is weeks over the same period.

            2. 3

              Modern CMake is a lot better, and it’s being aggressively pushed because things like vcpkg require modern CMake, or require you to wrap your crufty CMake in something with proper exported targets for importing external dependencies.

              I’ve worked on projects with large CMake infrastructure, large GNU make infrastructure, and large bmake infrastructure. I have endured vastly less suffering as a result of the CMake infrastructure than the other two. I have spent entire days trying to change things in make-based build systems and given up, whereas with CMake I’ve just complained about how ugly the macro language is.

              1. 2

                Would you be interested in trying build2? I am willing to do some hand-holding (e.g., answer “How do I ..?” questions, etc.) if that helps.

                To give a few points of comparison based on topics brought up in other comments:

                1. The simple executable buildfile would be a one-liner like this:

                  exe{my-prog}: c{src1} cxx{src2}
                  

                  With the libzstd dependency:

                  import libs = libzstd%lib{zstd}
                  
                  exe{my-prog}: c{src1} cxx{src2} $libs
                  
                2. Here is a buildfile from a library (Linux Kconfig configuration system) that uses lex/yacc: https://github.com/build2-packaging/kconfig/blob/master/liblkc/liblkc/buildfile

                3. We have a separate section in the manual on the available build debugging mechanisms: https://build2.org/build2/doc/build2-build-system-manual.xhtml#intro-diag-debug

                4. We have a collection of HOWTOs that may be of interest: https://github.com/build2/HOWTO/#readme

                1. 3

                  I like the idea of build2. I was hoping for a long time that Jon Anderson would finish Fabrique, which had some very nice properties (merging of objects for inheriting flags, a file type in the language that was distinct from a string and could be mapped to a path or a file descriptor on invocation).

                  exe{my-prog}: c{src1} cxx{src2}

                  Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                  The problem I have now is ecosystem lock-in. 90% of the things that I want to depend on provide a CMake exported project. I can use vcpkg to grab thousands of libraries to statically link against and everything just works. From this example:

                  With the libzstd dependency:

                  import libs = libzstd%lib{zstd}

                  How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                  CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new, and for the new thing to be worth replacing CMake, it would need to be an order of magnitude better for my projects and able to consume CMake exported targets from other projects (not pkg-config, which can’t even provide flags for compiler invocations for Objective-C, let alone handle any of the difficult configuration cases). If it can consume CMake exported targets, then my incentive for libraries is to use CMake, because then I can export a target that both it and CMake can consume.

                  1. 2

                    Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                    No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey). I believe the terse syntax is beneficial for common constructs (and what I’ve shown is definitely one of the most common) because it doesn’t get in the way when trying to understand more complex buildfiles. At least this has been my experience.

                    How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                    That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically) or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                    If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                    But that’s a pretty vanilla case that most tools can handle these days. The more interesting one is lex/yacc from the buildfile I linked. It uses the same import mechanism to find the tools:

                    import! [metadata] yacc = byacc%exe{byacc}
                    import! [metadata] flex = reflex%exe{reflex}
                    

                    And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                    CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new and for the new thing to be worth replacing CMake it would need to be an order of magnitude better for my projects.

                    I am clearly biased but I think it’s actually not that difficult to be an order of magnitude better than CMake, it’s just really difficult to see if all you’ve experienced is CMake (and maybe some make-based projects).

                    Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language. On the other you have the lowest common denominator problem of the underlying build systems. Even arguably the best of them (ninja) is quite a basic tool. The result is that every new functionality, say support for a new source code generator, has to be implemented in this dreaded macro language with an eye on the underlying build tools. In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2): https://github.com/build2/build2-dynamic-target-group-demo/

                    1. 3

                      No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey)

                      That’s a great distinction to make. Terse syntax is fine for operations that I will read every time I look in the file, but it’s awful for things that I’ll see once every few months. I don’t know enough about build2 to comment on where it falls on this spectrum.

                      For me, the litmus test of a build system is one that is very hard to apply to new ones: If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts? CMake is not wonderful here, but generally the functions and macros are easy to find and to read once I’ve found them. bmake is awful because its line-noise syntax is impossible to search for (how do you find what the M modifier in an expression does in the documentation? “M” as a search string gives a lot of false positives!).

                      That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically) or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                      My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                      If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                      That looks a lot more promising, especially being able to use the system-installed version. Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 doesn’t have to do this translation for everything that they package?

                      And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                      This is a very nice property, though one that I already get from vcpkg + CMake.

                      Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language.

                      The language is pretty awful, but the underlying object model doesn’t seem so bad and is probably something that could be exposed to another language with some refactoring (that’s the first thing that I’d want to do if I seriously spent time trying to improve CMake).

                      In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2):

                      That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

                      I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge so that I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                      1. 2

                        If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts?

                        In build2, there are two ways to do custom things: you can write ad hoc pattern rules in a shell-like language (similar to make pattern rules, but portable and higher-level) and everything else (more elaborate rules, functions, configuration, etc) is written in C++(14). Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                        My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                        pkg-config has its issues, I agree, plus most build systems don’t (or can’t) use it correctly. For example, you wouldn’t try to cram both debug and release builds into a single library binary (e.g., .a or .so; well, unless you are Apple, perhaps) so why try to cram both debug and release (or static/shared for that matter) options into the same .pc file?

                        Plus, besides the built-in values (Cflags, etc.), pkg-config allows for free-form variables, so you can extend the format as you see fit. For example, in build2 we use the bin.whole variable to signal that the library should be linked in the “whole archive” mode (which we then translate into the appropriate linker options). Similarly, we’ve used pkg-config variables to convey C++20 modules information and it also panned out quite well. And we now convey custom C/C++ library metadata this way.
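
                        As a rough sketch, a .pc file mixing the built-in fields with free-form variables might look like this (libfoo and everything other than bin.whole is invented for illustration):

                        ```
                        prefix=/usr/local
                        libdir=${prefix}/lib
                        includedir=${prefix}/include

                        # Free-form variable; a consumer that knows about it can read
                        # it with: pkg-config --variable=bin.whole libfoo
                        bin.whole=true

                        Name: libfoo
                        Description: Example library
                        Version: 1.2.3
                        Libs: -L${libdir} -lfoo
                        Cflags: -I${includedir}
                        ```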

                        So the question is: do we subsume all the existing/simple cases and continue with pkg-config, extending its format for more advanced cases, or do we invent a completely new format (which is what WG21’s SG15 is currently trying to do)?

                        Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 doesn’t have to do this translation for everything that they package?

                        Not yet, but we have had ideas along these lines, though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions (e.g., libzstd-dev for Debian/Ubuntu, libzstd-devel for Fedora/etc.) so that the build2 package manager can query the installed package’s version (e.g., to make sure the version constraints are satisfied) or invoke the system package manager to install the system package. If we had such a mapping, it would also allow us to achieve what you are describing.

                        This is a very nice property, though one that I already get from vcpkg + CMake.

                        Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during its build?

                        If so, that’s quite impressive. For us, the “create a suitable host configuration” part turned into a particularly deep rabbit hole. What is “suitable”? In our case we’ve decided to use the same compiler/options as what was used to build build2. But what if the PATH environment variable has changed and now clang++ resolves to something else? So we had to invent a notion of hermetic build configurations where we save all the environment variables that affect every tool involved in the build (like CPATH and friends). One nice offshoot of this work is that now in non-hermetic build configurations (which are the default), we detect changes to the environment variables besides everything else (sources, options, compiler versions, etc.).

                        I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge so that I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                        Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar). There are use-cases where it’s impossible to handle the generated headers fully dynamically (for example, because the compiler may pick up a wrong/outdated header from another search path) but this is also taken care of. See this article for the gory details: https://github.com/build2/HOWTO/blob/master/entries/handle-auto-generated-headers.md

                        That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

                        As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) are blasphemy in build2. So there might be some culture shock.

                        BTW, in your earlier post you’ve mentioned Fabrique by Jon Anderson but I can’t seem to find any traces of it. Do you have any links?

                        1. 2

                          Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                          This makes me a bit nervous because it seems very easy for non-portable things to creep in with this. To give a concrete example, if my build environment is a cloud service then I may not have a local filesystem and anything using the standard library for file I/O will be annoying to port. Similarly, if I want to use something like Capsicum to sandbox my build then I need to ensure that descriptors for files read by these modules are provided externally.

                          It looks as if the abstractions there are fairly clean, but I wonder if there’s any way of linting this. It would be quite nice if this could use WASI as the host interface (even if compiling to native code) so that you had something that at least can be made to run anywhere.

                          pkg-config has its issues, I agree,

                          My bias against pkg-config originates from trying to use it with Objective-C. I gave up trying to add `--objc-flags` and `--objcxx-flags` options because the structure of the code made this kind of extension too hard. Objective-C is built with the same compiler as C/C++ and takes mostly the same options, yet it wasn’t possible to support. This left me with very little confidence that the system could adapt to any changes in requirements from C/C++, and no hope of it providing information for any other language. This was about 15 years ago, so it may have improved since then.

                          Not yet, but we have had ideas along these lines, though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions

                          That feels back to front, because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from and so is in a position to maintain this mapping (we could fairly trivially automate it from the FreeBSD ports system for any package that we build from a cppget source, for example). In contrast, the author of a package doesn’t always know where it ends up being packaged. I’ve looked on repology at some of my code and discovered that I haven’t even heard of a load of the distributions that package it, so expecting me to maintain a list of those (and keep it up to date with version information) sounds incredibly hard and likely to lead to a two-tier system (implicit in your use of the phrase ‘commonly used distributions’) where building on Ubuntu and Fedora is easy while building on less-popular targets is harder.

                          Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during its build?

                          Yes, but there’s a catch: vcpkg runs its builds as part of the configure stage, not as part of the build stage. This means that running cmake may take several minutes, while then running ninja completes in a second or two. If you modify vcpkg.json then this will force CMake to re-run, and that will cause the packages to re-build. vcpkg packages have a notion of host tools, which are built with the triplet for your host configuration and are then exposed for the rest of the build. There are some known issues with it, so they might be starting down the same rabbit hole that you ended up in.

                          Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar).

                          It’s the updating that I’m particularly interested in. Imagine that I have a make-headers build step with sub-targets that generate foo.h and bar.h, and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (a big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h. I’ve managed to produce some hacks that do this in CMake but they’re ugly and fragile. I’d love to have some explicit support for over-approximate dependencies that are fixed during the first build. bmake’s meta mode does this by using a kernel module to watch the files that the compiler process reads and dynamically updating the build rules to depend on those. This has some nice side effects, such as causing a complete rebuild if you upgrade your compiler or a shared library that the compiler depends on.
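                          A minimal sketch of this “over-approximate first, refine later” idea (in Python; the function names and the in-memory dependency database are hypothetical, and a real build system would persist this state and feed it from compiler-emitted depfiles or file-access tracing):

```python
# Sketch of over-approximate dependencies refined after the first build.
# Names here are invented for illustration; nothing maps directly to
# bmake or build2 internals.

def dependencies(source, dep_db, all_generated_headers):
    """Return the prerequisites to use for `source` on this build."""
    if source not in dep_db:
        # First build: big hammer -- depend on every generated header.
        return set(all_generated_headers)
    # Later builds: only what the compiler actually read last time,
    # e.g. as reported by a -MD/-MMD depfile or by file-access tracing.
    return dep_db[source]

def record_observed_deps(source, observed_headers, dep_db):
    """After compiling, narrow the stored edge to what was really used."""
    dep_db[source] = set(observed_headers)

dep_db = {}
generated = ["foo.h", "bar.h"]

first = dependencies("prog.cc", dep_db, generated)   # over-approximate
record_observed_deps("prog.cc", ["foo.h"], dep_db)   # compiler read only foo.h
second = dependencies("prog.cc", dep_db, generated)  # exact from now on
```

The interesting part is that the refinement happens as a side effect of the first compile, so the over-approximation only ever costs you one build.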

                          Negative dependencies are a separate (and more painful problem).

                          As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) are blasphemy in build2. So there might be some culture shock.

                          All of my builds are done from a separate ZFS dataset that has sync turned off, so out-of-tree builds are normal for me, but I’ve not had any problems with that in CMake. One of the projects that I’m currently working on looks quite a lot like a cross-compile SDK and so build2 might be a good fit (we provide some build tools and components and want consumers to pick up our build system components). I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                          1. 1

                            This makes me a bit nervous because it seems very easy for non-portable things to creep in with this.

                            These are interesting points that admittedly we haven’t thought much about yet. But there are plans to support distributed compilation and caching which, I am sure, will force us to think this through.

                            One thing that I have been thinking about lately is how much logic should we allow one to put in a rule (since, being written in C++, there is not much that cannot be done). In other words, should rules be purely glue between the build system and the tools that do the actual work (e.g., generate some source code) or should we allow the rules to do the work themselves without any tools? To give a concrete example, it would be trivial in build2 to implement a rule that provides the xxd functionality without any external tools.

                            Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

                            That feels back to front because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from and so is in a position to maintain this mapping […]

                            From this point of view, yes. But consider also this scenario: whoever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream, say, still uses CMake) and so has no interest in maintaining this mapping.

                            Perhaps this should just be a separate registry where any party (build2 package author, distribution package author, or an unrelated third party) can contribute the mapping. This will work fairly well for archive-based package repositories where we can easily merge this information into the repository metadata. But not so well for git-based where things are decentralized.

                            Imagine that I have a make-headers build step with sub-targets that generate foo.h and bar.h, and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (a big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h.

                            You don’t need such “big hammer” aggregate steps in build2 (unless you must, for example, because the tool can only produce all the headers at once). Here is a concrete example:

                            hxx{*}: extension = h
                            
                            cxx.poptions += "-I$out_base" "-I$src_base"
                            
                            gen = foo.h bar.h
                            
                            ./: exe{prog1}: cxx{prog1.cc} hxx{$gen}
                            ./: exe{prog2}: cxx{prog2.cc} hxx{$gen}
                            
                            hxx{foo.h}:
                            {{
                              echo '#define FOO 1' >$path($>)
                            }}
                            
                            hxx{bar.h}:
                            {{
                              echo '#define BAR 1' >$path($>)
                            }}
                            

                            Where prog1.cc looks like this (in prog2.cc substitute foo with bar):

                            #include "foo.h"
                            
                            int main ()
                            {
                              return FOO;
                            }
                            

                            While this might look a bit impure (why does exe{prog1} depend on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its sources can “pick” what they need.

                            I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                            Sounds good. If this is public (or I can be granted access), I could even help.

                            1. 1

                              Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

                              That’s increasingly a problem. There was a post here a few months back where someone had built clang as an AWS Lambda. I expect a lot of tools in the future will end up becoming things that can be deployed on FaaS platforms, and then you really want the build system to understand how to translate between two namespaces (for example, to provide a compiler with a JSON dictionary of name-to-hash mappings for a content-addressable filesystem).
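                              A sketch of that translation step (hypothetical; a real system would also upload the blobs to the content-addressable store and handle output files coming back the same way):

```python
# Hypothetical sketch: turning a local file namespace into the kind of
# name -> content-hash dictionary a remote (FaaS-hosted) compiler could
# resolve against a content-addressable store.
import hashlib
import json

def content_address(blobs):
    """Map each logical file name to the SHA-256 hex digest of its bytes."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in blobs.items()}

blobs = {
    "foo.h": b"#define FOO 1\n",
    "prog.cc": b'#include "foo.h"\nint main() { return FOO; }\n',
}

# The manifest is what the build system would hand to the remote tool
# in place of a filesystem view.
manifest = json.dumps(content_address(blobs), sort_keys=True)
```

The nice property is the same one as with descriptor passing: the tool can only see what is in the dictionary, so the dependency edges are accurate by construction.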

                              I forgot to provide you with a link to Fabrique last time. I worked a bit on the design but never had time to do much implementation, and Jon got distracted by other projects. We wanted to be able to run tools in Capsicum sandboxes (WASI picked up the Capsicum model, so the same requirements would apply to a WebAssembly/WASI FaaS service): the environment is responsible for opening files and providing descriptors into the tool’s world. This also has a nice property for a build system: the dependencies are, by construction, accurate. Anything for which you didn’t pass in a file descriptor cannot be accessed by the task (though you can pass in directory descriptors for include directories as a coarse over-approximation).

                              From this point of view, yes. But consider also this scenario: whomever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream is, say, still uses CMake) and so has no interest in maintaining this mapping.

                              I don’t think that person has to care; it’s the person packaging something that uses libFoo who needs to care, and that creates an incentive for anyone packaging C/C++ libraries to keep the mapping up to date. I’d imagine that each repo would maintain this mapping. That’s really the only place I can imagine it living without getting stale.

                              I’m more familiar with the FreeBSD packaging setup than Debian, so there may be some key differences. FreeBSD builds a new package set from the top of the package tree every few days. There’s a short lag (typically 1-3 days) between pushing a version bump to a port and users seeing the package version. Some users stay on the quarterly branch, which is updated less frequently. If I create a port for libFoo v1.0, then it will appear in the latest package set in a couple of days and, if I time it right, in the quarterly one soon after. Upstream libFoo notices and updates their map to say ‘FreeBSD has version 1.0 and it’s called libfoo’. Now I update the port to v1.1. Instantly, the upstream mapping is wrong for anyone who is building package sets themselves. A couple of days later, it’s wrong for anyone installing packages from the latest branch. A few weeks later, it’s wrong for anyone on the quarterly branch. There is no point at which the libFoo repo can hold a map that is correct for everyone unless they have three entries for FreeBSD, and even then they need to actively watch the status of builders to get it right.

                              In contrast, if I add BUILD2_PACKAGE_NAME= and BUILD2_VERSION= lines to my port (the second of which can default to the port version, so only needs setting in a few corner cases), then it’s fairly easy to add some generic infrastructure to the ports system that builds a complete map for every single packaged library when you build a package set. This will then always be 100% up to date, because anyone changing a package will implicitly update it. I presume that the Debian package builders could do something similar with something in the source package manifest.
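                              A sketch of the map such a package-set build could emit (in Python; BUILD2_PACKAGE_NAME and BUILD2_VERSION are invented names for the per-port variables described above, and nothing like this exists in the ports tree today):

```python
# Hypothetical: build a distro-name/version map for build2 packages as a
# side effect of building the package set, so it can never go stale.

def build2_map(ports):
    mapping = {}
    for port in ports:
        upstream = port.get("BUILD2_PACKAGE_NAME")
        if upstream is None:
            continue  # this port has no build2 counterpart
        mapping[upstream] = {
            "system_name": port["name"],
            # Default to the port's own version; override in corner
            # cases where the packaged version differs from upstream.
            "version": port.get("BUILD2_VERSION", port["version"]),
        }
    return mapping

ports = [
    {"name": "libfoo", "version": "1.1", "BUILD2_PACKAGE_NAME": "libFoo"},
    {"name": "libbar", "version": "2.0_3",   # port version carries a patch level
     "BUILD2_PACKAGE_NAME": "libBar", "BUILD2_VERSION": "2.0"},
    {"name": "some-tool", "version": "0.9"}, # not a build2 package
]
mapping = build2_map(ports)
```

Because the map is regenerated with every package set, the version-lag problem above disappears: whoever built your package set also built your map.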

                              Note that the mapping needs to contain versions as well as names because the version in the package often doesn’t directly correspond to the upstream version. This gets especially tricky when the packaged version carries patches that are not yet upstreamed.

                              Oh, and options get more fun here. A lot of FreeBSD ports can build different flavours depending on the options that are set when building the package set. This needs to be part of the mapping. Again, this is fairly easy to drive from the port description but an immense amount of pain for anyone to try to generate from anywhere else. My company might be building a local package set that disables (or enables) an option that is the default upstream, so when I build something that uses build2 I may need to statically link a version of some library rather than using the system one, even though the default for a normal FreeBSD user would be to just depend on the package.

                              While this might look a bit impure (why does exe{prog1} depends on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its source can “pick” what they need.

                              That is exactly what I want, nice! It feels like a basic thing for a C/C++ build system, yet it’s something I’ve not seen well supported anywhere else.

                              Sounds good. If this is public (or I can be granted access), I could even help.

                              It isn’t yet, hopefully later in the year…

                              Of course, the thing I’d really like to do (if I ever find myself with a few months of nothing to do) is replace the awful FreeBSD build system with something tolerable and it looks like build2 would be expressive enough for that. It has some fun things like needing to build the compiler that it then uses for later build steps, but it sounds as if build2 was designed with that kind of thing in mind.

            3. 2

              Not all small projects will necessarily grow into a large project. The trick is recognizing when or if the project will outgrow its infrastructure. Makefiles have a much lower conceptual burden, because Makefiles very concretely describe how you want your build system to run; but they suffer when you try to add abstractions to them, to support things like different toolchains, or creating the compilation database (I assume you’ve seen bear?). If you need your build described more abstractly (like, if you need to do different things with the dependency tree than simply build), then a different build tool will work better for you. But it can be hard to understand what the build tool is actually doing, and how it decided to do it. There’s no global answer.

              1. 4

                This is the CMake file that you need for a trivial C/C++ project:

                cmake_minimum_required(VERSION 3.20)
                add_executable(my-prog src1.c src2.cc)
                

                That’s it. It gives you targets to build my-prog and to clean the build; it works on Windows, *NIX, or any other system with a vaguely GCC- or MSVC-like toolchain; it supports debug and release builds; and it generates a compile_commands.json for my editor to consume. If I want to add a dependency, let’s say on zstd, then it becomes:

                cmake_minimum_required(VERSION 3.20)
                find_package(zstd CONFIG REQUIRED)
                add_executable(my-prog src1.c src2.cc)
                target_link_libraries(my-prog PRIVATE zstd::libzstd_static)
                

                This will work with system packages, or with something like vcpkg installing a local copy of a specific version for reproducible builds.
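                For the vcpkg route, the manifest can be as small as this (a sketch; the project name and version here are invented, and you configure with vcpkg’s CMake toolchain file to make it take effect):

```json
{
  "name": "my-prog",
  "version": "0.1.0",
  "dependencies": [ "zstd" ]
}
```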

                Even for a simple project, the equivalent bmake file is about as complex and won’t let you target something like AIX or Windows without a lot more work, doesn’t support cross-compilation without some extra hoop jumping, and so on.

                1. 1

                  The common Makefile for this use case will be more lines of code (I never use bsd.prog.mk, etc., unless I’m actually working on the OS), but I think the word “complex” here obscures something important: a Makefile can be considered simpler due to its very simple execution model, or a CMakeLists.txt can be considered simpler since it describes the compilation process more abstractly, allowing it to do a lot more with less.

                  For an example of why I think Makefiles are conceptually simpler: it is just as easy to use a Makefile with custom build tools as it is to compile C code. It’s much easier to understand:

                  %.c : %.precursor
                      python my_tool.py $< -o $@
                  

                  than it is to figure out how to use https://cmake.org/cmake/help/latest/command/add_custom_command.html to similar effect; or, to act like a first-class citizen, to make add_executable work with .precursor files.
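                  For comparison, a hedged sketch of what that rule might look like with add_custom_command (file names are invented, and note that unlike the make pattern rule this is one command per output, not a generic pattern):

```cmake
# Hypothetical equivalent of the %.precursor -> %.c rule above.
add_custom_command(
  OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/gen.c
  COMMAND python my_tool.py ${CMAKE_CURRENT_SOURCE_DIR}/gen.precursor
          -o ${CMAKE_CURRENT_BINARY_DIR}/gen.c
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/gen.precursor
  VERBATIM)
add_executable(my-prog ${CMAKE_CURRENT_BINARY_DIR}/gen.c)
```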

              2. 2

                CMake gets a lot of criticism, but I think a fair share of its problems comes down to people not having stopped to learn the tool. It’s a second-class language for some people, just like CSS.

                1. 2

                  There’s an association issue here too. Compiling C++ sucks. It is significantly trickier than in many other languages. The dependency ecosystem is far less automated too. Many dependencies are incorporated into a conglomerate project, and the build needs of those dependencies come along for the ride. The problems with all of these constituents are exposed as a symptom of the top-line build utility for the parent project. If CMake had made its first inroads with another language, it would likely have a more nuanced reputation. Not that it doesn’t bring its own problems, but it surely takes the blame for a lot of C++’s problems.

              1. 1

                I managed to boot my Mango Pi MQ Pro (a cheap RISC-V 64 SBC) with these new images: https://bret.dk/armbian-on-the-mangopi-mq-pro/ so I’m going to test some of my software on real RISC-V hardware!

                1. 2

                  I bought a Mango Pi MQ-PRO (a small SBC with a RISC-V64 processor and 1GB of RAM). I’ll try to get it running, as software support isn’t very good yet, and if I succeed, I’ll try to compile some of my projects and play with RISC-V!

                  1. 1

                    Mango Pi MQ-PRO

                    How’d you actually get to buy one of these?

                    1. 2

                      It has been available on AliExpress for short periods of time. I was lucky enough to be able to get the 1 GB version and it arrived just last weekend.

                  1. 1

                    One of the best frameworks to use if you’re starting a new project on the JVM with Kotlin. In this release, they have adopted more “common” terminology, but the essence is very similar to Ktor 1.0, which already made heavy use of Kotlin-only features.

                    1. 3

                      I would like to add Prolog: even though it’s logic programming, some of its idioms are also used in FP. It’s already 50 years old, and the Prolog core is still the same. It was standardized in the 90s and there are lots of systems.

                      The bad news is that the ISO standard is a bit small when it comes to real-world applications, and systems outside the standard can implement a lot of different things.

                      1. 1

                        I like Prolog too, I’ve used it quite a bit, but it has different use cases than your usual prog lang. I could see both Prolog and SML living forever.

                      1. 2

                        This talk suffers from “if you have a hammer, everything looks like a nail”. At the end of the talk, it tries to apply Conway’s Law to everything (libraries, containers, microservices, …), saying the same reason is behind all of them. But it is not. Of course, these things allow us to split the work, but many of them are not done in the first place because of that; they’re done because of DRY, reliability, reproducibility, isolation, …

                        Still, most of the message applies: the communication bandwidth between team members is higher than with the rest of the teams, and that is going to create not-so-focused products. So, as long as we’re still human, the law will apply.

                        1. 1

                          I’ll try to finish reading the book “The Craft of Prolog”

                          1. 3

                            As I want to specialize more in the niche where I’m currently most active, I’ll try to learn more Prolog. This means finishing The Craft of Prolog and probably reading the WAM Book. Also, I’ve been pondering making my own toy Prolog, but I have doubts because it’s a long task even for a toy, and that time is probably better invested in making patches and libraries for Scryer Prolog. However, Scryer Prolog is not yet ready; it’s missing three important things for me to use it more myself: GC, FFI and threading.

                            Also, I have some interest in Erlang but it’s another independent ecosystem (I’m already into Python, JS, Rust, Prolog and Java/Kotlin worlds) so I’m not sure if it’s worth it.

                            Also, I’d like to start a game project, but neither Prolog nor Erlang is going to fit well (Prolog maybe, but some technical bits are missing for it to be usable). Not having a real type system also makes me wonder if Rust is just a better option.

                            1. 1

                              I really hope they keep improving Kotlin/Native; it’s one of the ideas with the most potential. However, right now it’s not ready yet.

                              1. 1

                                Have they sorted out the memory management and threading situation, or is that still “we’ll break the world every release until we find something that sticks”? Because I’m super excited about the idea of Kotlin/Native, but the third or fourth time they redefined such core stuff, I decided to check out for awhile.

                              1. 6

                                The pattern fits fine but I wonder how much of this boilerplate could be reduced with some kind of named arguments support.

                                  1. 2

                                    I’m preparing for the classes I’m going to give starting next week: Scala+Spark and Prolog at my alma mater.

                                    1. 4

                                      For quite a while there were competing query languages for RDBMSes. The most famous was QUEL, the original language of Ingres, the father of Postgres. More info here: https://www.holistics.io/blog/quel-vs-sql/

                                      1. 4

                                        I’d like to work on Prolog for some stuff. For me, it’s a language that is very concise, and I feel very confident that the code I write is correct, even if I write fewer tests. It is not perfect: there are occasional performance issues and a lack of tooling, so for anything serious I will probably not choose it. My latest public work with Prolog is an HTTP server for Scryer Prolog. It still needs more work, but it is already there if you want to try it. Some months ago I also shipped an HTML5 game with some parts written in Tau Prolog.

                                        1. 1

                                          What is your experience with Prolog? Do you use any of the advanced CLP features in some of the Prologs?

                                        1. 25

                                          From time to time this seems to pop up, and they talk as if it had never been tried at big scale. But that’s wrong; it has already been tried. In my country, Spain, every citizen has a digital certificate (the request process is tedious: you need a Java program and to go physically to certain offices), and you can use it for any bureaucratic task, including paying taxes, buying national debt, requesting certificates from a judge, … You can also use it with some banks and even at my university. I, as a user, think that it has some real benefits but also some cons.

                                          However, in the end, most people don’t like the workflow. Administrations have put login/password systems in place for the most commonly used services, because the request process, securing the certificates, moving them between computers and so on are complicated. Also, the web browser interfaces are bad and scary, and for a long time some browsers (Firefox, looking at you) rejected the certificates as invalid (Chrome and IE did just fine).

                                          Some other people prefer the electronic ID card, which is similar but needs special hardware; in the end, though, people find it easier to reason about than a pure digital certificate.

                                          1. 4

                                            Do you think it would be better with a USB smart-card such as a Yubikey or something similar? IMO any system that expects a user to know how to secure and use a certificate file is a bad system, but it’s a usability problem, not really a technical one.

                                            1. 7

                                              That’s basically WebAuthentication.

                                              1. 1

                                                I’m talking about something like a government-issued USB smartcard or hardware token, not the APIs that would use those tokens.

                                                1. 2

                                                  I believe this is what Estonia does with “e-Estonia”; I don’t really know of the details though.

                                                  1. 2

                                                    Yes – we get an ID card which is also a smartcard. You use it with something like this, and you can authenticate and sign using two different PINs on it. It uses the normal smartcard APIs so it tends to Just Work in browsers on major operating systems. They recently introduced a smartphone version that you can use alongside, but you need to register first using either the smartcard or visiting a bank.

                                              2. 3

                                                Sweden’s Bank ID makes managing certificates really easy. I use it to authenticate to everything from my bank and the tax authority to my company’s payroll software and our kid’s school’s attendance interface.

                                                The certificates are handled via my bank. The initial setup requires a personal visit with ID; after that you get a reminder every few years that it’s time to renew.

                                                https://www.bankid.com/en/

                                                1. 2

                                                  Same in Luxembourg with https://www.luxtrust.lu/. It works really well, and you have the smart card version (mostly for corporate use) and the 2FA version.

                                                  1. 1

                                                    Looks very similar but instead of government-issued, they’re bank-issued :)

                                                    1. 2

                                                      Here in Sweden, the traditional issuers of ID cards have been banks. It’s changed now since the rules around passports have tightened, so the government offers a “national ID card” in addition to a passport (which is now accepted as ID - it used not to be up to the standards of ID cards).

                                                      You do need a bank “partnership” (usually a checking account) but these are easy to get, and free up until you’re starting to earn some money. Our kid got an account, debit card and a Bank ID at 16.

                                                  2. 1

                                                    Yes, the electronic ID card I was talking about at the end is a smartcard with password protection, but you need a reader and (another) Java program in the middle. If you forget your password, you can go to the police office, where they have machines that reset it. It is not perfect, but it feels more natural to regular users.

                                                  3. 1

                                                    In Poland it’s the other way around: you may use your bank account as a sort of ID. It works surprisingly well; some formalities can be done through your bank’s interface.

                                                    1. 1

                                                      It may not have worked out that well in Spain, but in Estonia it seems to be working well (from the outside, at least. Any Estonians can feel free to correct me). I don’t like how much information is stored in the ID, but the digital signature and ID verification seems to work well, at least according to Wikipedia https://en.wikipedia.org/wiki/Estonian_identity_card#Uses_for_identification

                                                      1. 1

                                                        The identity card of Estonia and the one from Spain are the same model! When a security bug was discovered in Estonia, they needed to invalidate a whole bunch of cards here too, because they were the same :) However, the physical smartcard and the pure client-side certificate (which the post was talking about) are two related but different systems (which I think makes it harder for the average citizen to understand). The smartcard flow is not really standard and needs a Java program in the middle.

                                                    1. 16

                                                      The year of Prolog! Yes, I’m serious: in recent years we’ve seen a new wave of Prolog environments flourish (Tau, Trealla and Scryer), which this year can reach production-ready status. At least, that’s what I hope, and I’m helping these environments with some patches as well.

                                                      1. 19

                                                        year_of(prolog, 2021).

                                                        1. 6

                                                          There was even a new stable release of Mercury late last year. It’s, uh, I’m not personally betting on it getting wide scale adoption, but I do personally feel that it’s one of the most aesthetically pleasing bits of technology I’ve ever tried.

                                                          1. 5

                                                            A couple of years ago I hacked on a Python type inferencer someone wrote in Prolog. I wasn’t enlightened, despite expecting to be after a bunch of HN posts like this.

                                                            https://github.com/andychu/hatlog

                                                            For example, can someone add good error messages to this? It didn’t really seem practical. I’m sure I am missing something, but there also seemed to be a lot of deficiencies.

                                                            In fact I think I learned the opposite lesson. I have to dig up the HN post, but I think the point was “Prolog is NOT logic”. It’s not programming and it’s not math.

                                                            (Someone said the same thing about Project Euler and so forth, and I really liked that criticism. https://lobste.rs/s/bqnhbo/book_review_elements_programming )

                                                            Related thread but I think there was a pithy blog post too: https://news.ycombinator.com/item?id=18373401 (Prolog Under the Hood)

                                                            Yeah this is the quote and a bunch of the HN comments backed it up to a degree:

                                                            Although Prolog’s original intent was to allow programmers to specify programs in a syntax close to logic, that is not how Prolog works. In fact, a conceptual understanding of logic is not that useful for understanding Prolog.

                                                            I have programmed in many languages, and I at least have a decent understanding of math. In fact I just wrote about the difference between programming and math with regards to parsing here:

                                                            http://www.oilshell.org/blog/2021/01/comments-parsing.html#what-programmers-dont-understand-about-grammars

                                                            But I had a bad experience with Prolog. Even if you understand programming and math, you don’t understand Prolog.

                                                            I’m not a fan of the computational complexity problem either; that makes it unsuitable for production use.

                                                            1. 2

                                                              Same. Every time I look at Prolog-ish things I want to be enlightened. It just never clicks. However, I feel like I know what the enlightenment would look like.

                                                              I don’t fully grok logic programs, so I think of them as incredibly over-powered regexes over arbitrary data instead of regular strings. They can describe the specific shape of hypergraphs and stuff like that. So it makes sense to use it when you have an unwieldy blob of data that can only be understood with unwieldy blobs of logic/description, and you need an easy way to query or calculate information about it.

                                                              I think the master would say “ah, but what is programming if not pattern matching on data”? And at these words a PhD student is enlightened. It seems to make sense both for describing the tree of a running program and for smaller components like conditionals. It also seems like the Haskell folk back their way into similar spaces. But my brain just can’t quite get there.

                                                              1. 2

                                                                Sorry to hear that. For me, Prolog is mainly about unification (which differs from most pattern matching I’ve seen because the unifications you’ve already made between variables are remembered) and backtracking (which was criticized for being slow, but in modern systems you can use a different strategy for every predicate; the most famous alternative is tabling). Beyond that, it should be used like a purely functional language (it isn’t one, and lots of tutorials use side effects, but by keeping your code pure you can reason about a lot of things, which makes debugging much easier).
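
                                                                To show what I mean by unification remembering bindings, here is a toy sketch in Python (my own illustration, nothing like how a real Prolog engine is implemented): terms are tuples, variables are `Var` objects, and a substitution dict carries the bindings forward, which is what separates unification from one-way pattern matching.

```python
class Var:
    """A logic variable, identified by object identity."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def walk(t, subst):
    # Follow variable bindings until we reach a non-variable
    # or an unbound variable.
    while isinstance(t, Var) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # functor/arity clash: unification fails

X, Y = Var("X"), Var("Y")
s = unify(("point", X, 2), ("point", 1, Y), {})
print(s)  # prints {X: 1, Y: 2}
```

Note that a second unification against `s` sees the earlier bindings: unifying `X` with `Y` under `s` succeeds only because both already walk to concrete values.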

                                                                I did Prolog at university (which is not very rare in Europe), where we studied the logic parts of Prolog and where they come from. Yes, it is logic, but heavily modified from the “usual way” to perform better, and it’s not 100% mathematically equivalent (for example, negation can produce bad results, there is no occurs check, …). It also uses backward chaining, which is the reverse of what people usually learn. On top of that, lots of people use the cut (!), which can improve performance by pruning solutions, but it makes the code non-pure and harder to reason about.

                                                                However, what I really liked about Prolog was the libraries built from these simple constructs: bidirectional libraries that are very useful, like DCGs (awesome stuff; I solved some Advent of Code problems using only these “pattern matching over lists” helpers), findall, clpz, reif, dif, and CHR if you want forward-chaining logic (also available in most Prolog systems).

                                                                Yes, computational complexity is a problem; backtrackable data structures will always carry a penalty, but it’s not unfixable, and there are ongoing efforts like the recent hashtable library.

                                                                That said, in the end it’s also a matter of preference. I’ve seen in the repo that you consider Haskell easier. For me it’s just the opposite: Prolog fits my mind better; there are fewer abstractions going on than in Haskell, IMO.

                                                                For some modern Prolog, you can check out my HTTP/1.0 server library written in Prolog: https://github.com/mthom/scryer-prolog/pull/726/files

                                                                1. 1

                                                                  FWIW I think I’m more interested in logic programming than Prolog.

                                                                  I am probably going to play with the Souffle datalog compiler.

                                                                  And a hybrid approach of ML + SMT + Datalog seems promising for a lot of problems:

                                                                  https://arxiv.org/abs/2009.08361

                                                                  Prolog just feels too limited and domain specific. I think I compared it to Forth, which is extremely elegant for some problems, but falls off a cliff for others.

                                                            1. 1

                                                              I have used Feedly on both desktop and mobile since the early days (at the beginning you needed a Firefox extension to run it). It’s not perfect (nothing is), but it serves me well and I don’t plan to change. I use the free version (with ads), but it’s the kind of ads I tolerate.

                                                              1. 2

                                                                My personal tip is to use Docker Compose. Some people might disagree here, but I think containers make things easier in the end (you barely depend on the host for anything). Just be sure you can run your app on your dev machine using Docker Compose, then build and move the same containers to the servers. You are also free to use whatever hosting service you want. And if you use Docker Compose, you don’t need to learn or configure systemd, etc., because it manages that for you. If you want automatic restarts on app crashes, you might need Swarm, which is very similar to Compose.
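
                                                                As an illustration, a minimal Compose file for this setup might look like the following (service names, image tags, and credentials are all made up for the example):

```yaml
# Hypothetical docker-compose.yml: one app container plus its database.
version: "3.8"
services:
  app:
    build: .                     # built from the Dockerfile in this repo
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
    restart: unless-stopped      # simple crash recovery without Swarm
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```

The same file runs with `docker compose up` on a laptop and on the server, which is the whole point: the host only needs Docker.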

                                                                Also, prepare a script (I use Ansible, but even Bash would work) for automatic backups of the data. Storing it on Azure or Amazon S3 is simple and inexpensive; check out the cold storage/Glacier options if they suit you.
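
                                                                A Bash version of such a backup script could be as small as this sketch (the directory names and bucket are hypothetical, and the upload line needs the AWS CLI plus credentials, so it is left commented out):

```shell
#!/bin/sh
# Minimal backup sketch: archive a data directory with a timestamp,
# then upload the archive to cold storage.
set -eu

DATA_DIR="${DATA_DIR:-./data}"          # directory (or mounted volume) to back up
BACKUP_DIR="${BACKUP_DIR:-./backups}"

mkdir -p "$DATA_DIR" "$BACKUP_DIR"
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="$BACKUP_DIR/backup-$STAMP.tar.gz"

tar czf "$ARCHIVE" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
echo "created $ARCHIVE"

# Upload to cold storage (requires AWS CLI and credentials):
# aws s3 cp "$ARCHIVE" "s3://my-backup-bucket/" --storage-class DEEP_ARCHIVE
```

Run it from cron (or an Ansible scheduled task) and you get timestamped archives you can push to whichever cold-storage tier fits your retention needs.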

                                                                1. 3

                                                                  I don’t know if this applies to the AGPL, but under the GPL it is not mandatory to have a GitHub/GitLab repo or even a public tarball of your code. It’s just that if the people you distributed to ask for the code, you must make it available to them; it doesn’t need to be released publicly. So asking a company to put up a link to the source code goes beyond what the GPL requires.

                                                                  1. 2

                                                                    For distribution under the regular GPL you’re correct. If you ship a product, it’s advisable to provide a CD or USB drive with the source code; then you never have to worry about it or make it public. General best practice is to just make it available online for convenience. You can also include a written offer, valid for at least 3 years from the last release/support date; then you have to ship a physical medium with the code on request. That’s how my company does it for the coffee machines: on request you get a USB stick. I’ve included pictures of that in the article.

                                                                    For the AGPL I’m not sure, due to the network aspect. In any case, they never provided the source, neither online nor offline.

                                                                    1. 7

                                                                      Okay, I think I know: it has to be available over the network. Quoting the AGPL:

                                                                      13. Remote Network Interaction; Use with the GNU General Public License.

                                                                      Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.

                                                                      https://www.gnu.org/licenses/agpl-3.0.en.html

                                                                      1. 3

                                                                        Thanks for the detailed reply. That’s another aspect where the AGPL differs from the GPL, then (in a good way, I think). My intuition that they might be similar was wrong. I’ll edit my comment.