1. 9
  1. 3

    I struggle with this kind of thing because I don’t really know what a ${LANGUAGE} package manager is. In the .NET / Java world, this is something I can just about understand for pure-Java/.NET packages: it’s a set of bytecode files that provide a set of classes in a given namespace. It gets a bit more interesting when a package depends on some native library. If, for example, I want a Java wrapper around libavcodec, do I expect a Java package manager to provide the libavcodec shared library? Now it’s not a Java package manager, it’s a Java + Linux/Windows/macOS/*BSD/whatever package manager, and it needs to know how to get the right version of the library for each of my build targets. If I have a .NET or Python wrapper, does the same apply? Do all three need their own logic for fetching the libavcodec binary? If I’m running on a different architecture, do they all now need the logic for building a large C library from source? Do they also fetch and build all of the dependent libraries? (The FreeBSD package lists 9 build dependencies and 22 dependent libraries for ffmpeg. The Python wrapper, when installed from the OS package manager, picks these up automatically; I have no idea how it works with pip.)
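
    The split in responsibilities can be sketched in a few lines of Python (a hedged illustration of the common pattern, not how any particular wrapper is actually implemented): ctypes can locate and load a shared library, but it never fetches or builds one; an OS package manager must have installed it first.

    ```python
    # Sketch of where a language-level wrapper stops and the OS package
    # manager begins. ctypes.util.find_library only searches the system's
    # standard library locations; it cannot fetch or build libavcodec.
    import ctypes
    import ctypes.util

    path = ctypes.util.find_library("avcodec")
    if path is None:
        # All the wrapper can do is report the gap; providing the shared
        # library (and its dependent libraries) is the OS's job.
        print("libavcodec not found; install it with the OS package manager")
    else:
        avcodec = ctypes.CDLL(path)  # load the library the OS provided
        print("loaded", path)
    ```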

    For C/C++, this makes even less sense. The difference between the install package for a library and the development one is just whether it contains header files (and possibly debug symbols), and on non-Debian systems everyone realises that headers are tiny and so puts them in the same package. The only reason that you need a C/C++ package manager is if you want to bundle the library along with your program for distribution (either via static linking or by shipping the shared library). If you do that, then the library can’t be updated independently from the program if someone finds a security vulnerability. If you’re doing anything open source then you’re making work for downstream packagers by using a tool like this, because they’ll want to avoid duplication and depend on the version of the library that is already packaged for the OS that they’re building packages for.

    So the real use of this seems to be for proprietary software depending on open-source libraries and wanting to ship bundles of a program and all dependencies.

    The thing I’d actually like is some kind of consistent naming so that I can declare in a manifest for my program that I depend on libfoo.so version 4.2.1 or later and have that mapped automatically to whatever package installs that library, its headers, and its pkg-config files. Then apt, yum, pkg, and so on can all just be extended to parse the manifest and install the build dependencies for me (and when anyone wants to build packages for any of these, they just need to run the tool once on the dependency manifest to generate the package build and run dependency lists).
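
    A toy sketch of that manifest idea (every name here is hypothetical; as far as I know no such tool exists): the program declares canonical names and minimum versions, and each OS ships one map from canonical names to its own packages.

    ```python
    # Hypothetical manifest scheme: canonical library names plus minimum
    # versions on the program side, and a per-OS translation map on the
    # package-manager side. All names below are invented for illustration.
    MANIFEST = {"libfoo": "4.2.1"}  # "I depend on libfoo >= 4.2.1"

    # Each OS maintains one map from canonical names to the package that
    # installs the library, its headers, and its pkg-config files.
    OS_MAP = {
        "debian":  {"libfoo": "libfoo-dev"},
        "freebsd": {"libfoo": "foo"},
    }

    def build_deps(manifest, os_name):
        """Translate a canonical dependency list into OS package names."""
        return [OS_MAP[os_name][name] for name in manifest]

    print(build_deps(MANIFEST, "debian"))   # ['libfoo-dev']
    print(build_deps(MANIFEST, "freebsd"))  # ['foo']
    ```

    apt, yum, pkg, and so on would then only need this translation step; the build and run dependency lists fall out of the same map.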

    1. 1

      I can declare in a manifest for my program that I depend on libfoo.so version 4.2.1 or later and have that mapped automatically to whatever package installs that library

      The problem is that ‘libfoo.so version 4.2.1 or later’ is actually not as specific as you might think.

      Sometimes, there are multiple libraries with the same name. Different operating systems choose differently how to deal with the collision.

      Sometimes, there are multiple libraries with different names but which nominally implement the same API. For example, blas and openblas. Maybe you targeted blas for your project, but it would also work fine with openblas.

      Or, maybe you require some openblas-specific or ncurses-specific features that aren’t guaranteed to be offered by providers of libblas.so/libcurses.so.

      Different libraries have different standards for stability. If you targeted libfoo version 4.2.1, then version 4.2.6 will probably work fine too. Will 4.3.0? 5.0.0?
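
      The stability point is easy to demonstrate with a toy version check (a hypothetical helper, not any real resolver): a plain ‘version 4.2.1 or later’ rule accepts 5.0.0 just as readily as 4.2.6, so the manifest would also have to encode each library’s own compatibility conventions.

      ```python
      # Toy resolver rule showing why "4.2.1 or later" is under-specified.
      # Under semver-style conventions a major-version bump may break the
      # ABI, but a bare ">=" comparison cannot know that.
      def satisfies(required, candidate, assume_major_breaks=True):
          req = tuple(map(int, required.split(".")))
          cand = tuple(map(int, candidate.split(".")))
          if assume_major_breaks and cand[0] != req[0]:
              return False  # treat a major bump as incompatible
          return cand >= req

      print(satisfies("4.2.1", "4.2.6"))  # True: patch bump, almost surely fine
      print(satisfies("4.2.1", "4.3.0"))  # True: minor bump, probably fine
      print(satisfies("4.2.1", "5.0.0"))  # False under this assumption
      ```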

      1. 1

        The problem is that ‘libfoo.so version 4.2.1 or later’ is actually not as specific as you might think.

        That’s exactly my point. The thing that I want is a canonical name for each library, all of its build-time options, and its version. I then want each OS to provide a map from each of those to things available in the package system.

      2. 1

        In a C / C++ context, it only makes sense to me if you want to statically link everything. And if you want to do that, a package manager fetching binary blobs would definitely not be the right generic manager (at the risk of pointing to a different stdc++, libc, etc.). Dynamic libraries, as much as I dislike them, are handled OK by apt / yum / pkg and so on (as you mentioned). Since you are doing system-wide changes with these anyway, wrapping them in a Docker image is fine.

      3. 2

        C/C++ needs an officially supported package manager like other languages have

        1. 2

          Good luck trying to implement such a thing on all the platforms C/C++ supports.

          1. 1

            Does Cargo from Rust not support many platforms? Which platforms does it not support that are supported by C/C++? Also, why does it need to support 100% of all platforms? Why not just 99% of users and their use cases (note: I didn’t say 99% of use cases, but of users)? Doesn’t the same thing apply to compilers?

            1. 3

              We have open sourced a memory allocator written in C++ that supports the following platforms:

              • Windows
              • Linux
              • macOS
              • OpenEnclave
              • FreeBSD (+FreeBSD kernel mode)
              • OpenBSD
              • NetBSD
              • Solaris
              • Haiku

              And the following architectures (32- and 64-bit variants):

              • x86
              • Arm
              • PowerPC
              • MIPS
              • RISC-V

              This is a fairly small set of mostly mainstream platforms that C++ supports. We don’t (yet?) support any embedded systems, mainframe platforms, and so on. Clang supports platforms that we don’t, and Clang supports far fewer targets than GCC. Rust and Cargo support far fewer platforms than this. A large part of the value of C++ comes from the fact that the same code can work on any of these platforms and architectures. Rust may get there in time, but it’s starting from supporting a few architectures on basically-POSIX systems and expanding, so it can take any tooling developed along the way with it. Mandating something like this for the C++ standard would require either designating a large number of C++ targets as second-class (which will never get WG21 approval) or designing something that can support all C++ targets (including being usable in cross-compile toolchains).

              1. 1

                You’d have to support all the various architectures, binary formats, encodings, path syntaxes, etc. of the various platforms. The C/C++ standards are tortured reads precisely because they leave so much up to implementations. Such a package manager would be better as an external thing - and indeed, vcpkg and conan already are.

          2. 1

            There’s no actual code in the GitHub repository linked here; one has to click through to another repository.

            1. 1

              Because it is implemented on top of xmake, xrepo is just a wrapper script: all of the implementation lives in xmake, so as long as xmake is installed, the xrepo command can be used directly.