I don’t deny most of the frustrations people feel with package management, but some of these examples look too hard at one side of the ledger.
Re: Firefox on Debian, there’s no reason the Firefox packagers couldn’t take on the responsibility of packaging the newer versions of the dependencies that are giving them trouble. The conflict isn’t between multiple versions of a library. The conflict is over the shortest, coolest, default name. Usually the short default name like ‘libxyz’ goes to the newest version, and outdated versions get a version tag in their name. In some prominent cases the default name is abandoned entirely:
sudo aptitude search '~i libpng'
i A libimage-png-libpng-perl - Perl interface to libpng
i libpng-dev - PNG library - development (version 1.6)
i A libpng-tools - PNG library - tools (version 1.6)
i libpng12-0 - PNG library - runtime
i libpng12-0:i386 - PNG library - runtime
i A libpng16-16 - PNG library - runtime (version 1.6)
i A libpng16-16:i386 - PNG library - runtime (version 1.6)
When they become experienced, they stop seeing these flaws as actual problems and start seeing them as something more or less normal.
I don’t consider these pains normal, but I do think distros have provided our industry with a very helpful forcing function. Many, many, many software packages have up-to-date dependencies because of the upgrade pressure distros provide. This leads to a better overall security landscape and better compatibility between applications. The Firefox example is a case where that effort worked in the other direction, pressuring an application to retain outdated dependencies.
containerization lets applications use their own dependencies
To my mind, the value of this statement depends on the balance between distro packaging acting as an upgrade forcing function and holding applications back from shipping against newer dependencies. I personally think web browsers, with their immense attack surface and consequent frequent updates, are a somewhat special case. Debian and other distros should loosen some of their dependency management policies to allow these special-case programs to bundle private dependencies, similar to what Firefox already does with its self-packaged, self-updating installation option.
This conflict for “coolest name” causes practical pain for build systems, because you can’t just hardcode pkg-config libpng, or just specify the package in Ansible or whatever, because it’s sometimes libpng, sometimes libpng16, sometimes it’s some oddball package that requires running libpng-config instead of pkg-config, and all this will break if libpng ever releases 1.8.
Arbitrarily stuffing the version number into the package name is just a crude workaround for package management systems that are missing an important feature: multi-versioning. They should be able to find libpng under its canonical name, and let you specify which version you want. It’s silly to argue they don’t need the ability to have multiple versions of the same package on the same OS when they already do that, just poorly.
They should be able to find libpng under its canonical name, and let you specify which version you want. It’s silly to argue they don’t need the ability to have multiple versions of the same package on the same OS when they already do that, just poorly.
This is not a Linux distro problem. It’s just the same problem that’s always existed with runtime dynamic linking, and has gone by many names (like “DLL hell”).
The only ways to “solve” it are:
1. To isolate each application’s dependencies into a separate space that no other application will see or use. This is a solution that gets attacked, laughed at, and criticized as too complex when, say, a language package manager adopts it (see: node_modules, Python virtual environments), or
2. To abolish runtime dynamic linking and force all applications to statically link all dependencies at compile time. This is the approach taken by Rust and Go, and most praise for how good/simple their “packaging” is really derives from the fact that neither language tries to solve the dynamic-linking problem.
I think this should be a bit more precise. Both Rust and Go still use dynamically loaded libraries for FFI, with all the usual issues described here. It’s not quite dynamic linking, but in this context it behaves the same, so I wouldn’t go as far as “abolish”.
The dependency-isolation option could also use a point 1.1, with nixpkgs as the example. It mixes both worlds by isolating dependencies, which allows multiple versions, while still sharing them globally, which prevents duplication, and it keeps the effect of the ecosystem pushing applications to work with updated deps.
It is entirely a Linux distro problem, because all of these issues stem from the design they’ve chosen. Naming and selection of versions is entirely under their control. They have chosen to be only a thin layer on top of dumping files into predetermined file system paths and C-specific dynamic linking. This is very limiting, but these are their own limits they’ve chosen and are unwilling to redesign.
BTW: Rust does support dynamic linking, even for Rust-native libraries. It doesn’t have an officially sanctioned standard ABI, and macros, generics, and inline functions end up in the wrong binary. This is technically the same situation C++ is in, only more visible in Rust, because it uses dependencies more.
This is technically the same situation C++ is in, only more visible in Rust, because it uses dependencies more.
This is not quite right. C++, like C, does not mandate an ABI as part of the standard. This is an intentional choice to allow for a large space of possible implementations. Some platforms do not standardise an ABI at all. On embedded platforms, it’s common to recompile everything together, so no standard ABI is needed. On Windows, there was a strong desire for ABIs to be language agnostic, so things like COM were pushed for cross-library interfaces, allowing C++, VB, and .NET to share richer interfaces.
In contrast, *NIX platforms standardised on the Itanium C++ ABI about 20 years ago. GCC, Clang, XLC, ICC, and probably other C++ compilers have used this ABI and therefore been able to generate libraries that work with different compilers and different versions of compilers for all of this time. This includes rules about inline functions and templates, as well as exceptions, RTTI, thread-safe static initialisers, name mangling, and so on.
Existence of a de-facto ABI still does not make templates, macros, and inlining actually support dynamic linking. It just precisely defines how they don’t work. These features are fundamentally tied to the code on the user side, and end up at least partially in the user binary. Some libraries are just very careful about not using macros/inlines/templates near their ABI in ways that would expose that they don’t work correctly with dynamic linking (having a symbol for a particular template instantiation is not sufficient, because a change to the template’s source code could have changed what needs to be instantiated). Rust can do the same.
Swift is the only language that has a real ABI for dynamic linking of generics that actually supports changing the generic code, but it comes at a significant runtime cost.
Existence of a de-facto ABI still does not make templates, macros, and inlining actually support dynamic linking
The Itanium ABI for C++ is not a de-facto ABI; it is a de-jure standard that has been specified as the platform ABI for pretty much every platform except Windows (Fuchsia has a few minor tweaks, as do a couple of other platforms, but they are all documented standards). It is maintained by a group of folks working on compiler toolchains.
It does support dynamic linking of templates. Macros have no run-time existence and so this does not matter. Templates, however, can be handled: you can declare a template specialisation in a header, define it in a DSO, and link against it to get the instantiation. If you want it to work with a custom type, then you need to place it in the header, but then the ABI specifies the way that these are lowered to COMDATs so that if two DSOs provide the same specialisation the run-time linker will select a single version at load time.
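For concreteness, here is a minimal sketch of one common way to do this in C++ (the extern template pattern); the library, file, and function names (mylib, twice) are invented for illustration, not taken from any real project:
// mylib.h -- shipped with the library
template <typename T>
T twice(T x) { return x + x; }
// Explicit instantiation declaration: consumers must not instantiate
// twice<int> themselves; the definition lives in the shared library.
extern template int twice<int>(int);
// mylib.cpp -- compiled into libmylib.so
// #include "mylib.h"
template int twice<int>(int);   // explicit instantiation definition, exported by the DSO
// consumer.cpp -- links against libmylib.so
// #include "mylib.h"
int main() { return twice(21) == 42 ? 0 : 1; }   // resolved against the DSO's exported symbol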
By the macro problem, I mean a situation like a dependency exporting a macro:
#define ADD(a,b) (a-b)
And then the dependency releases a new version with a bugfix:
#define ADD(a,b) ((a)+(b))
There’s no way to apply this bugfix to downstream users of this macro via dynamic linking. The bug is on the wrong side of the ABI boundary, and you have to recompile every binary that used the macro. Inline functions and templates are a variation of this problem.
This is true but is intrinsic to macros and not solvable for any language that has an equivalent feature and ships binaries. You can address it in languages that distribute some kind of IR that has a final on-device compile and link step.
You are conflating two concepts though:
1. Does the (implementation of the) language provide a stable ABI for dynamic linking?
2. Does a specific library provide a stable ABI for dynamic linking?
For C and C++, the answer is yes to the first question, but there is no guarantee that it is yes to the second for any given library. A C library with macros, inline functions, or structure definitions in the header may break ABI compatibility across releases. Swift provides tools for keeping structure layouts non-fragile across libraries, but it’s still possible to add annotations that generate better code for library users and break your ABI if you change the structure.
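To make the struct-in-the-header case concrete, here is a small sketch with an invented libbar (the library, struct, and field names are made up):
// bar.h -- as shipped in libbar v1
struct bar_options {
    int verbosity;
};
// app.cpp -- built against v1
int main() {
    bar_options opts{};            // sizeof and field offsets are baked in here
    opts.verbosity = 2;
    // Imagine this object being passed into a function in libbar.so.
    // If libbar v2 changes the header to
    //     struct bar_options { int verbosity; int log_fd; };
    // the new .so will read and write a log_fd field this binary never
    // allocated, even though every function symbol still resolves fine.
    return opts.verbosity == 2 ? 0 : 1;
}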
In C++, you can declare a template in a header and use external definitions to specify that your library provides specific instantiations. The ABI defines how these are then linked. Or you can allow different instantiations in consumers and, if two provide the same definition, then the ABI describes how these are resolved with COMDAT merging so that a single definition is used in a running program. If you change something that is inlined then the language explicitly states that this is undefined behaviour (via the one definition rule). The normal workaround for this is to use versioned namespaces so that v1 and v2 of the same template can coexist.
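A small sketch of that versioned-namespace idiom, with invented names (mylib, clamp_to_byte), just to show the shape of it:
namespace mylib {
    namespace v1 {
        // Old definition, kept so existing binaries keep their v1-mangled symbols.
        template <typename T>
        T clamp_to_byte(T x) { return x; }
    }
    inline namespace v2 {
        // New definition; unqualified mylib::clamp_to_byte now means this one,
        // and its symbols are mangled with v2, so the two versions coexist.
        template <typename T>
        T clamp_to_byte(T x) { return x < 0 ? T(0) : (x > 255 ? T(255) : x); }
    }
}
int main() {
    int a = mylib::clamp_to_byte(300);       // resolves to v2
    int b = mylib::v1::clamp_to_byte(300);   // old behaviour still reachable
    return (a == 255 && b == 300) ? 0 : 1;
}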
C++ provides a well-defined ABI on all non-Windows platforms and a set of tools that allow long-term stable ABIs. It also provides tools that let you sacrifice ABI stability for performance. It is up to the library author which of these are used.
Some libraries are just very careful about not using macros/inlines/templates near their ABI in ways that would expose that they don’t work correctly with dynamic linking
But that’s where I disagree. All of those things do work correctly with dynamic linking. Macros trivially work, because they always end up at the call site. Inline functions are emitted as COMDATs in every compilation unit that uses them; one version is kept in the final binary at static link time, and if multiple shared libraries use them then they will all point to the same canonical version. Templates can be explicitly instantiated in a shared library and used just like any other function from outside. If they are declared in the header then they follow the same rules as for inline functions.
All of these work for dynamic linking in ways that are well specified by the ABI. The thing that doesn’t work with dynamic (or static) linking is having two different, incompatible versions of the same function in the same linked program. That also doesn’t work within a single compilation unit: if you define the same function twice with different bodies then you get a compile error. If you define the same function twice with different implementations in different compilation units for static linking, then you get ODR violations and either linker errors or undefined behaviour at run time. Adding dynamic linking on top does not change this at all.
I think we have different definitions of “work”. I don’t mean it in the trivial sense “ld returns a success code”. I mean it in the context Linux distros want it to work: if there’s a new version of the library, especially a security fix, they want to just drop a new .so file and not recompile any binaries using it.
This absolutely can’t work if the security fix was in an inlined function. The inlined function is not in the updated .so (at least not as anything more than dead code). It’s smeared all over many copies of downstream binaries that inlined it, in its old, unfixed version.
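A tiny sketch of that scenario, with an invented libfoo header and function name, assuming nothing about any real library:
// foo.h -- shipped with libfoo 1.0
#include <cstddef>
inline char safe_index(const char *buf, std::size_t len, std::size_t i) {
    (void)len;
    return buf[i];                 // 1.0: the bounds check is missing
}
// app.cpp -- built against libfoo 1.0
int main() {
    const char msg[] = "hi";
    // The body of safe_index() above is compiled into this binary (inlined,
    // or kept as a local COMDAT copy). libfoo 1.1 can ship a fixed header and
    // a rebuilt libfoo.so, but dropping in the new .so changes nothing here;
    // the application has to be recompiled to pick up the bounds check.
    return safe_index(msg, sizeof msg, 0) == 'h' ? 0 : 1;
}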
I mean ‘work’ in the sense that the specification defines the behaviour in a way that is portable across implementations and allows reasoning about the behaviour at the source level.
Your definition of ‘works’ can only be implemented in languages that are distributed in source or high-level IR form. Swift, C, and so on do not provide these guarantees because the only way that you can provide them is by late binding everything that is exposed across a library boundary and that is too expensive (Swift defaults to this but library vendors typically opt out for some things to avoid the penalty).
This is why, in FreeBSD, we build packages as package sets and rebuild all dependencies if a single library changes. Since a single moderately fast machine can build all 30,000 packages in less than a day, there’s no reason not to do this.
It sounds like you agree that traditional distro packaging is sustainable as a model but has fallen behind on tooling. Multi-versioning package management tools would allow packages to continue to depend on older packages indefinitely.
The point I was trying to get across was that the crudeness of the workarounds in the existing distros has acted as a forcing function to upgrade dependencies. That is partially a social thing, because you don’t want to be the last package forcing the crudely named libssl0.9.8 to stick around. It is partially a balance-of-effort question, because the easiest thing for a packager to do is often to push patches upstream. Since outdated packages create a greater attack surface (as pointed out in the article) and other problems, here’s what I’m hoping to hear from Flatpak proponents: without distros, what will push maintainers to upgrade their dependencies? I get frustrated with conventional package managers, but I will also be very frustrated in a few years when I have 12 versions of org.freedesktop.Platform (or whatever other major dependency) installed by Flatpak because I have ~15 applications that just won’t re-package on a newer platform version.
Distros provide value in curation, security patching, and the hard battles with snowflake build systems required to make (mostly reproducible) binaries for every library. But the tooling used to get these benefits is IMHO past its breaking point. Just telling everyone to stop using any dependencies that aren’t old versions of dynamically linked C libraries isn’t sustainable. Manually repackaging every npm/pypi/bundler/cargo dependency into a form devs don’t want to use isn’t sustainable. The lack of robust support for closed-source software is a problem too.
Getting the ecosystem to move together is a social problem, but I don’t think shaming and causing frustration is the right motivator to keep it moving. The last package that depends on a dead version of a library is usually like that because the author has quit maintaining it, so they won’t care either way. And if the tooling makes this painful, it only punishes other users who aren’t in a position to fix it.
There are no silver bullets here, but Linux distro tooling has room to improve to make it easier. Instead of complaining that all these new semver-based multi-version-supporting package managers developed for programming languages make it too easy to add dependencies, they should look into making dependencies “too easy” in Linux too.
The article seems to be claiming that a solution that, intrinsically, requires more work is more sustainable. I don’t see how this can be true. If A and B both depend on libfoo, then libfoo needs to be packaged. If there is a security vulnerability in libfoo then the package needs to be updated and A and B should be rebuilt (not always required, but if the fix changed macros or inline functions then it may be). If they depend on different versions of libfoo then the latest version probably gets the security fix from upstream, but the one depending on the older version will need the distribution to do a security backport, or to analyse commit logs and determine that the bug was introduced after the packaged release (and not back-ported with any bug-fix backports).
This work is not avoided by using containers, isolated namespaces for library dependencies, or any similar things. If anything, these technologies make it more likely that you will get into a state where a single vulnerability disclosure leads to a combinatorial amount of work.
Functional package managers don’t get a mention here, but it is interesting to consider them in this article’s context.
On one hand they’re much like traditional package managers, in the sense that the burden of packaging is picked up by a third party who might not know the project intimately. This sort of burden still requires a great number of volunteer hours.
On the other, they side-step many of the messier aspects of dependency management and installation complexity through their isolation of packages and outputs. In a way one could see them as entirely ancillary to the article’s focus, because they can output either their own native package objects, or another format entirely (i.e. a Docker image in practice, but hypothetically, why not an AppImage or Flatpak?).
Returning to volunteer hours, I’ve also seen many a project that isn’t packaged by a third party but instead comes with a Nix or Guix manifest or package in the source tree. Even if this article is totally right, there could still be arguments for functional systems in the packaging of flatpaks by app devs, with the added benefit of functional tooling for dev environments in day-to-day work and onboarding c: