But that’s less about development environments in general than about consulting in that space.
I feel this pain fairly regularly, and that’s while working mainly on one project. There’s an implicit assumption that it’s possible for everyone to install the latest version of library/language X with no friction, which is rarely true before you even get to conflicts.
Most recently, for example, I had to install Ansible 2.0, which broke my home Ansible setup, which was still on 1.9.
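In hindsight, the usual workaround is to keep each version in its own virtualenv instead of installing system-wide; a rough sketch (the version pins are illustrative):

    # One virtualenv per Ansible version, side by side:
    virtualenv ~/venvs/ansible19 && ~/venvs/ansible19/bin/pip install 'ansible<2.0'
    virtualenv ~/venvs/ansible20 && ~/venvs/ansible20/bin/pip install 'ansible>=2.0,<2.1'
    # Pick whichever one you need per shell session:
    export PATH=~/venvs/ansible20/bin:$PATH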
This is exactly the problem I hoped to address by switching to NixOS; I suppose I’ll know in a year or so whether it has helped, from the absence of problems.
I had a memorable issue once where I discovered that an old version of MacOS used to break in fascinating, deep ways when users tried to install Ruby from source, conflicting with its internals. And, before Cabal sandboxes, it used to be a major effort to wipe and reinstall the Haskell world for whichever project I wanted to work on that day.
So, absolutely. “Just install version X of foo” is not a trivial instruction.
As someone who has switched to NixOS for Linux stuff:
It helps, but it’s no panacea. You’ll still run into issues like the GP’s, with things like breaking config-file updates.
BUT Nix helps there too, letting you go: eff this, I’ll roll back to the prior version of things and sort this out later. So you still win some, lose some. That said, it really is a lot nicer to test things in, as an example. I’m a big fan. I also use Nix on OS X, as I’m a sadist, but there too it’s a huge help, though I have to fix a lot of the derivations to get them working on OS X. But whatever.
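The rollback itself really is that cheap; a sketch:

    # List system generations, then switch back to the previous one:
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
    sudo nixos-rebuild switch --rollback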
Yeah, having to fix stuff before I can use it doesn’t bother me; I feel like most ways I’ve ever chosen to work have wound up needing that, and I might as well choose one where I feel like there’s some chance it’ll stay fixed.
I wish it had a solution for configuration state that lives in the home directory. That’s obviously pretty complicated since files that need to be mutable and files that I’d like not to be are side-by-side, so…
Because in making one, you’re expected to be a low-level sysadmin for your own machine.
I don’t think this is some absurd requirement at all - you’re a software developer, so you ought to be the sysadmin for your own machine. Educate your clients that setting up a dev environment involves some overhead, and get used to installing (and debugging the installation of) the various runtimes, libraries, etc. that you commonly work with, so that it isn’t so slow and intimidating. That’s what we always did when I was a consultant. If you can’t figure out how to install rbenv, or how to use git, well… like it or not, those tools are now part of your expected toolkit, and your time is better spent familiarizing yourself with them than bemoaning that you need to use them at all.
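To make that concrete, standing up rbenv is a handful of well-documented commands; a sketch (the Ruby version is illustrative):

    # Clone rbenv and its ruby-build plugin, wire it into the shell,
    # then install a project Ruby:
    git clone https://github.com/rbenv/rbenv.git ~/.rbenv
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
    export PATH="$HOME/.rbenv/bin:$PATH"
    eval "$(rbenv init -)"
    rbenv install 2.3.0 && rbenv local 2.3.0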
Git: when the de facto standard is so busted that it intractably hoses the computer of one of the best designers on the planet, then you have a rather serious problem on your hands.
Being a good designer has almost nothing to do with being proficient with git.
TL;DR: Clueless “designer” thinks highly of himself, but can’t figure out Git, VMs, or the concept of not working from his personal machine. Therefore Git, developers, and developer environments are stupid and terrible and need to change.
“Virtual machines” is IME the correct answer to this problem. No more “install Python 3.4 but-not-3.5 system-wide, then spin up MySQL and Apache and put Redis on this TCP port and aaaargh”. Just “here’s an OVA file” or “here’s a Vagrantfile” or “here’s a Dockerfile” or “here’s a Docker image”. Push the “go” button, go make tea or coffee, come back and it’s done, and the whole mess is completely isolated from whatever messes you’ve got on your own computer.
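Concretely, onboarding collapses to something like this (a sketch; the project tag is made up):

    # With a Vagrantfile checked into the repo:
    vagrant up
    # ...or with a Dockerfile:
    docker build -t someproject .
    docker run -it someproject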
They represent a retreat. They work today, because the software we want to use is designed to work well within this level of isolation. It’s entirely predictable that if they become commonplace, people will start inventing host-VM integration tools that break the isolation, and then we’ll have to deal with the same versioning and dependency issues with those.
The sustainable way to deal with versioning and dependency issues is to actually deal with them, through better distribution systems and better isolation of builds.
Use what you need to in the short term; I’m not arguing with your approach right now. But alarm bells should be going off.
I think the big, monolithic, isolated software island is going to keep having value even if “virtual machines” in general evolve to become more package-like. Giving easy access to a development environment to contributors who are uncomfortable with deployment seems like a reasonable use case for them. I certainly agree that virtual machines do not solve the distribution problem (or configurability, maintainability, etc.) in general, and that they do not obviate the search for improvements to the underlying software, but they seem like a good fit in that scenario.
They are a “retreat”, in the sense that we should constantly push to remove the barriers that make them valuable, and try to make them unnecessary. But these barriers have a cost that depends on the user, and what passes as “simple enough that you can do it directly without a virtual machine” to most users may still be out of reach for some.
A similar discussion is happening in academia around the idea of “reproducibility” of computer science papers. Researchers are working on an artifact evaluation process, whose purpose is to check whether the software provided along with an article can be used by others to at least double-check the claims made in the article. (There is a lot of room for stronger notions of reproducibility, such as being able to adapt the software to perform other similar experiments, but that’s a first step.) Many authors use virtual machines as a way to guarantee a reproducible environment for artifact evaluators, but that is unsatisfying in many respects; for example, it does not answer the question of “ten years from now, will the current software platforms be able to run this experiment?”.
I find that sort-of ok in some cases, but they can also multiply maintenance problems. The problems are now decoupled, but they still exist, and there are more of them. Every separate Docker image becomes a thing that needs its own separate maintenance, security updates, etc., which gets tedious, and kind of a magnet for bitrotting old code sitting around everywhere. At times it feels like the Bad Old Days when proprietary Unix apps installed their own self-contained world into /opt.
The way I interpret his conclusion is that he refuses to collaborate and will only throw his work over a fence to a team to do what they will with it. I think this is an unprofessional attitude, though I understand the frustration.
It’s an unfortunate state of affairs that it can be so hard to get a development environment working - as a consultant it can be a real problem, since you’re bound to work on a variety of projects with varying degrees of bootstrappability. A lot of times, what I’ve done when onboarding onto a new project is to document all the steps taken to get things working - as applications grow organically, the original teams often lose sight of what they’ve cobbled together. This is coming from the perspective of a developer who -has- to have a working environment to really contribute.
I have to go on a tangent for a moment to talk about this:
when the de facto standard [git] is so busted that it intractably hoses the computer of one of the best designers on the planet, then you have a rather serious problem on your hands.
I watched the video; it neither hosed the designer’s computer nor was the problem intractable. What I was able to gather from the two-minute video is that she was being asked to rebase and force-push as part of their process, which is a terrible process to ask people to follow on a collaborative project without understanding what they’re doing. I won’t argue that git’s UX is great, or that you can’t shoot yourself in the foot with it, but that rhetoric was over the top. I hope the author is NOT suggesting to not learn git because this can and does happen.
Back to the rest of the piece. I think there’s a divide working with designers and developers in the same development environment - developers want things their way, and designers want things their way. Naturally. I think it’s important to work out a process where there’s some compromise, which means both sides are a little out of their comfort zones in order to accommodate the “other side”.
The perspective of a consultant vs. an employee is going to be a lot different here too, since a consultant is going to run into this hurdle all over the place, whereas an employee might have to deal with it just once. So long as the client understands and pays for the time spent invoking the eldritch gods to be able to work on a project, refusing to do it seems like a cop-out to me. In fact, I think the more you do it, the better at it you get, which can be a valuable tool in your toolbox. If you’re a consultant and you’re able to avoid it entirely on principle, I guess that’s lucky, and you’re probably good at the hustle, but I don’t think you’re really doing yourself any favors with that attitude.
EDIT: fixed “not suggesting not learning” to match intent, despite poor double-negativity grammar
I’ve found that this is one of those things that will probably never change. I feel it falls under the same category as “which text editor is best (and why is it emacs)” or “linux/osx/windows”. It’s either an ideological fight, or an attack on a developer’s productivity, to go and force configuration with tools like boxen. The best way I’ve found to fight this, though, is to delete your development environment once in a while and see how hard it is to get it set back up again.
I think this is a special case of the general rule that operations (done well) is harder than it seems to management, and thus eternally underfunded/understaffed.