Aren’t we coming full circle again? In the early days, everything was done on the mainframe; then personal computers came along, so more work could be done there, while the binaries were shipped to servers. Now the cloud is beginning to look like one big mainframe, and everything is moving onto it, so personal computers are becoming irrelevant; the main difference from the old days is that current personal computers are quite powerful. Aren’t we just too lazy to make reproducible environments?
There is an aspect of laziness about making reproducible environments, but there are projects where it’s legitimately about computing power, and there’s also a shift in expectations and requirements for the “terminals”, as in, the things that developers actually use.
I mean yes, Docker is basically “works on my machine”-as-a-service, but I’ve really seen (and, once or twice, worked on) projects where compile times on a very beefy high-end laptop were on the order of 3-4 hours, and 32-core build servers were a necessity. Even with laptops that cost a fortune, compilation speed was a real bottleneck.
And then… there’s also the matter of the development machines themselves. For a variety of reasons, some legitimate, some extremely stupid, you really do see more and more thin, light, two-port machines being used for these things. Being able to develop on a comfortable, quiet, low-power system, while outsourcing the beefy build to loud, air-conditioned datacentres is pretty attractive.
Running an IDE in a browser isn’t, at least as far as I’m concerned, but this is just a contingency of our age. People got tired of many madnesses in the last 40 years, and they’ll get tired of this one. But planting the seed of easily-accessible, on-demand development environments and build machines is worth it, I guess.
Plus, there’s the commercial aspect; it’s not really a technical matter. Technically, as far as you and I and all other developers are concerned, a cloud IDE is never going to come even close to an application that’s constrained neither by the myriad of quirks that browsers have, nor by the fact that GUI programming for the web is barely where GUI programming for “real computers” was 30 years ago (minus things made inevitable by our times, like internationalization, responsiveness, etc.). But it’s going to make way more money.
There’s a nice essay by Rudolf Winestock called The Eternal Mainframe that keeps getting posted around when stories like these pop up, and the man has a point. Well, several, but this one’s one of them:
The fact that minicomputers, microcomputers, and workstations were ever successful indicates that the computer and telecommunications industries were still immature. It may also indicate that Big Blue and the Death Star were price gouging, but that’s another story.
I also think that widespread deployment of computers, with compute time that you don’t rent, was in good part a historical accident – mind you, a happy one, but ultimately something that just can’t prevail in our industry.
I think the problem has historically been network support: PCs pre-date widespread reliable internet. Not to mention that we’re not using dumb terminals now, but clients on computers that do a bunch of processing client-side, if only to avoid latency problems for, e.g., responsive UI. To clarify, by “responsive UI” I mean a UI whose response time is very low, not the “responsive” buzzword.
I see this as nearly as misguided (at one end of the spectrum) as the “you need 3 days, 14 scripts, and 2 coworkers to get your first build running on a new machine” situation at the other end.
If they actually manage to make it seamless without breaking the existing (local) development flow, fine. But this takes so many things for granted (not least a permanent, stable internet connection). Then again, maybe all my experiences with remote development have simply been so bad that I am extra wary.
It sure is a nice alternative, I won’t deny that. But I also see so many upsides for local development. Even if it’s just that I want to have my checkout under ~/code/gh and not ~/Dev/github/github, to stay with their example.
I realize this piece has a strong whiff of PR to it, but I legitimately am interested in the idea of moving past the “local development environment” as a standard practice. Long ago I got really excited about local Vagrant and Docker for development, and that hasn’t really panned out for me in practice. I’ve been watching cloud development environments with great interest for a while and haven’t had a chance to invest time into them.
Is this the way?
Not unless you’d like to see the end of accessible general purpose compute in your lifetime.
I’m convinced it will happen regardless. Too few people care about it passionately. General purpose computing will become a hobby and cult like the Amiga is today.
Also, it’ll probably only be relevant if you’ve got such a big project that it takes 45 minutes to make a fresh clone-to-dev environment, and you’re not working with real hardware but with something made with replication in mind, like web services. Oh, and you’d better not have any network problems.
This could be so, so powerful if the compilation within those codespaces could also be pushed out to distributed cloud-build instances. I’d be dying to use this if it came with a prebuilt ccache of previously compiled object files. I’m on a 28-core machine with 64 GB of RAM and building Firefox still takes up to ten minutes; I know it can be much less with a distributed compiler like icecc.
I think this will be the next step. The first and easiest development workflow to cover is the scenario that matches the remote environment as closely as possible (e.g. Linux, no UI, etc.), so Codespaces is perfect as-is for web development in Python, Ruby, PHP, or JS. The next step would be service development, where you combine Remote Execution (https://docs.bazel.build/versions/main/remote-execution.html) with Codespaces. It’s a bit tricky, because now you have to deal with either multiple build systems, which is very difficult, or enforce a single supported build system (e.g. Bazel). But at that point you get very fast Rust/C/C++ (etc.) compilation and can nicely develop there as well. The problem with Codespaces is when it comes to mobile or GUI development, or, worst case, 3D software (games). I am curious to see how they will solve that.
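To make the remote-execution idea concrete: assuming a Bazel-based project and some build farm endpoint (the address and job count below are placeholders, not anything any particular vendor runs), the developer-facing part is roughly a handful of .bazelrc flags:

    # Ship compile/test actions to a remote build farm instead of running them locally.
    # (buildfarm.example.internal is a placeholder endpoint.)
    build --remote_executor=grpc://buildfarm.example.internal:8980
    # Share a cache of action results between developers and CI.
    build --remote_cache=grpc://buildfarm.example.internal:8980
    # Run far more actions in parallel than the local machine has cores.
    build --jobs=200

The codespace then only needs the Bazel client and an editor; the expensive compilation happens on the farm, and cached results are reused by everyone pointing at the same cache.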
Back when I used to build Chromium all the time, the worst part was linking (because it happened every time, even if I only touched one source file). And [current] linkers are both not parallelizable and heavily I/O-bound, so a distributed build system doesn’t help. The only thing that helped was putting the object files on a RAM disk.
I don’t recall what was being used, because I was a huge Unix fanboy at the time and wouldn’t touch Windows (tl;dr I was even more of an idiot than I am now), but 10+ years ago I recall some folks at $work had this in Visual Studio. I don’t know if it was an add-in or something built in-house, but it was pretty neat. It would normally take a couple of hours to compile their projects on the devs’ machines, but with it they could get a build in 5-20 minutes, depending on how fresh their branch was.
I haven’t seen it myself because it was way before my time, but I had a colleague who described basically this exact mechanism, minus the cloud part, back in 2002 or so. ClearCase was also involved, though, so I’m not sure it was as neat as he described it :-D.
Cloud-based instances are, I suspect, what would really make this useful. Both of those things were local setups, which wasn’t too hard for $megacorps with their own data centres and such, but that’s completely out of reach for my one-man show. I don’t have the money or the space to permanently host a 28-core machine with 64 GB of RAM, but I suspect I could afford spinning some up on demand.
I wish this didn’t involve running an IDE in a damn browser but I guess that ship has sailed long ago…
Back when we had all those powerful workstations co-located in an office, we had them running icecc, which is really damn awesome and got us above 100 shared cores. For a while, I even ssh’d into my workstation remotely and it worked quite well. But my machine failed me and getting it home was easier than making sure it’s never going to require maintenance again. Especially given that physical office access is very limited.
(As an aside, I agree running an IDE in a browser feels wrong and weird but vscode is pretty OK in terms of usability, considering it’s running on chromium)
Docker and Vagrant can be heavy to run and often don’t produce reproducible builds. Something like Nix or Guix can help with that part, and if you throw in a Cachix subscription, you can build once on a developer’s machine and safely push the results to CI, production, and other developers, with less overhead.
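As a rough sketch of that workflow (the package list is made up for illustration, and the real setup differs per project), the environment itself can be a small Nix expression that every developer and CI job evaluates identically:

    # shell.nix - a minimal, declarative dev environment (illustrative only)
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      # Example toolset; pin nixpkgs (niv, flakes, etc.) for real reproducibility.
      buildInputs = [ pkgs.ruby pkgs.nodejs pkgs.postgresql ];
    }

Running nix-shell against the same pinned nixpkgs yields the same toolchain everywhere, and with Cachix (cachix use <cache-name> before building, cachix push <cache-name> for the store paths you built) the actual compilation only has to happen once.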
Usually I find it very frustrating to do any sort of development where there is human-noticeable (and variable) latency in the response to keystrokes (e.g. working from home via VNC or something).
I suspect I’d find this extremely frustrating.
I have been working with a thin, schroot-based container-like thing (i.e. tools, cross-compilers, build system, etc. in a tarball that gets unpacked into a schroot, with GUI tools and the editor on the native host).
That has been working just fine for me. Schroot is smart about updating the tools when they change.
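For anyone unfamiliar with that kind of setup, the schroot side can be described by a small config entry along these lines (names and paths are invented; the actual contents depend on your toolchain):

    # /etc/schroot/chroot.d/devtools (hypothetical entry)
    [devtools]
    description=Cross compilers and build system, unpacked per session
    # A tarball-backed chroot: schroot unpacks it fresh when a session starts.
    type=file
    file=/srv/chroots/devtools.tar.gz
    users=developer

GUI tools and the editor stay on the host; only the builds run inside the session (schroot -c devtools -- make, say), so replacing the tarball is enough to update the whole toolchain.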
I’m curious what issues you saw/see with a Vagrant setup that you think some kind of ‘develop in a browser’ environment would solve?
I am SO excited about this.
I’ve run the team responsible for maintaining the development environment tooling at a larger company (100+ software engineers), and the amount of time and money lost to engineers with broken local environments - or to getting new hires spun up, or to helping an engineer spin up a project they hadn’t worked with before - was astronomical.
Being able to provision a fresh, working development environment in just a few seconds is absolutely game-changing for a large engineering team like that.
I’ve been using Theia for development recently. As far as I can tell that’s basically the way to DIY this sort of thing. It’s pretty slick, basically vscode in a browser.
Too bad for the vimacs nerds, I guess?
From the article:
asking our Vim and Emacs users to commit to a graphical editor is less great. If Codespaces was our future, we had to bring everyone along.
Happily, we could support our shell-based colleagues through a simple update to our prebuilt image which initializes sshd with our GitHub public keys, opens port 22, and forwards the port out of the codespace.
From there, GitHub engineers can run Vim, Emacs, or even ed if they so desire.
For emacs, you’d just have to copy your dotfiles to the image every time, I suppose?
It’s probably a much better idea to use a (slow) TRAMP connection; that way you’ll notice the latency only when saving, and not on every keystroke.