Thank you for taking the time to submit your fix!
Thank you, Joshua :-).
illumos, and especially Triton/SmartOS from Joyent, is excellent.
Although I’ve recently rebuilt my home infrastructure and moved on, I spent years running Joyent’s cloud platform, Triton, on a cluster of Intel NUCs. It’s great that they offer it as open source, and I highly recommend people check it out if they’re unfamiliar. Although we’re living in an ephemeral, container-centric world, with lots of cool constructs and patterns evolving, the notion of having a container that acted just like an HVM was always a pleasurable and exciting one (illumos Zones, check them out!).
And of course, Joyent really pushed their engineering with a great Docker API solution, too! So I had a bunch of services running in zones, and a fair few containers too, all wrapped up with Terraform, Packer, and Ansible for provisioning, plus a lone KVM instance running OpenBSD for my OpenIKED VPN. I’m just rambling now, but I’m sure people can tell I loved that stack, and it demonstrates how flexible it is for something you can set up at home or in a private DC.
TL;DR - if you’re not familiar with illumos, SmartOS, Triton (and Joyent in general), definitely check out their stuff. It’s all open source, and is really cool!
Joyent (the illumos folks) is in my top two companies I would trust to run a Docker container in production, along with Google. I trust them because:
1. They have really solid systems engineers.
2. Neither of them actually runs the Docker engine in production.
You also gain DTrace and the best ZFS implementation. I’ve never had to run Docker in prod, but this has been my planned solution ever since it became possible.
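For anyone who hasn’t touched DTrace: the classic first script aggregates system calls by process name, which gives you a feel for the kind of whole-system visibility it offers (a sketch — it needs root on an illumos/SmartOS host, and from the global zone it sees into every zone):

```d
/*
 * count-syscalls.d — count system calls per process name.
 * Run with: dtrace -s count-syscalls.d
 * Ctrl-C prints the aggregation, sorted by count.
 */
syscall:::entry
{
        @calls[execname] = count();
}
```

The same thing fits in a one-liner: `dtrace -n 'syscall:::entry { @[execname] = count(); }'`.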
Absolutely! There are some fantastic technologies that you get at your fingertips. I also forgot to mention in my post how exciting the Linux syscall translation was when it landed. OS-level virtualization (containers) of the Linux kernel… on an illumos host. Mind-blowing stuff. There are some excellent talks out there from @bcantrill (always very entertaining) on many of the things I’ve noted. I’d urge anyone reading who’s curious about any of this to go watch some of them :)
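For anyone curious what that looks like in practice: on SmartOS, a Linux-emulating zone is an “lx”-branded zone, provisioned with `vmadm` from a JSON payload roughly like the one below (a hedged sketch — the alias, the placeholder `image_uuid`, the advertised kernel version, and the sizing are all illustrative, and it assumes you’ve already pulled an lx image with `imgadm import`):

```json
{
  "brand": "lx",
  "alias": "lx-example",
  "kernel_version": "3.16.0",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "nics": [
    { "nic_tag": "admin", "ips": ["dhcp"] }
  ]
}
```

Then `vmadm create -f lx-example.json` gives you what looks, from the inside, like a Linux machine — but it’s an illumos zone underneath, with the syscall translation happening in the kernel.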
I was jealous for a long time because Zones were a bit more “complete” than FreeBSD jails and then their Linux syscall translation was also more complete than FreeBSD’s…
Things are better now in FreeBSD land, but illumos still has a more polished solution… and a damn fine network stack… and a damn fine CPU scheduler… and damn fine memory management…
If Solaris had been open-sourced sooner, I don’t know what the world would look like.
I’m a big fan of both of those technologies. So it only sweetens the deal for me.
Doesn’t Google use Docker in production? That’s surprising to me.
Nope. They use their own container technology (Borg), which predates Docker by over a decade. They just wrap it in a Docker API facade to make it easier for you to interact with it.
What are the reasons for moving?
Good question! To be honest, although I loved the stack, it had gathered dust for a while, certainly in terms of the methods I was using to define my infrastructure. The Ops landscape changed pretty drastically in a short period of time. I was doing all this stuff with Kubernetes and GitOps at work, and still deploying with Terraform and Ansible at home.
A large part of why I have my home setup is to learn things, try things, and develop things. I felt I wanted a stack that more closely represented the things I was currently enjoying.
I could have tried out running k8s on top of Triton, but to be honest, the implementation Joyent have blogged about looks a little hefty for my liking (and my resources). It leverages KVM instances to run various k8s components.
I’ve been thoroughly enjoying Nix (and NixOS) for quite some time, so I decided I’d redesign my home cluster.
I’ve been a massive nerd about it all and captured everything in a GitHub project, with a roadmap and issues for everything I plan to implement.
Whilst I’m excited about it, it’s largely blocked at the moment by the state of k8s deployments on NixOS. The modules provided to bootstrap a k8s cluster are a bit wonky in their current state. I believe ‘offline hacker’ is doing a complete rework of it all in the background. So I’m very much looking forward to his work.
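For context on what those modules look like today, bootstrapping a minimal single-node cluster on NixOS is roughly this (a sketch against the current `services.kubernetes` module — the hostname is a placeholder, and the rough edges in exactly this area are what the rework is meant to address):

```nix
# configuration.nix (fragment) — single-node Kubernetes on NixOS.
# "k8s-master" is a hypothetical hostname; easyCerts generates a
# throwaway PKI, which is fine for a homelab but not for production.
services.kubernetes = {
  roles = [ "master" "node" ];
  masterAddress = "k8s-master";
  easyCerts = true;
};
```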
Out of curiosity, is that GitHub project public?
There’s also OpenIndiana for those who want something easier to install.
I’ve never run a Solaris-like, but isn’t OpenIndiana a distro of Illumos?
Yes, according to the website:
Community-driven illumos Distribution
I have huge nostalgia for the Unices I’ve used, which doesn’t really hold up to critical inspection :). What I remember about Solaris is it being rock-solid, having great binary compatibility, and SunRays being the best workstation idea I’d ever seen. I also remember it keeping the CDE/Motif UI for much too long, and package/update management being a lot of work.
Similarly, I seem to remember NEXTSTEP having a great UI and developer tools, and some really easy-to-understand abstractions for RMI (Distributed Objects) and directory services (NetInfo). On the other hand, both had poor security models, not much open source would build on the weird Mach/BSD/custom hybrid OS, and their implementations of “popular” UNIX interoperability protocols like YP and NFS were flaky at best.
I still have an official OpenSolaris CD, as well as Solaris 10 DVDs. I’m still watching the progress of illumos and OpenIndiana, and I’m glad they’re alive. Oh, and a dead-tree Solaris manual, a gift from a local Sun campus ambassador (that was a thing).
One time we were demonstrating an OpenSolaris installation on a laptop with him, and it went horribly wrong, so the demonstration turned into a live debugging session. Good times.
GNU/Linux and FreeBSD are still so much easier to use on both servers and desktops, due to the network effect and the hardware support and packages that come with it, but ecosystems do need diversity, and I wish the project all the best.
For what it’s worth, OmniOS is the only illumos distro that I was able to install and run on Hyper-V (multi-CPU worked fine).