“The Gang Builds a Mainframe”
Ha ha! I don’t think the mainframe is really a good analogue for what we’re doing (commodity silicon, all open source SW and open source FW, etc.) – but that nonetheless is really very funny.
It makes you wonder what makes a mainframe a mainframe. Is it architecture? Reliability? Single-image scale-up?
I had always assumed it was the extreme litigiousness of the manufacturer!
Channel-based I/O with highly programmable controllers, and an inability to understand that some lines have more than 80 characters.
I think the overwhelming focus of modern z/OS on “everyone just has a recursive hierarchy of VMs” would also be a really central concept, as would the ability to cleanly enforce/support that in hardware. (I know you can technically do that on modern ARM and amd64 CPUs, but the virtualization architecture isn’t quite set up the same way, IMVHO.)
I remember reading a story from back in the days when “Virtual Machine” specifically meant IBM VM. They wanted to see how deeply they could nest things, and so the system operator recursively IPL’d more and more machines and watched as the command prompt changed as it got deeper (the character used for the command prompt would indicate how deeply nested you were).
Then as they shut down the nested VMs, they accidentally shut down one machine too many…
This sounds like the plot of a sci-fi short story.
…and overhead, without any fuss, the stars were going out.
I’d go with reliability + scale-up. I’ve heard there’s support for things like fully redundant CPUs and RAM. That is pretty unique compared to our commodity/cloud world.
If you’re interested in that sort of thing, you might like to read up on HP’s (née Tandem’s) NonStop line. Basically at least two of everything.
Architecture. I’ve never actually touched a mainframe computer, so grain of salt here, but I once heard the difference described this way:
Nearly all modern computers, from the $5 Raspberry Pi Zero on up to the beefiest x86 and ARM enterprise-grade servers you can buy today, are classified as microcomputers. A microcomputer is built around one or more CPUs manufactured as an integrated circuit, and that CPU sits on a static bus that connects it to all other components.
A mainframe, however, is built around the bus. This not only lets the hardware itself be somewhat configurable per job (pick your number of CPUs, amount of RAM, etc.); mainframes were also built to handle batch data-processing jobs and have always handily beaten mini- and microcomputers in terms of raw I/O speed and storage capability. A whole lot of the things we take for granted today were born on the mainframe: virtualization, timesharing, fully redundant hardware, and so on. The bus-oriented design also means they have always scaled well.
I’m super happy to see this happening. I’ve been watching Oxide for a while – I’m pretty much past that time in life where you get excited about brands and companies, but I’m a keen admirer of Jessie Frazelle’s and Bryan Cantrill’s work, so that gave me a good excuse to read their blog. It’s the only “corporate” blog I read tbh.
I don’t expect I’ll be touching any one of these soon but that’s an oddly refreshing feeling in and of itself :-).
Man, I’m really glad to see Oxide shipping something.
That said…flagged. Lobsters is not for slick product pages and signups. It’s not shipping for another year.
I would love to see a more technical breakdown of the hardware and software stack. For instance @riking was able to suss out that the hardware runs Illumos with the bhyve hypervisor, more details on all the components would be super interesting.
Yeah, sorry – a lot more technical information to come, I promise. In the meantime, I went into some details in an episode of The Data Center Podcast[0] that we hadn’t gone into elsewhere. But more is definitely coming – and it will all be open source before we ship!
[0] https://www.datacenterknowledge.com/hardware/why-your-servers-suck-and-how-oxide-computer-plans-make-better
These look really neat. Can’t quite tell a lot from the pictures, but I expect some sort of servicing guide to be excerpted later. Lots of OCP green “touch this”, like you can see on the screws around the CPU.
(Tagged the story as illumos because it looks like it is, based on https://github.com/oxidecomputer/illumos-gate/tree/cross.vmm-vm.wip )
Also this: https://github.com/oxidecomputer/propolis
They mentioned that a hypervisor is part of the assumed stack, and it looks like it’s bhyve on illumos.
Does anything in particular make Illumos a better hypervisor than Linux? I have no particular reason to believe either is better, except that Linux gets a lot more developer hours.
I would not be at all surprised if the maturity of the Illumos ZFS implementation was a bigger consideration than bhyve vs. kvm. Storage is just as much a part of this story as CPU or chassis, so having a rock-solid underlying filesystem that can support all those VMs efficiently seems like a good default.
As far as I know, ZFS on Illumos and ZFS on Linux are the same these days (as of 2.0). Of course that wasn’t true when Oxide started, so you could be right.
Thinking on it more, Bryan Cantrill does have years of experience running VMs on Illumos (SmartOS) at Joyent, and years more experience with Illumos / Solaris in general. Although I think SmartOS mixed in Linux KVM for virtualization, not bhyve.
Ultimately I guess it doesn’t matter. Hosted applications are VMs. As long as it works, no one needs to care whether it’s Illumos, Linux, or stripped down iMacs running Hypervisor.framework.
Another point for Illumos: eBPF has come a long way, but DTrace already works. Since they made a DTrace USDT library for Rust, I think it’s safe to assume DTrace influenced their choice to use Illumos.
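For the curious, here’s roughly what that looks like. This is a minimal sketch from memory of the usdt crate’s README, not Oxide’s actual code – the provider and probe names are made up, and the exact macro syntax and toolchain requirements vary by crate version:

```rust
use std::thread::sleep;
use std::time::Duration;

// Hypothetical provider: the `usdt` attribute macro turns this module's
// functions into DTrace USDT probe definitions.
#[usdt::provider]
mod my_app {
    fn request_start(id: u64) {}
    fn request_done(id: u64, status: u8) {}
}

fn main() {
    // Probes have to be registered with the kernel before they can fire.
    usdt::register_probes().expect("failed to register DTrace probes");

    for id in 0..10u64 {
        // Arguments are wrapped in a closure so they're only evaluated
        // when something is actually tracing the probe.
        my_app::request_start!(|| (id));
        sleep(Duration::from_millis(100)); // stand-in for real work
        my_app::request_done!(|| (id, 0u8));
    }
}
```

On an illumos (or other DTrace-capable) box you’d then watch these with the stock dtrace(1) tooling, e.g. listing the probes with dtrace -l and matching on the provider name.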
So this has me wondering what happens if they decide they need arm64 hardware? How portable is Illumos?
It supports SPARC CPUs from its Sun heritage, so it should be portable to ARM when needed, no?
Missed opportunity for zones, then!
Did you expect Cantrill to support epoll and dnotify?
epoll is actually terrible, and was a huge argument against Linux game servers where I used to work.
io_uring is pretty sane as an interface though.
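As a point of reference, here’s a minimal sketch of the submit/wait/complete flow using the Rust io-uring crate – adapted from memory of its README, so treat the details (file path, user_data value) as placeholders:

```rust
use std::os::unix::io::AsRawFd;
use std::{fs, io};

use io_uring::{opcode, types, IoUring};

fn main() -> io::Result<()> {
    // One ring with 8 submission/completion entries.
    let mut ring = IoUring::new(8)?;

    let file = fs::File::open("/etc/hostname")?; // arbitrary file for the example
    let mut buf = vec![0u8; 1024];

    // Describe a read; nothing is issued until it's pushed and submitted.
    let read_e = opcode::Read::new(types::Fd(file.as_raw_fd()), buf.as_mut_ptr(), buf.len() as _)
        .build()
        .user_data(0x42);

    // Safety: the fd and buffer must stay valid until the completion arrives.
    unsafe {
        ring.submission().push(&read_e).expect("submission queue is full");
    }

    ring.submit_and_wait(1)?;

    let cqe = ring.completion().next().expect("completion queue is empty");
    assert_eq!(cqe.user_data(), 0x42);
    println!("read {} bytes", cqe.result());

    Ok(())
}
```

The whole model is just “push entries, submit, reap completions”, which is a lot easier to reason about than readiness-based epoll loops.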
[Comment removed by author]
I have no idea what Oxide is and why it is relevant for Lobste.rs, but this specific post looks like it’s just a big ad for some server hardware. Is there any interesting technical content that I’m missing?
I think it’s because two of the C-level people are “lobste.rs-approved”. Bryan Cantrill was the Joyent CEO, and wrote DTrace. Jessie Frazelle is a former Docker, Inc. employee who played a significant part in the popularisation of containers beyond Docker itself. (mostly during her time at Microsoft)
Your comment is relevant though.
Bryan was CTO of Joyent
You’re right. My mistake. I can’t edit my comment anymore :P .
It’s hyperscaler hardware most people have no use for. I think people are interested in the engineering going on, with work on low-level boot firmware and… rewriting it in Rust.
The annoying thing is that the submission has nothing to do with the engineering, at its core. It isn’t a deep dive into any of the sales points; no real technology is discussed.
And now these folks just got a rather expensive ad slot on a slow-traffic site with high-value eyeballs.
Just fanboyism here, I’m afraid.
What is the spectrum of companies/people who have exposure to the hardware they are using for hosting network services?
My super-layperson understanding is that in the olden times, everyone had server racks in the office, but basically everything is aggressively being pushed out to be cloud-managed. But that doesn’t really gel with “everyone at big corp needs to be running everything through various VPNs”… is there still a large percentage of servers being run within the walls of the office that uses them?
An aside: as COVID hit, I remember hearing about some Japanese network engineer having built a box that could be plugged into municipal office networks to let people work remotely, effectively a “no-setup VPN”, as these places didn’t have a need for one before. I wonder if that’s more the norm than I’d expect.
I can only speak from my own experience, but every office I have worked in has had a “server room” of varying capability.
One had a copy of their cage at the datacenter to act as an “in-office” test and development environment. It was where all the servers went after they aged out of the datacenter, and where they lived until they died and needed recycling (they were very much about reuse before recycle). It also acted as a backup in case their main host went down, which happened once, and the transition was almost seamless.
A lot of stuff can be done in the “cloud”, but a lot of companies still like to own the hardware they operate on. I do see a drift towards a hybrid environment where companies are holding on to their dedicated hardware and supplementing it with cloud solutions where it makes sense.
One final thought: dedicated hardware can be used as a tax tool. For example, the cost of the hardware can be offset over several years as a tax deduction, while also being counted as an asset on the accounting sheets, whose depreciation over time can also be deducted (or at least this was true 10 years ago). I don’t know if companies get the same benefits from cloud solutions.
Pretty sure they don’t; cloud is operational spend rather than capital expenditure.
Thank you, operational spend vs capital expenditure was what I was trying to articulate.
I wonder if 0xide will sell these and/or hire them out: on-prem hardware, but paid for as OPEX (operational expenditure). I believe IBM ships hardware even if you don’t pay for that option; then if you pay to upgrade, they just send an activation key or flip a bit.
Possible markets I see for this product are those where regulations / compliance require stuff to be running on-premises. Finance and telcos, mostly.
Interesting front page with lots of ASCII animations. Is there software for this, or is it manually crafted?
Turns out the software in question is Monodraw.
The orange site has a comment thread about that, check there. Sorry, can’t link right now.