It’s very satisfying to see how they’re progressing on a completely new platform for this OS while still keeping the legacy chain running, as OpenVMS is probably one of the longest continuously supported platforms still receiving updates today.
But it’s still intriguing - what’s their target? Some hobbyists do port older OSes’ environments or reimplement them, either to maintain compatibility with legacy software or just for the hard-to-explain “feeling” of such an OS, but this mostly happens in the desktop/home space without significant funding (HaikuOS, MorphOS, ReactOS). In this case, we have a pure industry-grade OS being funded by a large corporation just to be ported onto a weak home-computer CPU that is already going to be replaced by ARM in the next few years.
We ran OpenVMS for our accounting system for > 20 years (I wasn’t here for all of that - I’d guess we started in the early 80’s, too lazy to look it up). We only switched when HP finally gave up and said they were finished. At that point we decided to move to POSIX-like platforms, as the writing on the wall was pretty clear at the time: OpenVMS was dead/dying, and this company hadn’t quite proved they were 100% serious about porting to x86 yet.
In the time it’s taken these guys to get to first boot, we’ve entirely re-written our application with a Qt GUI, natively support macOS, Windows and Unix-like OSes, added a lot more functionality, etc. So in retrospect I don’t think our decision was wrong at all, but OpenVMS was definitely a lot more stable than our current Linux/BSD boxes. Our outages under VMS were measured in minutes per decade; now it’s minutes per year. But it’s just accounting, we don’t need 100% uptime, so we’re fine with our current stability. Also, now we have to buy new hardware every 5 years instead of once a decade like when we were running OpenVMS. I haven’t done a cost comparison yet; it would be very interesting to compare our actual costs and see if it’s cheaper this way than when we were on OpenVMS… I’d venture to guess it’s about the same taken over a 10-year period.
That said, there are plenty who will keep milking their HP support contracts until the bitter end while marrying into this new company as quickly as possible. Many old organizations still have plenty of OpenVMS around.
Plus, with OpenVMS being ported to x86, it can live under a VM eventually; x86 VMs will be here for many, many more decades, even if most of the industry eventually moves to some other chip (like, say, ARM as you suggest). So OpenVMS won’t die anytime soon, but it will be a niche product for sure.
There are a lot of little reasons why, but I much prefer using VMS over UNIX for almost any task given the opportunity.
I should add that recent studies have shown, for some of the reasons you mention above, that OpenVMS systems still have lower long-term TCO and higher reliability than Unix and Windows systems.
According to Wikipedia and confirmed at the TOP500 site, “as of June 2018, TOP500 supercomputers are now all 64-bit, mostly based on x86-64 CPUs (Intel EMT64 and AMD AMD64 instruction set architecture), with few exceptions”.
OpenVMS supports 32 processors and 8TB RAM per node, with 96 nodes per VMScluster, creating systems of 3,072 processors with 768TB of memory. The x86_64 transition will be the fourth supported hardware platform and the upgrade path from the current IA64-based offering.
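(Those cluster-wide figures are just the per-node limits multiplied out across the 96-node maximum; a trivial check, in plain C, if anyone wants to see the arithmetic:)

    /* Trivial check of the cluster-wide figures above: per-node limits
     * multiplied by the 96-node VMScluster maximum quoted in the comment. */
    #include <stdio.h>

    int main(void) {
        int cpus_per_node = 32, ram_tb_per_node = 8, nodes = 96;
        printf("%d CPUs, %d TB RAM\n",
               cpus_per_node * nodes, ram_tb_per_node * nodes);
        return 0;   /* prints: 3072 CPUs, 768 TB RAM */
    }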
OpenVMS is widely used in many mission-critical applications.
VMS clusters were crazy reliable. All kinds of businesses used them. They were happy to pay good money for the OS, too, long as it kept getting updated. It was one of HP’s highest-profit products at one point. Then they ditched it for some reason. This company took over. They’re updating it mainly for those legacy companies.
Search engines’ recent tactics make it hard to find some old articles like the one where customers loved it to death for its reliability and supposed security. I couldn’t find it. You can see here what kinds of critical workloads it’s been running. Wait, I did just find this which describes some of the reliability. I also like the API claim since I said something similar about UNIX/Linux slowly becoming more like mainframes and OpenVMS to meet cloud requirements. Far as I can tell, VMS is still better for what a lot of their users want to do given support for clustering, metering, and distributed apps. It won’t be cheaper, though. Customers used to say there’s a “$” in the command prompt for a reason. ;)
I really disliked VMS, but the VMS cluster/SMP solution was technically better than the competing ones. When Linux went SMP there was an effort to get a similar design, but the standard SMP OS design was the obvious choice and what the corporate sponsors wanted, so…
There were actually two distinct implementations of VMS multiprocessing — the original implementation was a unique asymmetric design, which changed to symmetric multiprocessing around the VMS 5 timeframe, if my memory is correct.
I may be wrong, but as I recall it, the Galaxy OS design ran multiple instances of the same kernel on an SMP machine - there was a single run queue per kernel and each OS managed its own address space - although there was a way to reallocate pages. This design had the huge advantage that it did not need the increasingly fine-grained, complex, and expensive locking schemes that are needed when a single OS instance has multiple parallel threads of execution.
Yes, OpenVMS Galaxy was a system to partition and run multiple VMS instances on a single server, and these instances could be VMScluster members as well.
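If it helps to picture the difference, here’s a minimal sketch - plain C with made-up names, definitely not the actual Galaxy implementation - of why per-instance run queues avoid the locking that a single shared-kernel SMP scheduler needs:

    /* Conceptual sketch only - hypothetical names, not OpenVMS code.
     * It just contrasts the two scheduling models described above. */
    #include <pthread.h>
    #include <stdio.h>

    /* Classic single-image SMP: one kernel, one run queue, so every CPU
     * has to take (increasingly fine-grained) locks to touch it. */
    struct global_runq {
        pthread_mutex_t lock;   /* contended by every CPU in the box      */
        int             tasks;  /* stand-in for a real queue of tasks     */
    };

    /* Galaxy-style partitioning: each OS instance owns its CPUs, memory
     * and run queue outright, so no cross-instance locking is needed. */
    struct cpu_partition {
        int instance_id;        /* which VMS instance owns this partition */
        int tasks;              /* private run queue, one owner only      */
    };

    int main(void) {
        struct global_runq shared = { .tasks = 0 };
        struct cpu_partition part[4];
        pthread_mutex_init(&shared.lock, NULL);

        /* Single-image path: even a trivial enqueue serializes on the lock. */
        pthread_mutex_lock(&shared.lock);
        shared.tasks++;
        pthread_mutex_unlock(&shared.lock);

        /* Partitioned path: each instance touches only its own queue,
         * lock-free with respect to the other instances. */
        for (int i = 0; i < 4; i++) {
            part[i].instance_id = i;
            part[i].tasks = 1;
        }

        printf("shared runq: %d task(s), partition 0: %d task(s)\n",
               shared.tasks, part[0].tasks);
        return 0;
    }

Real schedulers are obviously far more involved; the point is only that partitioning removes the shared state that needs locking, while (as noted above) the VMScluster layer still lets the instances cooperate.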
It would be really interesting if some of the higher-ups in HP would someday make public why they ditched OpenVMS. It’s either a really interesting story, or some number cruncher somewhere just decided it wasn’t worth the porting effort off of the existing, dying hardware line…
It’s always bugged me that I don’t know. The articles from before they ditched it said it was one of their most profitable products. Then they ditched it anyway. My hypothesis was that they had two kind-of-competing products in the same area: OpenVMS Clusters and NonStop. If it’s about reliability for enterprise customers, NonStop is probably the better bet since its assurances go down to the hardware level. Companies with two competing products often ditch one. So, they ditched OpenVMS in favor of NonStop.
Again, I have no idea why they did it. I just thought that was plausible. The only counter I’ve gotten so far is some OpenVMS experts saying they don’t really compete. The customer testimonials and application areas make me think they really do, with OpenVMS’s niche being systems that don’t go down even if they’re expensive. NonStop customers demand that, too. The two aren’t the same in capabilities, but lots of overlap exists.
Wait, I’m leaving off one part of the same hypothesis: Windows- and Linux-based solutions were taking over and were more in demand for some of OpenVMS’s market. I know there were lots of conversions. The OpenVMS market was a mix of different types of demand. Windows and Linux were bleeding it out of the regular server market with those kinds of capabilities. They were cheaper, more flexible, and had stronger ecosystems. Then what’s left is high reliability and security, especially reliability. That leaves OpenVMS vs NonStop and Stratus. HP owned two of those. They kill the weaker one in that niche - maybe the less profitable one, too. So, there’s my full hypothesis.
Looking at the timeline, it seems plausible that when they finally came to terms with the sinking of the Itanic, they could not face the effort of porting.
Damn, I can’t believe I overlooked that. I was mentally separating the business and tech sides of this. Ports, especially from RISC to CISC, are potentially a huge cost to the technologists. It’s possible they explained it to the business people, who saw numbers that made them cancel the upgrade.
Doing a quick comparison: HP killed VMS on Itanium around June 2013. NonStop was also on Itanium. It was November 2013 when media reported NonStop would shift to x86. That doesn’t give me anything definitive. It is interesting that they started a port the same year OpenVMS was cancelled on the same architecture, though. That fits your theory a bit, but the NonStop port had to cost a lot, too, and the fact that it involved hardware development weakens your theory a bit. That’s all I can see right now.
Didn’t Compaq make the decision? [nope!]
The decision to abandon Alpha was also dumb.
But DEC’s decision to dump the relational database they were developing in Colorado Springs, just as it was ready to ship, and to sell it and the highly skilled team to Oracle was stupendously dumb.
I thought a bit about this, and other than the speculation mentioned by nickpsecurity and others, it all boils down to the tradition HP has of killing off almost all acquired products, often in an underhanded way.
My memory of events is that HP did not want DEC (and other acquisition) products competing with their own. Consider how HP acquired and quickly killed Apollo. This seems to be the blueprint - acquire and extinguish - resulting in killing Tru64, and letting VMS languish for a while as well.
For Tru64 users the upgrade path was initially HP-UX on PA-RISC - even as it was rumored that, internally, PA-RISC was already dead and IA64 was the future.
In killing Tru64 on the Alpha without even porting many of its advanced features (DECnet, TruCluster, AdvFS, etc.), they ended up selling some former Tru64/Alpha customers new hardware twice in a very short period of time.
Now, consider that VMS was eating into HP-UX sales on the low end and NX/NonStop sales on the high-end …
To me, HP purchasing Compaq was a much sadder day than Compaq purchasing DEC.
To put this in perspective for the UNIX people, the HP acquisition was, to many DEC fans and customers, a sadder day by many orders of magnitude than the acquisition of Sun by Oracle.
The VMSI situation - especially with their current team - is very exciting and an extremely welcome bit of excellent news. The first in a long while.
(I want to add that this might not have been an evil HP plan all along, but it’s a pattern that repeated itself over and over with HP, and it’s how many users felt about and perceived the corporate actions. HP did, at some stage, have many dedicated and talented people who loved these systems it so mercilessly dispatched, and they were likely just as upset as the customers, if not more so.)
I knew someone high up in DEC engineering management for Alpha who had endless battles with DEC’s product management people - they really wanted to sell a few Alpha servers at huge markup instead of trying to grow product sales. He told me that after the acquisition, DEC were on a call explaining sales targets for Alpha servers to Compaq, and the Compaq execs asked, “is that in units of 1,000 or 10,000?” and the DEC guys said, “no, just that number.”