Written by miod@ and published on twitter: https://twitter.com/MiodVallat/status/669253976261570561
[Comment removed by author]
It most certainly does actually. Security vulnerabilities in old versions of x86, and ones created by providing backwards compatibility, have opened a horrifying number of CPUs to bad, bad exploits. Intel has fixed the ones it can, but many cannot be fixed (or are only fixed on future CPUs). Continuing to support backwards-compatible instruction sets and architectures is a horrible security flaw.
You absolutely should care that your CPU’s backwards compatibility is one of the biggest risks to your computer’s security. As security research gets more and more advanced, I imagine we will begin to see more and more of this type of advanced CPU attack.
Also, it’s against Intel’s interest to remove backwards compatibility, out of fear that one of these ancient systems or compilers that uses ancient instruction set 33 stops working, causing those applications to fail. If we make a concerted effort as a community (ha) to focus on a clean slate of instruction sets, then maybe we can change their minds about the issues with it.
I don’t see your point. End users have never cared about CPU architectures.
But that doesn’t mean the architecture isn’t important. Those applications are still running on CPUs, whether the end user thinks about it or not, and there can still be advantages and disadvantages to different CPU architectures that affect things the end user does care about, like battery life and performance.
So clearly somebody has to know about and “care” about the underlying architecture, because operating systems, compilers, and low-level libraries aren’t going to write themselves. It seems pretty clear to me that this article was written by an OpenBSD developer, targeting an audience that does still have to care about CPU architecture.
Mostly agreed, but there are some newer mips64 parts making some noise, also with a crippled U-Boot bootloader.
The same applies to all non-mainstream components in any platform.
For instance, take any operating system besides the big 3 (Windows, Mac OS X and Linux, in that order).
Software, both commercial and free, is almost never tested on anything else. Running modern open source projects on a BSD often requires quite a bit of fiddling with configure scripts, makefiles, or whatever else you need to fix before it works. Software often depends on Linux-only functions even when a viable POSIX alternative is available.
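As a concrete sketch of that last point (my example, not the commenter’s): code often reaches for Linux-only epoll(7) where POSIX poll(2), which every BSD also implements, would do. Python exposes the portable call as `select.poll`:

```python
import os
import select

# Create a pipe and make its read end readable, so the result
# of polling is deterministic.
r, w = os.pipe()
os.write(w, b"x")

# select.poll wraps POSIX poll(2), available on Linux and the BSDs
# alike, unlike the Linux-only select.epoll.
p = select.poll()
p.register(r, select.POLLIN)

events = p.poll(0)  # 0 ms timeout: just probe readiness
print(len(events))  # 1 descriptor is ready
```

The same probe written against epoll would compile and run only on Linux; this version works unchanged on any POSIX system.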
Hardware support is often even worse. Vendors completely neglect other operating systems. There are some people who go through tremendous efforts to port over Linux drivers (and sometimes even entire kernel subsystems) to their operating system of choice, but they’ll always lag one or more generations of hardware behind (look at Intel graphics drivers for example).
But that doesn’t mean those platforms are pointless. OpenBSD for instance, which is fairly popular among the members of this site, is valuable because it has a different focus and prioritizes quality over new bells and whistles.
The same may be true for hardware platforms. Though I’m not holding my breath for POWER or MIPS to make a comeback, I’m eagerly awaiting the first products to come out of the RISC-V and lowRISC efforts.
There was a time when, for better or worse, you had your choice of SunOS or Ultrix or Irix or HP-UX. Software used to work on any of them, and it was even harder back then to make that happen. (OK, revisionist history: some software maybe worked on all of them, and it was certainly a pain.) Now it’s “Linux 3.14 or gtfo”. Ironically, 20 years ago your choices were less free, but at least you had the freedom to make a choice.
I miss the beautiful hardware of old SGI machines. :(
Yeah, me too. I don’t miss Irix, however.
mips and powerpc are not dead in the network space.
I’m still a noob in the field, but from what I understand, there’s a shift toward higher-end whitebox switches running on x86.
I work on what would be considered a “dead” architecture: z Systems (a.k.a. mainframes). z hardware has some absolutely incredible stuff, but using it is a whole other story. Unix System Services provides a POSIX layer, but everything is pretty much 20 years old. If you write JCL, well, you’ll feel older. ;)
Essentially, nothing modern works on z without a whole lot of effort to port it (even ignoring the ASCII/EBCDIC nightmare). On the systems I use, bash is from 1998. I can’t imagine the amount of work it would take to get it to be as usable as any modern Unix, even AIX.
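To make the ASCII/EBCDIC nightmare concrete, here is a minimal sketch (using Python’s cp037 codec, one common US EBCDIC code page; z/OS uses several) showing that the two encodings disagree on every byte of even a plain ASCII string:

```python
# ASCII and EBCDIC assign different byte values to almost every
# character, so byte-oriented code ported to z/OS has to convert
# text at every boundary (files, sockets, string literals...).
text = "Hello"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")  # cp037: a common US EBCDIC code page

print(ascii_bytes.hex())   # 48656c6c6f
print(ebcdic_bytes.hex())  # c885939396
```

Note that not a single byte matches, and EBCDIC letters are not even contiguous (a–i, j–r, and s–z sit in separate runs), which breaks naive `c - 'a'` style tricks in ported C code.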
Well, Assembler is becoming more and more irrelevant and people program in high-level languages with compilers and translators written in C.
I love RISC, to be honest, and am not a big fan of the amd64 instruction set. A new, “clean” approach to a 64-bit architecture would be the “cleanest” solution, but I definitely understand why no one cares.
If an architecture doesn’t bring anything new to the table, who would use it? If I came up with some revolutionary approach to instruction-set design which somehow magically reduced the number of cache misses, it might be adopted. But not just for the architecture’s sake.
This was a nice write-up on this matter. Good work by the author!
A new, “clean” approach to a 64-bit architecture would be the “cleanest” solution, but I definitely understand why no one cares.
The main things RISC-V brings to the table are
Custom opcode space is a recipe for cruft as soon as people start using it. What’s the betting OSes end up having to support two different extensions that do the same thing with slightly different opcode syntax?
All these softcores are actively being developed, and all run Linux.
MicroBlaze and Nios II in particular have the support of the two largest FPGA vendors.
Rather curiously, Altera has been bought out by Intel. Thus the Nios II is now an Intel CPU, and Intel has formally announced they will continue to support it.