IBM didn’t develop x86, Intel did. I think they’re confusing x86 and PC.
[ Disclaimer: I have no knowledge of AMD roadmaps ]
I don’t find it at all surprising that AMD is developing an Arm[1] chip. They’ve been an Arm licensee for ages and already use Arm cores in some places (e.g. in the platform security processor). I’d be quite surprised if they hadn’t had a group working on fitting an Arm front end to their cores for a while. That said, there’s a big difference between ‘working on X’ and ‘shipping X as a product’. There’s a big gap between ‘AMD developing an Arm core internally so that they have leverage with Intel when they renew cross-licensing deals’ and ‘AMD plans on shipping an Arm laptop part’. I’d love to know which of these it actually is. Apple kept their x86 implementation of OS X around for about a decade before the Intel switch, to use in negotiations with IBM and Motorola/Freescale. It took a change in the competitive landscape outside of their control before they shipped it as a product.
[1] Minor aside: Arm redid their branding a few years back and their style guide now recommends that you write it as Arm, not ARM. It originally stood for Acorn RISC Machine, then Advanced RISC Machines, but it was just ARM Holdings for a while and they’ve dropped the term ‘RISC’ from everything as well. They now refer to the Arm architectures as ‘load-store architectures’, not as RISC. The instruction sets for both AArch32 and AArch64 are pretty massive, but they are orthogonal and everything is added because a compiler / OS actually can make use of it, unlike traditional CISC cores. It makes me chortle a bit when I read articles that talk about ARM RISC cores.
They did have an Arm SoC as a product, the Opteron A1100, which could be purchased.
Could it? I thought it never went beyond pre-purchase/demo.
It could, albeit briefly. There was a generally available board on 96boards.org
http://armdevices.net/2015/11/16/amd-huskyboard-96boards-enterprise-edition-explained-by-jon-masters-of-red-hat/
A more popular product was the SoftIron Overdrive 1000/3000.
Going off on a tangent…
The name load-store architecture makes sense to me for the Arm instruction set, but the name has always made me wonder about its counterparts. Someone invented that name for one class within a classification, presumably because the classification made sense as a way to separate CPU architectures into top-level classes. What is that classification, and what are the other classes?
The other class is the register-memory architecture, usually lumped in with CISC, where a single instruction can tell the CPU to load a value, modify it and store it back (e.g. x86: addl $3, (%eax), which adds 3 directly to the word in memory at the address held in %eax).
It never got a “fancy” name of its own, and I guess the load-store naming only appeared because “reduced instruction set” was hard to say with a straight face when talking about an architecture with 1000+ instructions (e.g. ARM). “Reduced” can mean both “functionally reduced instructions” (i.e. a load-store architecture) and “a reduced number of instructions” (which the original RISC designs also had, but only incidentally).
There’s not much of substance in the article, beyond a reference to the buried K12.
Well, which is it?
Think what you will about Apple and its approach to innovation, but they deserve some credit for playing a key role in moving the whole personal computer industry to a more efficient processor architecture. There was talk of ARM-based laptops for the longest time, but nobody really took the plunge. Granted, Apple is in a unique position to do it. Still, now that any major piece of user-facing software needs to provide ARM builds anyway, other manufacturers may follow with relative ease.
Is there some new information about this somewhat old rumour that I’m missing? The byline is December 4th, 2020, and the article says “Mauri QHD said that AMD CEO Dr Lisa Su will have a presentation at CES 2021 on January 12 (…)” - but that presentation has come and gone, and doesn’t appear to have contained anything significant about an ARM chip.
Of course, it would be interesting to see what AMD does with the ARM architecture…
I really want them to do a dual-ISA processor, with both amd64 and arm64 frontends decoding into the same micro-ops. Imagine just being able to boot both amd64 and arm64 .efi files, running virtual machines of both architectures natively at the same time…
Centaur (Glenn Henry was particularly interested in the problem) and IBM have done research into such things, but it never really took off. I suspect it’d blow out the complexity budget and end up worse than emulation/just recompiling it.
Nvidia’s Project Denver CPU line was based on the Transmeta designs and both were intended to support multiple ISAs, with a JIT layer. The history of Project Denver is quite interesting: they were several years into development before they picked an ISA. Nvidia negotiated with Intel for an x86 license for a long time and eventually gave up and switched to Arm.
There’s more to an ISA than a decoder. Easy example: ARM and AMD64 have quite different constraints on memory consistency. The data structures that the chip uses to talk to the operating system for things like paging and interrupts are different. And probably more stuff that I’m forgetting.
You would just end up with something really complicated, and/or subtly incompatible with both.
Apple’s solution is to just add a chicken bit to enable the x86 memory model. That’s how Rosetta achieves the performance it does when emulating x86.
That’s not a chicken bit. A chicken bit is something outside the architecture, undocumented, and typically set or cleared by something like the bootloader and left that way forever.
Whether to use the ARM64 memory model or the x86 memory model is an explicit part of the chip architecture and is settable on a per-process basis.
The RISC-V community started talking about the same capability – choosing between the high performance RISC-V memory model, and an x86-compatible TSO memory model – back in 2017 and ratified it as a standard optional feature of RISC-V in June 2019.
That’s not quite true. TSO is a valid implementation of Arm’s relaxed memory model. The problem would be if you had to implement two relaxed memory models where neither was a relaxation of the other but it’s completely valid to implement the Arm ISAs with the x86 memory model. A bunch of fences become no-ops and a bunch of permitted-but-not-required orderings never happen, but you still have a valid Arm implementation.