1. 29
    1. 7

      https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines but seriously, I don’t see this taking off. Open source OSs can take on Microsoft with enough coders because it’s just software - hardware is a very different business. I wish it could happen, but it’s very doubtful IMHO.

      1. 32

        Depends on what you mean by ‘taking off’. RISC-V has successfully killed a load of in-house ISAs (and good riddance!). For small control-plane processors, you don’t care about performance or anything else much, you just want a cheap Turing-complete processor with a reasonably competent C compiler. If you don’t have to implement the C compiler, that’s a big cost saving. RISC-V makes a lot of sense for things like the nVidia control cores (which exist to set up the GPU cores and do management things that aren’t on the critical path for performance). It makes a lot of sense for WD to use instead of ARM for the controllers on their SSDs: the ARM license costs matter in a market with razor-thin margins, power and performance are dominated by the flash chips, and they don’t need any ecosystem support beyond a bare-metal C toolchain.

        The important lesson for RISC-V is why MIPS died. MIPS was not intended as an open ISA, but it was a de-facto one. Aside from LWL / LWR, everything in the ISA was out of patent. Anyone could implement an almost-MIPS core (and GCC could target MIPS-without-those-two-instructions) and many people did. Three things killed it in the market:

        First, fragmentation. This also terrifies ARM. Back in the PDA days, ARM handed out licenses that allowed people to extend the ISA. Intel’s XScale series added a floating-point extension called Wireless MMX that was incompatible with the ARM floating-point extension. This cost a huge amount for software maintenance. Linux, GCC, and so on had to have different code paths for Intel vs non-Intel ARM cores. It doesn’t actually matter which one was better; the fact that both existed prevented Linux from moving to a hard-float ABI for userland for a long time: the calling convention passed floating-point values in integer registers, so code could either call a soft-float library or be compiled for one or the other floating-point extension and still interoperate with other libraries that were portable across both. There are a few other examples, but that’s the most painful one for ARM.

        In contrast, every MIPS vendor extended the ISA in incompatible ways. The baseline for 64-bit MIPS is still often MIPS III (circa 1991) because it’s the only ISA that all modern 64-bit MIPS processors can be expected to handle. Vendor extensions only get used in embedded products. RISC-V has some very exciting fragmentation already, with both a weak memory model and TSO: the theory is that TSO will be used for systems that want x86 compatibility, the weak model for things that don’t, but code compiled for the TSO cores is not correct on weak cores. There are ELF header flags reserved to indicate which is which, but it’s easy to compile code for the weak model, test it on a TSO core, see it work, and have it fail in subtle ways on a weak core. That’s going to cause massive headaches in the future, unless all vendors shipping cores that run a general-purpose OS go with TSO.
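
        To make that failure mode concrete, here is a minimal C11 sketch (the names are mine, not from any RISC-V document): with relaxed atomics it happens to behave correctly on a TSO core, because TSO never reorders the two stores, but on a weakly ordered core the consumer can legitimately observe the flag before the payload.

            #include <pthread.h>
            #include <stdatomic.h>
            #include <stdio.h>

            static int payload;               /* ordinary data */
            static atomic_int ready;          /* flag guarding the data */

            static void *producer(void *arg) {
                (void)arg;
                payload = 42;
                /* Relaxed is wrong in portable C, but a TSO core cannot reorder
                 * these two stores, so testing there hides the bug. The portable
                 * fix is memory_order_release here (and acquire in the consumer),
                 * which makes the compiler emit the missing fence on weak cores. */
                atomic_store_explicit(&ready, 1, memory_order_relaxed);
                return NULL;
            }

            static void *consumer(void *arg) {
                (void)arg;
                while (!atomic_load_explicit(&ready, memory_order_relaxed))
                    ;                         /* spin until the flag is set */
                printf("%d\n", payload);      /* may print 0 on a weakly ordered core */
                return NULL;
            }

            int main(void) {
                pthread_t p, c;
                pthread_create(&p, NULL, producer, NULL);
                pthread_create(&c, NULL, consumer, NULL);
                pthread_join(p, NULL);
                pthread_join(c, NULL);
                return 0;
            }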

        Second, a modern ISA is big. Vector instructions, bit-manipulation instructions, virtualisation extensions, two-pointer atomic operations (needed for efficient RCU and a few other lockless data structures) and so on. Dense encoding is really important for performance (i-cache usage). RISC-V burned almost all of its 32-bit instruction space in the core ISA. It’s quite astonishing how much encoding space they’ve managed to consume with so few instructions. The C extension consumes all of the 16-bit encoding space and is severely over-fitted to the output of an unoptimised GCC on a small corpus of C code. At the moment, every vendor is trampling over all of the other vendors in the last remaining bits of the 32-bit encoding space. RISC-V really should have had a 48-bit load-64-bit-immediate instruction in the core spec to force everyone to implement support for 48-bit instructions, but at the moment no one uses the 48-bit space and infrequently used instructions are still consuming expensive 32-bit real estate.
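
        For a sense of what that missing instruction would buy, consider materialising an arbitrary 64-bit constant. The trivial C function below is just a sketch of the problem; the comment describes what RV64 compilers typically emit today (exact sequences vary by compiler and by constant).

            #include <stdint.h>

            /* No single 32-bit RV64 instruction can encode an arbitrary 64-bit
             * immediate. Compilers today either build the value with a chain of
             * lui/addi/slli instructions or load it from a nearby constant pool
             * (auipc + ld), costing several 32-bit instructions and/or a d-cache
             * access, where the wider load-immediate form suggested above could
             * carry far more of the constant in a single instruction. */
            uint64_t magic_constant(void) {
                return 0x0123456789ABCDEFULL;
            }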

        Third, the ISA is not the end of the story. There’s a load of other stuff (interrupt controllers, DMA engines, management interfaces, and so on) that needs to be standardised before you can have a general-purpose compute platform. Porting an OS to a new ARM SoC used to be a huge amount of effort because of this. It’s now a lot easier because ARM has standardised a lot of this. x86 had some major benefits from Compaq copying IBM: every PC had a compatible bootloader that provided device enumeration and some basic device interfaces. You could write an OS that would access a disk, read from a keyboard, and write text to a display for a PC that would run on any PC (except the weird PC98 machines from Japan). After early boot, you’d typically stop doing BIOS thunks and do proper PCI device enumeration and load real drivers, but that baseline made it easy to produce boot images that ran on all hardware. The RISC-V project is starting to standardise this stuff but it hasn’t been a priority. MIPS never standardised any of it.

        The RISC-V project has had a weird mix from the start: it explicitly says that it’s not a research project and wants to be simple, while also depending on research ideas. The core ISA is a fairly mediocre mid-90s ISA. It’s fine, but turning it into something that’s competitive with modern x86 or AArch64 is a huge amount of work. Some of those early design decisions are going to need to either be revisited (breaking compatibility) or are going to incur technical debt. The first RISC-V spec was frozen far too early, with timelines largely driven by PhD students needing to graduate rather than the specs actually being in a good state. Krste is a very strong believer in micro-op fusion as a solution to a great many problems, but if every RISC-V core needs to be able to identify 2-3 instruction patterns and fuse them into a single micro-op to do operations that are a single instruction on other ISAs, that’s a lot of power and i-cache being consumed just to reach parity. There’s a lot of premature optimisation (e.g. instruction layouts that simplify decoding on an in-order core) that hurts other things (e.g. uses more encoding space than necessary), where the saving is small and the cost will become increasingly large as the ISA matures.

        AArch64 is a pretty well-designed instruction set that learns a lot of lessons from AArch32 and other competing ISAs. RISC-V is very close to MIPS III at the core. The extensions are somewhat better, but they’re squeezed into the tiny amount of left-over encoding space. The value of an ecosystem with no fragmentation is huge. For RISC-V to succeed, it needs to get a load of the important extensions standardised quickly, define and standardise the platform specs (underway, but slow, and without enough of the people who actually understand the problem space contributing, not helped by the fact that the RISC-V Foundation is set up to discourage contributions), and get software vendors to agree on those baselines. The problem is that, for a silicon vendor, one big reason to pick RISC-V over ARM is the ability to differentiate your cores by adding custom instructions. Every RISC-V vendor’s incentives are therefore diametrically opposed to the goals of the ecosystem as a whole.

        1. 4

          Thanks for this well laid out response.

          The problem is that, for a silicon vendor, one big reason to pick RISC-V over ARM is the ability to differentiate your cores by adding custom instructions. Every RISC-V vendor’s incentives are therefore diametrically opposed to the goals of the ecosystem as a whole.

          This is part of what makes me skittish, as well. I almost prefer the ARM model, to keep a lid on fragmentation, over RISC-V’s “linux distro” model. But also, deep down, if we manage to create the tooling for binaries to adapt to something like this and have a form of Universal Binary that progressively enhances with present CPUIDs, that would make for an exciting space.

          1. 6

            But also, deep down, if we manage to create the tooling for binaries to adapt to something like this and have a form of Universal Binary that progressively enhances with present CPUIDs, that would make for an exciting space.

            Apple has been pretty successful at this, encouraging developers to distribute LLVM IR so that they can do whatever microarchitectural tweaks they want for any given device. Linux distros could do something similar if they weren’t so wedded to GCC and FreeBSD could if they had more contributors.

            You can’t do it with one-time compilation very efficiently because each vendor has a different set of extensions, so it’s a combinatorial problem. The x86 world is simpler because Intel and AMD almost monotonically add features. Generation N+1 of Intel CPUs typically supports a superset of generation N’s features (unless they completely drop something and are never bringing it back, such as MPX) and AMD is the same. Both also tend to adopt popular features from the other, so you have a baseline that moves forwards. That may eventually happen with RISC-V but the scarcity of efficient encoding space makes it difficult.
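
            As a sketch of what per-feature runtime dispatch (the x86 model described above) looks like on RISC-V Linux, a one-time-compiled library could pick a code path with getauxval(AT_HWCAP). The hwcap bit for the V extension below follows Linux’s one-bit-per-letter convention but the macro name is my own, and the combinatorial problem is exactly that you would need a variant (or a chain of such checks) for every extension combination you care about.

                #include <stddef.h>
                #include <sys/auxv.h>

                /* Assumption: Linux exposes single-letter RISC-V extensions as one
                 * hwcap bit per letter; this constant is illustrative, not an
                 * official name. */
                #define HWCAP_RISCV_V (1UL << ('V' - 'A'))

                static void scale_scalar(const float *in, float *out, size_t n) {
                    for (size_t i = 0; i < n; i++)
                        out[i] = in[i] * 2.0f;   /* baseline build, no extensions assumed */
                }

                static void scale_vector(const float *in, float *out, size_t n) {
                    /* Placeholder: a real build would compile this translation unit
                     * with the V extension enabled and rely on vector code. */
                    scale_scalar(in, out, n);
                }

                typedef void (*scale_fn)(const float *, float *, size_t);

                scale_fn select_scale(void) {
                    unsigned long hwcap = getauxval(AT_HWCAP);
                    return (hwcap & HWCAP_RISCV_V) ? scale_vector : scale_scalar;
                }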

            On the other hand, if we enter Google’s dystopia, the only AoT-compiled code will be Chrome and everything else will be JavaScript and WebAssembly, so your JIT can tailor execution for whatever combination of features your CPU happens to have.

          2. 1

            Ultimately, vendor extensions are just extensions. Suppose a CPU is RV64GC plus proprietary extensions: plain RV64GC code will still work on it.

            This is much, much better than the alternative (vendor-specific instructions implemented without extensions).

        2. 2

          Vendor extensions only get used in embedded products. RISC-V has some very exciting fragmentation already, with both a weak memory model and TSO: the theory is that TSO will be used for systems that want x86 compatibility, the weak model for things that don’t, but code compiled for the TSO cores is not correct on weak cores. There are ELF header flags reserved to indicate which is which, but it’s easy to compile code for the weak model, test it on a TSO core, see it work, and have it fail in subtle ways on a weak core. That’s going to cause massive headaches in the future, unless all vendors shipping cores that run a general-purpose OS go with TSO.

          I don’t understand why they added TSO in the first place.

          Third, the ISA is not the end of the story. There’s a load of other stuff (interrupt controllers, DMA engines, management interfaces, and so on) that needs to be standardised before you can have a general-purpose compute platform. Porting an OS to a new ARM SoC used to be a huge amount of effort because of this. It’s now a lot easier because ARM has standardised a lot of this. x86 had some major benefits from Compaq copying IBM: every PC had a compatible bootloader that provided device enumeration and some basic device interfaces. You could write an OS that would access a disk, read from a keyboard, and write text to a display for a PC that would run on any PC (except the weird PC98 machines from Japan). After early boot, you’d typically stop doing BIOS thunks and do proper PCI device enumeration and load real drivers, but that baseline made it easy to produce boot images that ran on all hardware. The RISC-V project is starting to standardise this stuff but it hasn’t been a priority. MIPS never standardised any of it.

          Yeah this part bothers me a lot. It looks like a lot of the standardization effort is just whatever OpenRocket does, but almost every RISC-V cpu on the market right now has completely different peripherals outside of interrupt controllers. Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

          1. 7

            I don’t understand why they added TSO in the first place.

            Emulating x86 on weakly ordered hardware is really hard. Several companies have x86-on-ARM emulators. They either only work with a single core, insert far more fences than are actually required, or fail subtly on concurrent data structures. It turns out that after 20+ years of people trying to implement TSO efficiently, there are some pretty good techniques that don’t sacrifice much performance relative to software that correctly inserts the fences, and that perform a lot better on the software a lot of people actually write, where fences are inserted defensively because that’s easier than understanding the C++11 memory model.

            Yeah this part bothers me a lot. It looks like a lot of the standardization effort is just whatever OpenRocket does, but almost every RISC-V cpu on the market right now has completely different peripherals outside of interrupt controllers. Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

            Initially they proposed their own thing that was kind-of like FDT but different, because Berkeley. Eventually they were persuaded to use FDT for embedded things and something else (probably ACPI) for more general-purpose systems.

            The weird thing is that Krste really understands the value of an interoperable ecosystem. He estimates the cost of building it at around $1bn (ARM thinks he’s off by a factor of two, but either way it’s an amount that the big four tech companies could easily spend if it were worthwhile). Unfortunately, the people involved with the project early were far more interested in getting VC money than in trying to build an open ecosystem (and none of them really had any experience with building open source communities and refused help from people who did).

            1. 2

              Are the Apple and Microsoft emulators on the “far more fences than are actually required” side? They don’t seem to have many failures..

              1. 2

                I don’t know anything about the Apple emulator and since it runs only on Apple hardware, it’s entirely possible that either Apple’s ARM cores are TSO or have a TSO mode (TSO is strictly more strongly ordered than the ARM memory model, so it’s entirely conformant to be TSO). I can’t share details of the Microsoft one but you can probably dump its output and look.

          2. 2

            there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible

            Well, device trees (FDT) solve the “generic kernel” problem specifically, but it all still sucks. Everything is so much better when everyone has standardized most peripherals.

            1. 1

              That’s the best solution, but you still have to have the bootloader pass in a device tree, and that device tree won’t get updated at the same cadence as the kernel (so a fix may take a while to reach you if someone finds a bug in a device tree).

              1. 2

                For most devices it’s the kernel that maintains the device tree. FDT is not really designed to be a stable description; it changes with the kernel’s interface.

                1. 2

                  FDT is not specific to a kernel. The same FDT blobs work with FreeBSD and Linux, typically. It’s just a description of the devices and their locations in memory. It doesn’t need to change unless the hardware changes and if you’re on anything that’s not deeply embedded it’s often shipped with U-Boot or similar and provided to the kernel. The kernel then uses it to find any devices it needs in early boot or which are attached to the core via interfaces that don’t support dynamic enumeration (e.g. you would put the PCIe root complex in FDT but everything on the bus is enumerated via the bus).
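
                  For what that looks like in practice, here’s a minimal sketch using libfdt; the node path and the assumption of 2-cell addresses and sizes are mine, and real code would walk #address-cells/#size-cells on the parent node instead of hard-coding them.

                      #include <stdint.h>
                      #include <stdio.h>
                      #include <libfdt.h>

                      /* Look up a UART's MMIO window in a flattened device tree blob. */
                      int print_uart_window(const void *fdt) {
                          if (fdt_check_header(fdt) != 0)
                              return -1;                        /* not a valid FDT */

                          /* Hypothetical node path for illustration. */
                          int node = fdt_path_offset(fdt, "/soc/uart@10000000");
                          if (node < 0)
                              return -1;

                          int len = 0;
                          const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &len);
                          if (reg == NULL || len < (int)(4 * sizeof(fdt32_t)))
                              return -1;

                          /* "reg" holds <base size> pairs of big-endian cells;
                           * assuming #address-cells = #size-cells = 2 here. */
                          uint64_t base = ((uint64_t)fdt32_to_cpu(reg[0]) << 32) | fdt32_to_cpu(reg[1]);
                          uint64_t size = ((uint64_t)fdt32_to_cpu(reg[2]) << 32) | fdt32_to_cpu(reg[3]);
                          printf("uart at 0x%llx, size 0x%llx\n",
                                 (unsigned long long)base, (unsigned long long)size);
                          return 0;
                      }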

                  The reason for a lot of churn recently has been the addition of overlays to the FDT spec. These allow things that are equivalent to option roms to patch the root platform’s FDT so you can use FDT for expansions connected via ad-hoc non-enumerable interfaces.

                  1. 2

                    It doesn’t need to change.. but Linux developers sometimes like to find “better” ways of describing everything, renaming stuff, etc. To be fair in 5.x this didn’t really happen all that much.

                    And of course it’s much worse if non-mainline kernels are introduced. If there’s been an FDT for a vendor kernel that shipped with the device, and later drivers got mainlined, the mainlined drivers often expect different properties completely because Linux reviewers don’t like vendor ways of doing things, and now you need very different FDT..

                    The reason for a lot of churn recently has been the addition of overlays to the FDT spec

                    That’s not that recent?? Overlays are from like 2017..

          3. 1

            Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

            There’s apparently serious effort put into UEFI.

            With RPi4 UEFI boot, FDT isn’t used. I suppose UEFI itself has facilities to make FDT redundant.

            1. 2

              With RPi4-UEFI, you have a choice between ACPI and FDT in the setup menu.

              It’s pretty clever what they did with ACPI: the firmware fully configures the PCIe controller by itself and presents a generic XHCI device in the DSDT as if it was just a directly embedded non-PCIe memory-mapped XHCI.

              1. 1

                I have to ask, what is the benefit of special casing the usb3 controller?

                1. 2

                  The OS does not need to have a driver for the special Broadcom PCIe host controller.

                  1. 1

                    How is the Ethernet handled?

                    1. 2

                      Just as a custom device, how else? :)

                      Actually it’s kinda sad that there’s no standardized Ethernet “host controller interface” still… (other than some USB things)

                      1. 1

                        Oh. So Ethernet is not on PCIe to begin with, then. Only XHCI. I see.

        3. 2

          This doesn’t paint a very good picture of RISC-V, IMHO. It’s like some parody of worse-is-better design philosophy, combined with basically ignoring all research in CPU design since 1991, in favour of a core that’s easy to make an educational implementation of but that makes the job of compiler authors and implementers harder. Of course, it’s being peddled by GNU zealots and RISC revanchists, but it won’t benefit the things they want; instead, it’ll benefit vanity nationalist CPU designs (that no one will use except the GNU zealots; see Loongson) and deeply fragmented deep-embedded products (where software freedom and the ISA don’t matter other than shaving licensing fees off).

          1. 3

            Ignoring the parent and focusing on hard data instead, RV64GC has higher code density than ARM, x86 and even MIPS16, so the encoding they chose isn’t exactly bad, objectively speaking.

            1. 8

              Note that Andrew’s dissertation is using integer-heavy, single-threaded, C code as the evaluation and even then, RISC-V does worse than Thumb-2 (see Figure 8 of the linked dissertation). Once you add atomics, higher-level languages, or vector instructions, you see a different story. For example, RISC-V made an early decision to make the offset of loads and stores scaled with the size of the memory value. Unfortunately, a lot of dynamic languages set one of the low bits to differentiate between a pointer and a boxed value. They then use a complex addressing mode to combine the subtraction of one with the addition of the field offset for field addressing. With RISC-V, this requires two instructions. You won’t see that pattern in pure C code anywhere but you’ll see it all over the place in dynamic language interpreters and JITs.
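
              A small C sketch of the pattern being described (the struct and tag here are hypothetical): runtimes mark heap references by setting the low pointer bit, so every field load has to fold a -1 into the field offset. That is exactly the displacement a single load can absorb on some ISAs but, per the above, costs an extra address-computation instruction on RISC-V.

                  #include <stdint.h>

                  #define TAG_HEAP_REF 1u          /* low bit set means "boxed pointer" */

                  struct obj {
                      uint64_t header;
                      uint64_t field;              /* byte offset 8 from the object base */
                  };

                  static inline uint64_t load_field(uintptr_t tagged_ref) {
                      /* Effective address = tagged_ref - 1 + 8, i.e. a load with
                       * displacement 7 from the tagged value. Folding the untag into
                       * the load's displacement is the trick interpreters and JITs
                       * rely on. */
                      struct obj *o = (struct obj *)(tagged_ref - TAG_HEAP_REF);
                      return o->field;
                  }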

              1. 1

                I think there was another example of something far more basic that takes two instructions on RISC-V for no good reason, just because of their obsession with minimal instructions. Something return related?? Of course I lost the link to that post >_<

              2. 1

                Interesting. There’s work on an extension to help interpreters and JITs, which may or may not help mitigate this.

                In any event, it is far from ready.

                1. 7

                  I was the chair of that working group but I stepped down because I was unhappy with the way the Foundation was being run.

                  The others involved are producing some interesting proposals, though a depressing amount of it is trying to fix fundamentally bad design decisions in the core spec. For example, the i-cache is not coherent with respect to the d-cache on RISC-V. That means you need explicit sync instructions after every modification to a code page. The hardware cost of making them coherent is small (i-cache lines need to participate in cache coherency, but they can only ever be in shared state, so the cache doesn’t have to do much. If you have an inclusive L2, then the logic can all live in L2) but the overheads from not doing it are surprisingly high.

                  SPARC changed this choice because the overhead on process creation from the run-time linker having to do i-cache invalidates on every mapped page was huge. Worse, RISC-V’s i-cache invalidate instruction is local to the current core. That means that you actually need to do a syscall, which does an IPI to all cores, which then invalidates the i-cache. That’s insanely expensive but the initial measurements were from C code on a port of Linux that didn’t do the invalidates (and didn’t break because the i-cache was so small you were never seeing the stale entries).
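
                  To see where that cost lands, here is a rough sketch of the JIT-style pattern, assuming POSIX mmap and the GCC/Clang __builtin___clear_cache builtin (the code generator is a placeholder). On an ISA with coherent i-caches the cache-maintenance step is nearly free; on RISC-V Linux it is where the syscall-plus-IPI expense described above shows up.

                      #include <stddef.h>
                      #include <sys/mman.h>

                      /* Placeholder for a real code generator; assumed to write valid
                       * instructions into buf and return the number of bytes emitted. */
                      static size_t generate_code(unsigned char *buf, size_t cap) {
                          (void)buf; (void)cap;
                          return 0;
                      }

                      void *emit_function(void) {
                          size_t len = 4096;
                          unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                          if (buf == MAP_FAILED)
                              return NULL;

                          size_t used = generate_code(buf, len);

                          /* The i-cache may still hold stale bytes for this range; this
                           * call performs the required maintenance. On RISC-V Linux it
                           * ends up in the kernel so the invalidate reaches every hart. */
                          __builtin___clear_cache((char *)buf, (char *)(buf + used));
                          return buf;
                      }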

                  1. 1

                    L1$ not coherent

                    Christ. How did that go anywhere?

                    1. 7

                      No one who had worked on a non-toy OS or compiler was involved in any of the design work until all of the big announcements had been made and the spec was close to final. The Foundation was set up so that it was difficult for any individuals to contribute (that’s slowly changing) - you had to pay $99 or ask for the fee to be waived to give feedback on the specs as an individual. You had to pay more to provide feedback as a corporation and no corporation was going to pay thousands of dollars in membership and the salaries of their contributors to provide feedback unless they were pretty confident that they were going to use RISC-V.

                      It probably shouldn’t come as a surprise that saying to people ‘we need your expertise, please pay us money so that you can provide it’ didn’t lead to a huge influx of expert contributors. There were a few, but not enough.

      2. 7

        Keep in mind an ISA isn’t hardware, it’s just a specification.

        1. 6

          That ties into my point - RISC-V is kinda useless without fabbing potential. And that’s insanely expensive, which means the risk involved is too high to take on established players.

          1. 9

            According to the article, it seems that Samsung, Western Digital, NVIDIA, and Qualcomm don’t think the risk is too high, since they plan to use RISC-V. They have plenty of money to throw at any problems, such as inadequate fabbing potential. Hobbyists may benefit from RISC-V, but (like Linux) it’s not just for hobbyists.

            1. 9

              According to the article, it seems that Samsung, Western Digital, NVIDIA, and Qualcomm don’t think the risk is too high, since they plan to use RISC-V.

              I think it is more accurate they plan to use the threat of RISC-V to improve negotiating position, use it in some corner cases and as a last ditch hedge. Tizen is a prime example of such a product.

              1. 2

                I think it is more accurate they plan to use the threat of RISC-V to improve negotiating position, use it in some corner cases and as a last ditch hedge.

                Yet WD and NVIDIA designed their own RISC-V cores. Isn’t it a bit too much for “insurance”?

                The fact here is that they do custom silicon and need CPUs in them for a variety of purposes. Until now, they paid the ARM tax. From now on, they don’t have to, because they can and do just use RISC-V.

                I’m appalled at how grossly the impact of RISC-V is being underestimated.

                1. 4

                  Yet WD and NVIDIA designed their own RISC-V cores. Isn’t it a bit too much for “insurance”?

                  I don’t think so – it isn’t purely insurance, it is negotiating power. The power can be worth tens (even hundreds) of millions for companies at the scale of WD and NVIDIA. Furthermore, they didn’t have to develop fabs for the first time; both have existing manufacturing prowess and locations. I think it is a rather straightforward ROI-based business decision.

                  The fact here is that they do custom silicon and need CPUs in them for a variety of purposes. Until now, they paid the ARM tax. From now on, they don’t have to, because they can and do just use RISC-V.

                  They will use this to lower the ARM tax without actually pulling the trigger on going with something as different as RISC-V (except on a few low yield products to prove they can do it, see Tizen and Samsung’s strategy).

                  I’m appalled at how grossly the impact of RISC-V is being underestimated.

                  Time will tell, but I think that RISC-V only becomes viable if Apple buys ARM and snuffs out new customers, only maintaining existing contracts.

                  1. 1

                    I don’t think so – it isn’t purely insurance, it is negotiating power.

                    Do you think they have any reason left to license ARM, when they clearly can do without?

                    Time will tell, but I think that RISC-V only becomes viable if Apple buys ARM and snuffs out new customers, only maintaining existing contracts.

                    I see too much industry support behind RISC-V at this point. V extension will be quite the spark, so we’ll see how it plays out after that. All it’ll take is one successful high performance commercial implementation.

                    1. 3

                      Do you think they have any reason left to license ARM, when they clearly can do without?

                      I think you are underestimating the cost of rebuilding an entire ecosystem. I have run ThunderX arm64 servers in production – ARM has massive support behind it, and we still fell into weird issues, niches, and problems. Our task was a fantastic fit (large-scale OCR) and it was still a tough setup; in the end, due to poor optimizations and other support issues, it probably wasn’t worth it.

                      I see too much industry support behind RISC-V at this point. V extension will be quite the spark, so we’ll see how it plays out after that. All it’ll take is one successful high performance commercial implementation.

                      Well – I think it actually takes a marketplace of commercial implementations so that selecting RISC-V isn’t single-vendor lock-in forever, but I take your meaning.

            2. 3

              As I said up top, I hope this really happens. But I’m not super confident it’ll ever be something we can use to replace our AMD/Intel CPUs. If it just wipes out the current microcontroller and small CPU space that’s good too, since those companies don’t usually have good tooling anyway.

              I just think features-wise it’ll be hard to beat the current players.

              1. 1

                I just think features-wise it’ll be hard to beat the current players.

                Can you elaborate on this point?

                What are the features? Who are the current players?

                1. 4

                  Current players are AMD64 and ARM64. Features lacking in RV64 include vector extension.

                  1. 4

                    I notice you’re not the author of the parent post. Still,

                    Features lacking in RV64 include vector extension.

                    The V extension is due to become an active standard by September if all goes well. This is practically like saying “tomorrow”, from an ISA timeline perspective. To put it into context, RISC-V was introduced in 2010.

                    Bit manipulation (B) is also close to becoming an active standard, and is also pretty important.

                    With these extensions out of the way, and software support where it is today, I see no features stopping low-power, high-performance implementations from appearing and getting into smartphones and such.

                    AMD64 and ARM64.

                    The amd64 ISA is CISC legacy. Popular or not, it’s long overdue for replacement.

                    ARM64 isn’t a thing. You might have meant aarch64 or armv8.

                    I’m particularly interested whether the parent meant ISAs or some company names regarding current players.

                    1. 4

                      ARM64 isn’t a thing. You might have meant aarch64 or armv8.

                      The naming is a disaster :/ armv8 doesn’t specifically mean 64-bit because there’s technically an armv8 aarch32, and aarch64/32 is just an awful name that most people don’t want to say out loud. So even ARM employees are okay with the unofficial “arm64” name.


                      Another player is IBM with OpenPOWER.. Relatively fringe compared to ARM64 (which the Bezos “Cloud” Empire is all-in on, yay) but hey, there is a supercomputer and some very expensive workstations for open source and privacy enthusiasts :) and all the businesses buying IBM’s machines that we don’t know much about. That’s much more than desktop/server-class RISC-V… and they made the POWER ISA royalty-free too now I think.

                      1. 7

                        The naming is a disaster :/ armv8 doesn’t specifically mean 64-bit because there’s technically an armv8 aarch32

                        Amusingly, the first version of the ARMv8 spec made both AArch32 and AArch64 optional. I implemented a complete 100% standards-compliant soft core based on that version of the spec. They later clarified it so that you had to implement at least one out of AArch32 and AArch64.

                      2. 4

                        SPARC is also completely open. Yes, POWER is open now, but I don’t see why it would fare better than SPARC.

                        1. 1

                          In terms of diversity of core designers and chip makers, maybe not. But POWER generally just as an ISA is doing much better. IBM clearly cares about making new powerful chips and is cultivating a community around open firmware.

                          Who cares about SPARC anymore? Seems like for Oracle it’s kind of a liability. And Fujitsu, probably the most serious SPARC company as of late, is on ARM now.

                  2. 1

                    Current players are AMD64 and ARM64

                    And ARM32/MIPS/AVR/SuperH/pick your favorite embedded ISA. The first real disruption brought by RISC-V will be in microcontrollers and in ASICs. With RISC-V your board/chip isn’t tied to a single company (like ARM Holdings, MIPS Technologies, Renesas, etc.). If they go under/decide to get out of the uC market/slash their engineering budgets/start charging double, you can always license from another vendor (or roll your own core). In addition, the tooling for RISC-V is getting good fairly fast and is mostly open source. You don’t have to use the vendor’s closed-source C compiler, or be locked into their RTOS.

                    1. 1

                      The first real disruption

                      Indeed. The second wave is going to start soon, triggered by the V and B extensions reaching stable status.

                      This will enable Qualcomm and friends to release smartphone-tier SoCs with RISC-V CPUs in them.

          2. 6

            Yes, fabbing is expensive, but SiFive is a startup and it still managed to fab RISC-V chips that can run a Linux desktop. I don’t think there is a need to be too pessimistic.

            1. 4

              The economics are quite interesting here. Fabs are quite cheap if you are a brand-new startup or a large established player. They give big discounts to small companies that have the potential to grow into large customers (because if you get them early then they end up at least weakly tied into a particular cell library and you have a big long-term revenue stream). They give good prices to big customers, because they amortise the setup costs over a large number of wafers. For companies in the middle, the prices are pretty bad. SiFive is currently getting the steeply discounted rate. It will be interesting to see what happens as they grow.

          3. 5

            RISC-V is kinda useless without fabbing potential.

            The RISC-V Foundation has no interest in fabbing chips itself.

            And that’s insanely expensive, which means the risk involved is too high to take on established players.

            Several chips with RISC-V-based CPUs in them have been fabricated. Some already ship as components in other products. Some of these chips are available for sale.

            RISC-V’s got significant industry backing.

            Refer to: https://riscv.org/membership/

          4. 3

            There are a number of companies that provide design and fabbing services, or at least help you realise a design.

            The model is similar to e.g. SOLR, where the core is an open source implementation, but enterprise services are actually provided by a number of companies.

          5. 2

            With ARM on the market, RISC-V has to be on a lot of people’s minds; specifically, those folks that are already licensing ARM’s ISA, and producing chips…

      3. 3

        Open source OSs can take on Microsoft with enough coders because it’s just software

        Yet we haven’t seen that happening either. In general, creating a product that people love requires a bit more than open-source software. It requires vision, a deep understanding of humans, and a rock-solid implementation. This usually means the cathedral approach, which is exactly the opposite of the FOSS approach.

        1. 5

          Maybe not for everyone on the market, but I’ve been using Linux exclusively for over 10 years now, and I’m not the only one. Also, for some purposes (smartphones, servers, SBCs, a few others) Linux is almost the only choice.

          1. 3

            You are absolutely in the minority though in terms of desktop computing. The vast majority of people can barely get their hands held through Mac OS, much less figure out wtf is wrong with their graphics drivers, why XOrg has shit out on them for the 3rd time that week, or any number of other problems that can (and do) crop up when using Linux on the desktop. Android, while technically Linux, doesn’t really count IMO because it’s almost entirely driven by the vision, money, and engineers of a single company that uses it as an ancillary to their products.

            1. 6

              That’s a bit of a stereotype - I haven’t edited an Xorg conf file in a very long time. It’s my daily driver so stability is a precondition. My grandma runs Ubuntu and it’s fine for what she needs.

              1. 3

                Not XOrg files anymore, maybe monitors.xml, maybe it’s xrandr, whatever. I personally just spent 4+ hours trying to get my monitor + graphics setup to behave properly with my laptop just last week. Once it works, it tends to keep working (though not always, it’s already broken on me once this week for seemingly no reason) unless you change monitor configuration or it forgets your config for some reason, but getting it to work in the first place is a massive headache depending on the hardware. Per-monitor DPI scaling is virtually impossible on XOrg, and Wayland is still a buggy mess with limited support. Things get considerably more complex with a HiDPI, multi-head setup, which are things that just work on Windows or Mac OS.

                The graphics ecosystem for Linux is an absolute joke too. That being said, my own mother runs Ubuntu on a computer that I built and set up, it’s been relatively stable since I put in the time to get it working in the first place.

        2. 2

          Not on the desktop, for sure. Server-side, however, GNU/Linux is a no-brainer, the default choice.