1. 27
  1.  

    1. 10

      I was not aware of this distro! It looks quite appealing. But, in case anyone else was confused like me, it has nothing to do with the Arch-based https://chimeraos.org

      It is frustrating how slow RISC-V hardware has been to attain performance parity with commercial incumbent architectures. Given the economics of chip-making, I don’t really expect the lack of a reasonably performant build server candidate board to change any time soon. Rather, I expect RISC-V to continue nibbling away at the “embedded” low end.

      1. 5

        It’s been hit by

        • COVID

        • Intel going slowly bankrupt, leading to what would have been a very interesting chip two years ago getting cancelled; the same cores are only now hitting the market in an SoC from a Chinese company instead, quite probably with weaker DDR and other IP and on an older process node.

        • a very interesting chip that was probably going to be out about now, and probably leapfrogging the RK3588 and Pi 5 chip (whatever that’s called), getting cancelled due to US sanctions.

        • Android development is gated by the RVA23 spec, which was just published in December, so actual chips will be a year or three away. The V spec and some others were finished two years later than had been hoped for.

        • most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

        • there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can compete with recent-generation x86 and Apple, which for a few years now has consistently looked to be around 2027.

        Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!

        1.  

          most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

          That market is never specifically catered to by SoC manufacturers; you are getting chips intended for Android TV boxes or industrial automation. Automotive investment right now is at rock bottom, so that’s not siphoning away any RISC-V money.

          there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can compete with recent-generation x86 and Apple

          It is strange to me that you think RISC-V phones and PCs would ever be interesting to a business even if performance were good. RISC-V as an ecosystem currently has zero advantages. Even if you make your own SoC, you’re going to be licensing a core design from someone if you want anything that can compete with Arm’s portfolio of freebies they just give you at the base tier. Anyone large enough to benefit from a patent-free, license-free ISA to develop their own high-performance cores is so large they can just negotiate better deals with Arm.

          There’s simply no business case for Cortex-A-tier RISC-V, not even Qualcomm could be bothered to invest in licensing some SiFive cores or making their own of that performance tier after having a lawsuit with Arm over ISA license fees.

          Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!

          Are you saying Chimera-Linux should simply keep doing RISC-V builds to anticipate hardware that doesn’t exist, just in case any happens to pop up out of nowhere one day? What’s their benefit in doing that?

      2. 6

        RISC-V is like quantum…always just a few years away

        1. 16

          I think this is completely untrue. RISC-V is real, exists, works, there are hardware products being built, embedded into mass produced devices, etc.

          It’s just that in the space most of us are mostly interested in - modern high-performance CPUs - the instruction set is maybe 1% of the story. Modern CPUs are among the most elaborate artifacts of human engineering, the result of decades of research and improvement, and part of a huge industrial and economic base built around them. That’s not something that can be significantly altered rapidly.

          Just look how long it took Arm to get into high performance desktop CPUs. And there was big and important business behind it, with strategy and everything.

          1. 7

            They’re not asking for high-performance desktop CPUs here though. Judging by the following statement on IRC:

            <q66> there isn't one that is as fast as rpi5
            <q66> it'd be enough if it was
            

            it sounds like anything that even approaches a current day midrange phone SoC would be enough. RPi5 is 4x Cortex-A76, which is about a midrange phone SoC in 2021.

            1.  

              Last I checked, the most commonly recommended RISC-V boards were slower than a Raspberry Pi 3, and the latest and greatest and most expensive boards were somewhere in the middle between RPi 3 and 4. So yeah, pretty slow.

                1.  

                  That’s very misleading. That’s what Geekbench says, but GB isn’t relevant to software package build farms. For building software, most common RISC-V SBCs at the moment are far closer to a Pi 4 than a Pi 3, ESPECIALLY given that the Pi 3 never had more than 1 GB RAM (and the Pi 4 never more than 8 GB) while 8 GB, 16 GB, and 32 GB RAM RISC-V SBCs are everywhere, and you can get the 128 GB RAM, 64-core Milk-V Pioneer for a lot less money than the Ampere Altra they talk about in that post.

                2.  

                  We would have had faster than Pi 5 RISC-V machines right around now if US sanctions hadn’t nerfed the SG2380.

                3. 5

                  Beyond microcontrollers, I really haven’t seen anything remotely usable. I’d love to be wrong though.

                  I tried to find a Pi Zero replacement, but the few boards available all had terrible performance. The one I bought in the end turned out to have an unusable draft implementation of the vector instructions, and it’s significantly slower than any other Linux board I’ve ever used, from IO to raw CPU performance. Not to mention the poor software support (I truly despise device trees at this point).

                  1.  

                    I truly despise device trees at this point

                    Could be worse, could be ACPI.

                    1.  

                      The Milk-V Duo beats the Pi Zero in every way, and that draft 0.7.1 vector implementation is perfectly usable via asm or C intrinsics in GCC 14+.

                    2.  

                      Just for a data point, the RP2350 chip in the Raspberry Pi Pico 2 includes a pair of RV32IMACZb* cores as alternates for the main Cortex-M33 cores: you can choose which architecture to boot into. The Pico 2 costs $5 in quantities of 1.
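
                      If you want to try the RISC-V side, the architecture is selected when you build; a minimal sketch with the pico-sdk (assuming pico-sdk 2.x, where rp2350-riscv is a supported PICO_PLATFORM value, and a riscv32 GCC toolchain on your PATH):

                        # configure an existing pico-sdk project for the Hazard3 RISC-V cores;
                        # the default Arm build would be PICO_PLATFORM=rp2350-arm-s
                        cmake -B build-riscv -DPICO_PLATFORM=rp2350-riscv .
                        cmake --build build-riscv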

                  2.  

                    I don’t know why almost everybody descended from Unix is still so adamant that you only build for target X on target X. The only project I can think of going another way is Zig.

                    1.  

                      Go, Python, Java, C#, Rust etc. all care about either cross compilation or running on multiple platforms.
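
                      Go in particular treats cross-compilation as a first-class feature, and Rust is close behind; a minimal sketch (GOOS/GOARCH are the standard Go knobs; the Rust target additionally needs a riscv64 linker configured, e.g. in .cargo/config.toml):

                        # cross-compile a pure-Go program for riscv64 Linux from any host
                        GOOS=linux GOARCH=riscv64 go build -o prog-riscv64 .

                        # Rust: install the target's standard library, then build against it
                        rustup target add riscv64gc-unknown-linux-gnu
                        cargo build --target riscv64gc-unknown-linux-gnu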

                      1.  

                        Because in cross-compilation scenarios you are unable to test the software you built, and that is quite a major point in software distributions, where each package runs its own tests at build time. If something is broken and the tests can’t catch it, you’re shipping broken software to users.
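
                        The usual middle ground, which is what comes up further down this thread, is running the target-arch test binaries through qemu-user; a sketch, assuming a Debian-style host (package names vary elsewhere):

                          # binfmt_misc registration lets the kernel hand riscv64 ELFs to qemu
                          sudo apt install qemu-user-static binfmt-support
                          # ./tests/check stands in for whatever test binary a package ships;
                          # it now runs (slowly) on the x86_64 build host
                          qemu-riscv64-static ./tests/check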

                      2.  

                        The reason for doing it this way was that there wasn’t any hardware we could use for performance reasons; I had obtained a SiFive HiFive Unmatched board in October 2021 and this proved to be useless for builds as the performance of this board is similar to Raspberry Pi 3. Other boards came later, but none of them improved on that front significantly enough.

                        I don’t really get this. So if these folks were around twenty years ago, would they not’ve gone into computing?

                        You can start something running, and so long as the platform is stable, you can come back for the results months later. Having self-built NetBSD on m68k and on VAX, and considering I compile pkgsrc binary packages for those architectures, all I can say is that it just seems like people are rushing for no good reason.

                        Also:

                        It burns a ton of power for how slow it is, because it fully loads a beefy x86 machine, and I’m not happy at all about that.

                        Are they running Intel? I’ve got a twelve core Ryzen 7900 here which, amongst other things, runs NetBSD/riscv in qemu, and even with all cores at 100%, takes less than 120 watts. This system even has spinning rust.

                        Also, if qemu is hanging, then there’s something wrong with the emulation or with the OS running in the emulation. NetBSD/riscv had issues for quite a while, but now it can compile and run for weeks at a time, unattended, compiling 24/7.

                        I wonder if there’s more to this than is written here.

                        1. 17

                          i don’t think you recognize what it takes to build an entire package repository for a linux distro while actually giving a damn about what is shipped (i.e. taking care of quality control so that it’s as good as the other architectures)

                          the idea is that all architectures are roughly at parity when it comes to what is in their repos and how well the stuff that is in there works in practice; using slow hardware (the hifive unmatched is ~7x slower than the emulation and the emulation is ~5x slower than the second slowest builder) means that the architecture will always lag heavily in the build queue and i really cannot be bothered to let it hold back the rest, especially not in a rolling release system that keeps everything always up to date

                          you may have built netbsd on a vax or whatever but the expectations for that are far lower (i doubt you have to build things like firefox and kde for it and keep them up to date)

                          qemu-user randomly hanging is long known and gradually mitigated over time but never truly fixed, it’s not a hardware problem and it can be experienced on many configurations (and btw, no, it’s not intel, it’s ryzen 5950x)

                          1.  

                            the hifive unmatched is ~7x slower than the emulation

                            Only if your emulation has far more cores. The U74 is pretty close to parity on per-core speed vs qemu on the best x86. For example, building a defconfig kernel on a VisionFive 2 (a massively cheaper way to buy U74 cores than the Unmatched) takes 67m35s real time, 250m user, while docker/qemu on a 24-core i9-13900HX takes 19m12s real time, 584m user. I haven’t done a -j4 on the i9, but we can already see that the native RISC-V board uses less than half the CPU time of the emulation on x86, and only ~3.5x the wall-clock time. An 8 GB VisionFive 2 is $100. Buy three of them and they’ll have package-building throughput roughly as high as the $1500 x86.
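
                            Checking that against the numbers above (67m35s ≈ 67.6 minutes):

                              echo "scale=2; 67.58/19.2" | bc   # 3.52 -> emulation is ~3.5x faster wall clock
                              echo "scale=2; 584/250" | bc      # 2.33 -> but burns ~2.3x the CPU time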

                            The price-performance of actual current RISC-V hardware is already well ahead of the price-performance of emulation on x86, if you actually have to buy the x86 machine rather than just use an idle one you have lying around. The energy-performance is also better: the VisionFive 2 uses around 5W, while that ~3.5x faster i9 uses 160W.

                            The SpacemiT K1 machines are pretty comparable to the VisionFive 2. They’re less powerful per core, and slightly less powerful overall - but only slightly. That kernel build takes 71m on my K1 board with -j8 vs, as above, 67m35s on the VisionFive 2. But the K1 boards have the advantage of being available with 16 GB RAM, which makes some packages easier to build and allows you to run more simultaneous builds of smaller packages to keep the cores busy.

                            The new Milk-V Megrez is about twice as fast as the VisionFive 2, at twice the price - a 16 GB machine vs two 8 GB VF2s. So no huge advantage, but fewer machines are easier to manage, and running more parallel builds on a bigger machine uses resources more effectively. You can also get them with 32 GB RAM, which is even more flexible.

                            Overall I think the article reflects a poor knowledge of the current state of the RISC-V SBC scene, especially when it comes to price-performance rather than absolute performance. If $650 was paid for the Unmatched, then that’s a very bad comparison to the actually faster $100 VisionFive 2.

                            If I was building a RISC-V build farm today, I’d be getting 8 or 10 of the 32 GB Milk-V Megrez at $269 each.

                            That’s comparable in total cost to one decent x86 machine, will be a lot faster than emulation on that x86, and will cost less than the Ampere Altra mentioned in the post.

                            1.  

                              Why not cross-compile?

                              1.  

                                I’m sure that comes with its own significant problems considering the massive hodgepodge of tooling used by all the different software that has to be built.

                                1.  

                                  ¯\_(ツ)_/¯ Yocto does it

                                  1.  

                                    Yocto claims to do it, but then it will give you broken output. Cross-compiling introduces far too many issues and is a massive maintenance burden to keep subtle bugs at bay; even something relatively easy to cross-compile, like the Linux kernel, will happily hand you the wrong userspace headers if you do it.
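
                                    For concreteness, a sketch of the kernel case using mainline make targets; the classic trap is that the userspace headers must be generated for the target arch, which goes wrong silently if ARCH gets dropped somewhere inside a larger build system:

                                      # cross-building a riscv64 kernel from x86_64
                                      make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- defconfig
                                      make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j"$(nproc)"
                                      # forgetting ARCH here silently installs x86 headers instead of riscv
                                      make ARCH=riscv INSTALL_HDR_PATH=./usr headers_install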

                                    It should be a sign of how unreliable, complex and prone to problems cross-compilation is that people would rather bootstrap three decades of Haskell compilers than trust the output of its cross-compiler just to get pandoc.

                              2.  

                                The title is “Dropping RISC-V support”, which is pretty extreme. It’s not “Downgrading RISC-V” or “Making RISC-V best-effort”. That’s why I wonder about this.

                                i really cannot be bothered to let it hold back the rest

                                Why would it, ever? Sure, if aarch64 failed to build commonly used software, you’d probably take some time to look at what’s going on, but why isn’t it just made into a “Tier 2” or “Tier 3” platform, with builds building what they can?

                                Forgive me if I’m naive - I know nothing about the Chimera build system - do builds need babysitting? Do failures require human time even if you’re just going to let whatever failed fail?

                                1. 8

                                  if a build fails, it needs to be investigated (the infrastructure picks up changes from git in real time and builds them as it goes); then a fix needs to be made, deployed, tested, waited for again (which may be a long time because in the time it’s been building that package very slowly, 1000 other updates may have been pushed, and the restarted batch may prioritize these updates first), or the template can be marked broken (if something in a batch fails, the whole batch fails, because it’s sorted specifically to account for correct dependency ordering, and there may be things depending on the failed package further in the batch), which means if it was previously built, it will remain in the repo out of date, and once revdeps start requiring the newer versions, they will also fail, etc.

                                  with an unreliable emulator, many of these failures are spurious; if an emulator hangs, it needs to be manually cancelled, or the builder will wait for several hours until it decides to kill it; with slow hardware, it takes longer than i’m willing to wait, and regardless, it adds to the effort and burden

                                  we put a lot of effort into making sure every supported architecture can build (almost) everything and that it stays in a good shape, i’m not comfortable with the idea of having something that is half broken, so regardless i’d probably be putting in the effort to fix things where possible, so might as well drop it

                                  1.  

                                    we put a lot of effort into making sure every supported architecture can build (almost) everything

                                    This is a significant degree of difficulty and it makes sense that if this is your standard for support you’d be inclined to drop a marginal platform entirely, and it’s no skin off my back. I do want to extol the virtues of not trying quite so hard though! By comparison, the last OpenBSD release built ports packages for 9 architectures, as few as 8300 for POWER9 and as many as 12000 for amd64. Often the “slow” archs don’t finish building packages until after the nominal release day.

                                    1. 7

                                      for reference (note how the arch with the fewest packages still has over 90% of the one with the most):

                                      q66@chimera-primary:~$ find /media/repo/repo-x86_64 -name '*.apk'|wc -l
                                      14158
                                      q66@chimera-primary:~$ find /media/repo/repo-aarch64 -name '*.apk'|wc -l
                                      14097 (99.5%)
                                      q66@chimera-primary:~$ find /media/repo/repo-ppc64le -name '*.apk'|wc -l
                                      13989 (98.8%)
                                      q66@chimera-primary:~$ find /media/repo/repo-riscv64 -name '*.apk'|wc -l
                                      13611 (96.1%)
                                      q66@chimera-primary:~$ find /media/repo/repo-loongarch64 -name '*.apk'|wc -l
                                      13491 (95.3%)
                                      q66@chimera-primary:~$ find /media/repo/repo-ppc64 -name '*.apk'|wc -l
                                      13343 (94.2%)
                                      q66@chimera-primary:~$ find /media/repo/repo-ppc -name '*.apk'|wc -l
                                      12779 (90.3%)
                                      

                                      the idea was always that every arch is fully usable; and the expectation was that at this point there would be hardware that is usable; but meanwhile, there is still nothing, there seems to be no effort to get something out, and i’m not particularly interested in supporting an architecture where the whole industry around it cares solely about “AI chips” and near-future e-waste

                                      1.  

                                        Well, that’s a bit harsh on the RISC-V ecosystem although not entirely unwarranted. FWIW I think you’re making the right choice here, and I’m excited to try out a non-systemd distro that cares about overall system quality and integrity. Others can nurse RISC-V linux through these awkward years.

                            2.  

                              They mention they have a loongarch builder - does anyone know what hardware they use for building? I’ve been meaning to extend my homelab…

                              1.  

                                it’s one of those 400€ boards based on loongson 3a6000, they’re very nice machines, not the fastest but very sufficient (quadcores but with ipc comparable to intel 14th gen/amd zen4)