1. 73

  2. 22

    I mostly agree with this, but it’s an interesting culture shift. Back when I started to become involved, open source was largely maintained by poor people. Niche architectures were popular because a student could get a decent SPARC or Alpha machine that a university department was throwing away. It wasn’t as fast as a decent x86 machine but it was a lot faster than an x86 machine you could actually afford. The best-supported hardware was the hardware that was cheap for developers to buy.

    Over the last couple of decades, there’s been a shift towards people being paid to maintain open source, and a big split between the projects with big corporate-backed foundations behind them and the ones that are still maintained by volunteers. The Z-series experience from the article highlights this split. The perspective on this kind of thing is very different if you’re a volunteer who has been lovingly supporting Alpha for years versus someone paid by IBM to maintain Z-series support.

    Even among volunteers there’s sometimes a tension. I remember a big round of applause for Marcel at BSDCan some years ago when he agreed to drop support for Itanium. He’d been maintaining it almost single-handedly, and there were a bunch of things that were Itanium-only and could be cleaned up once Itanium was removed. It’s difficult to ask a volunteer to stop working on a thing because it’s requiring other volunteers to invest more effort.

    1. 9

      Speaking from experience: PowerPC is in a bit of an interesting state regarding that, with varying factions:

      • IBM selling Power systems to legacy-free Linux customers; they only support, and would prefer you use, 64-bit little-endian
      • IBM also having to support AIX and IBM i, which are 64-bit big-endian operating systems
      • Embedded network equipment on 32-bit or crippled 64-bit cores
      • Power Mac hobbyists with mostly 32-bit and some 64-bit equipment
      • Talos users with fast, owner-controlled POWER9 systems; mostly little-endian Linux users, some big-endian

      Sometimes drama happens because the “sell PPC Linux” people only care about ppc64le and don’t do any of the work (or worse, actively discontinue support) for 32-bit/BE/older cores (e.g. Go dropped all non-POWER8/9 support, which is like only supporting Haswell or newer), which pisses off not just hobbyists, but also the people who make deep embedded networking equipment or support AIX.

      1. 3

        I am duty bound to link to https://github.com/rust-lang/rust/issues/59932. Rust currently sets the wrong baseline for embedded network-equipment PowerPC systems.

        1. 1

          I resemble that remark, having a long history with old POWER, Power Macs, a personal POWER6 and of course several POWER9 systems. It certainly is harder to do development with all those variables, and the 4K/64K page-size schism in Linux is starting to get worse as well. I think it’s good discipline to be conscious of these differences, but I don’t dispute it takes up developer time. It certainly does for me, even with my modest side projects.

        2. 6

          The best-supported hardware was the hardware that was cheap for developers to buy.

          That fact contributes to the shift away from non-mainstream ISAs, I suppose? In this decade the hardware that is cheapest for developers to buy is the most mainstream stuff.

          Nobody buys and discards large amounts of POWER or SPARC or anything. If you are a freegan and want to acquire computers only out of dumpsters, what I believe you will find is going to be amd64, terrible Android phones and obsolete raspis. Maybe not even x86 - it’s on the cusp of being too old for organisations to be throwing it away in bulk. E.g. the Core 2 Duo was discontinued in 2012, so anyone throwing one away would be doing so on something like an 8-year depreciation cycle.

          Just doing a quick skim of eBay, the only cheap non-mainstream stuff I can see is old G4 Mac laptops going for about the price of a new raspi. Some of the PPC Macs are being priced like expensive antiques rather than cheap discards. It looks like raspis are about as fast anyway. https://forums.macrumors.com/threads/raspberry-pi-vs-power-mac-g5.2111057/

          It’s difficult to ask a volunteer to stop working on a thing because it’s requiring other volunteers to invest more effort.

          Ouuuch :/

          1. 3

            Nobody buys and discards large amounts of POWER or SPARC or anything.

            You can (because businesses do buy them), but they’ll be very heavyweight 4U boxes at minimum. The kind that nerds like me would buy and run. They’re cheapish (like, $200 if you get a good deal) for old boxes, but I doubt anyone in poverty is getting one.

            You’d also be surprised what orgs ewaste after a strict 5-10 year life cycle. I hear Haswell and adjacent business desktops/laptops are getting pretty damn cheap.

            1. 2

              I got burned trying to get Neat Second Hand Hardware once in the late oughts. Giant UltraSPARC workstation, $40. Turns out that the appropriate power supply, Fibre Channel hard drives, plus whatever the heck it took to connect a monitor to the thing… high hundreds of dollars. I think I ended up giving it to a friend who may or may not have strictly wanted it, still in non-working condition.

              1. 1

                They’re cheapish (like, $200 if you get a good deal) for old boxes

                I was seeing prices about 2x higher than that skimming eBay. I also see very few of them - it’s not like infiniband NICs where there are just stacks and stacks of them. Either way, that’s much more money than trash amd64.

                You’d also be surprised what orgs ewaste after a strict 5-10 year life cycle. I hear Haswell and adjacent business desktops/laptops are getting pretty damn cheap.

                Sure. I was bringing up Core2Duo as a worst-case for secondhand Intel parts that someone would probably throw in a dumpster.

                1. 2

                  Sure. I was bringing up Core2Duo as a worst-case for secondhand Intel parts that someone would probably throw in a dumpster.

                  Yeah, absolutely. It freaks me out, but a Core 2 Duo is about a decade and a half old. It’s about as old as a 486 was in 2006. These are gutter ewaste, but they’re still pretty useful and probably the real poverty baseline. Anything older is us nerds.

          2. 7

            There’s some value in supporting odd platforms, because it exercises the portability of programs, like the endianness issue mentioned in the post. I’m sad that the endian wars were won by the wrong endian.

            1. 5

              I’m way more happy about the fact that the endian wars are over. I agree it’s a little sad that it is LE that won, just because BE is easier to read when you see it in a hex dump.
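
              As a quick sketch of the hex-dump point, using Python’s struct module: the big-endian bytes of a constant read like the literal you wrote, while the little-endian bytes come out reversed.

              ```python
              import struct

              val = 0x12345678
              be = struct.pack(">I", val).hex()  # big-endian: "12345678", reads like the literal
              le = struct.pack("<I", val).hex()  # little-endian: "78563412", bytes reversed
              ```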

              1. 4

                Big-endian is easy for us only because we ended up with the weird legacy of using Arabic (right-to-left) numbers in Latin (left-to-right) text. Arabic numbers in Arabic text are least-significant-digit first. There are some tasks in computing that are easier on little-endian values and none that are easier on big-endian, so I’m very happy that LE won.

                If you want to know the low byte of a little-endian number, you read the first byte. If you want to know the top byte of a little-endian number, you need to know its width. The converse is true of a big-endian number, but if you want to know the top byte of any number and do anything useful with it then you generally do know its width because otherwise ‘top’ doesn’t mean anything meaningful.
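
                A small sketch of that asymmetry with Python’s struct module (the 32-bit width here is just for illustration):

                ```python
                import struct

                n = 0x0A0B0C0D
                le = struct.pack("<I", n)  # little-endian bytes: 0d 0c 0b 0a
                be = struct.pack(">I", n)  # big-endian bytes:    0a 0b 0c 0d

                low = le[0]  # 0x0D: the low byte is the first byte, at any width
                top = be[0]  # 0x0A: the first byte is the top byte only because we know n is 4 bytes wide

                # If the same low bytes were stored 16 bits wide, the little-endian
                # first byte is unchanged:
                low16 = struct.pack("<H", 0x0C0D)[0]  # still 0x0D
                ```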

                1. 2

                  Likewise, there are some fun bugs only big endian can expose, like accessing a field with the wrong size. On little endian it’s likely to work with small values, but BE would always break.
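
                  A sketch of that bug class in Python, simulating a single-byte read of what is really a 32-bit field:

                  ```python
                  import struct

                  value = 7  # a small value stored in a 32-bit field
                  le_field = struct.pack("<I", value)
                  be_field = struct.pack(">I", value)

                  # Buggy code that reads only one byte at the field's offset:
                  le_read = le_field[0]  # 7 -- "works" on little-endian for small values
                  be_read = be_field[0]  # 0 -- big-endian breaks immediately, exposing the bug
                  ```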

              2. 2

                  Apart from “network byte order” looking more intuitive to me at first sight, could you elaborate on why big endian is better than little endian? I’m genuinely curious (and hope this won’t escalate ;)).

                1. 10

                    My favorite property of big-endian is that lexicographically sorting encoded integers preserves the ordering of the numbers themselves. This can be useful in binary formats. And since a format has to be big-endian to get this property, a big-endian system doesn’t need to do any byte swapping before using those bytes as an integer.
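
                    A quick demonstration in Python, assuming fixed-width 32-bit keys (the property needs equal-width encodings):

                    ```python
                    import struct

                    nums = [3, 256, 1, 65536, 255]

                    # Big-endian keys: byte-wise (lexicographic) order matches numeric order
                    be_sorted = [struct.unpack(">I", k)[0]
                                 for k in sorted(struct.pack(">I", n) for n in nums)]
                    # -> [1, 3, 255, 256, 65536]

                    # Little-endian keys: byte-wise order scrambles the numbers
                    le_sorted = [struct.unpack("<I", k)[0]
                                 for k in sorted(struct.pack("<I", n) for n in nums)]
                    # -> [65536, 256, 1, 3, 255]
                    ```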

                  Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.

                  1. 5

                    Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.

                    A random fact I love: Arabic text is right-to-left, but writes its numbers with the same ordering of digits as Latin texts… so in Arabic, numbers are little-endian.

                    1. 3

                        Speaking of endianness: in Arabic, relationships are described from the end farthest from you to the closest. If you were to naively describe the husband of a second cousin, instead of saying “my mother’s cousin’s daughter’s husband” you would say “the husband of the daughter of the cousin of my mother”. It makes it insanely hard to hold the relationship in your head without a massive working memory, because you need to reverse it to actually grok it. I always wonder if that’s because I’m not the most fluent Arabic speaker or if it’s a problem for everyone who speaks it.

                      1. 2

                          My guess is that it is harder for native speakers as well, but they don’t notice it because they are used to it. A comparable case I can think of is a friend of mine, a native German speaker, who came to the States for a post-doc. He commented that after speaking English consistently for a while, he realized that German two-digit numbers are needlessly complicated: “three and twenty” is harder to keep in your head than “twenty-three” for the same reason.

                        1. 2

                            German has nothing on Danish.

                            95 is “fem og halvfems” - “five and half-fives”, where “halvfems” is short for “halvfemsindstyve”: “half-fifth” (4½) times twenty, i.e. 90.

                          It’s logical once you get the hang of it…

                          In Swedish it’s “nittiofem”.

                  2. 4

                    I wondered this often and figured everyone just did the wrong thing, because BE seems obviously superior. Just today I’ve been reading RISC-V: An Overview of the Instruction Set Architecture and noted this comment on endianness:

                    Notice that with a little endian architecture, the first byte in memory always goes into the same bits in the register, regardless of whether the instruction is moving a byte, halfword, or word. This can result in a simplification of the circuitry.

                    It’s the first time I’ve noticed something positive about LE!
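
                      That property can be sketched in Python: with a little-endian encoding, loads of different widths from the same address all see the same low-order bits (simulated here with struct; hardware gets the same effect in its register byte lanes).

                      ```python
                      import struct

                      buf = struct.pack("<I", 42)  # a 32-bit little-endian word: 2a 00 00 00

                      b = struct.unpack_from("<B", buf, 0)[0]  # byte load     -> 42
                      h = struct.unpack_from("<H", buf, 0)[0]  # halfword load -> 42
                      w = struct.unpack_from("<I", buf, 0)[0]  # word load     -> 42

                      # Big-endian, by contrast, puts a different byte first depending on width:
                      bb = struct.pack(">I", 42)[0]  # 0, not 42
                      ```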

                    1. 1

                      From what I hear, it mostly impacted smaller/older devices with small buses. The impact isn’t as big nowadays.

                    2. 3

                      Little-endian vs. big-endian has a good summary of the trade-offs.

                      1. 2

                          That was a bit tongue-in-cheek, so I don’t really want to restart the debate :)

                        1. 2

                          Whichever endianness you prefer, it is the wrong one. ;-)

                          Jokes aside, my understanding is that either endianness makes certain types of circuits/components/wire protocols easier and others harder. It’s just a matter of optimizing for the use case the speaker cares about more.

                        2. 1

                            Having debugged on big-endian for the longest time, I miss “sane” memory dumps now that I’m on little-endian; they take a bit more thought to parse.

                          But I started programming on the 6502, and little-endian clearly makes sense when you’re cascading operations 8 bits at a time. I had a little trouble transitioning to the big-endian 16-bit 9900 as a result.