1. 27
  1.  

  2. 27

    Deleting “the limited 2GB virtual address space” from the quote was quite the snip.

    1. 4

      Considering i686 has the same limitation (though it can be configured as 1, 2, or 3 GB depending on administrator preference), it didn’t seem that relevant. It certainly wasn’t relevant to mipsel, which has the same limitation (because it’s the same hardware, just in the other endianness) and was not dropped.

      1. 9

        Considering i686 has the same limitation

        Didn’t that sell many more units to many more people needing a general-purpose system like Debian? And aren’t those people continuing to use non-MIPS systems outside of black-box appliances like the routers they buy, including most FOSS-loving folks?

        Hardly any OS was ever “universal” in an absolute sense of the word. It typically means supporting whatever platforms have a lot of users. MIPS isn’t one of them for desktops or servers. It’s an also-ran in embedded, mainly used in lower-cost applications. It doesn’t surprise me that hardly anyone supports it now. The recent opening of MIPS might shift that in a slightly different direction, though.

    2. 19

      Their reasoning makes sense. They lack the manpower to provide a quality big-endian MIPS port. I used to run a big-endian PPC on Gentoo, and I often had to write and maintain custom patches for mainstream packages that were broken. It’s a lot of work. I eventually sold my machine and got x86.

      They still provide a little-endian port that covers the Pareto majority of MIPS hardware. I don’t see what the big issue is. IMO Debian was never “universal” in the sense that it ran on all hardware, because it never did; it isn’t NetBSD. It’s universal in the sense that it’s free software and available for everyone.

      1. 2

        it’s free software and available for everyone

        …for everyone to modify :)

        But yeah, in the end it’s all about manpower. I wonder if the sheer size of Debian hurts its long-term survivability vs. something small like NetBSD or OpenBSD?

        1. 2

          I wonder if the sheer size of Debian hurts its long-term survivability vs. something small like NetBSD or OpenBSD?

          In what ways do you see Debian’s size hurting its survivability in a way to which NetBSD and OpenBSD are immune? I can see an analogue in large dinosaurs vs. small mammals, but can’t exactly imagine it here.

          1. 2

            First we need to define what makes it “large” (if it even is). To me, large means supporting and maintaining lots of packages, offering many workflows for the same system-administration task (ifconfig vs. ip), being difficult to set up for development, etc.

            We know popularity contributes significantly to an OS’s survival, but look at 9front, NetBSD, OpenBSD and Haiku: they still live because they can be maintained by such a small group of people - and to me that’s powerful.

            At this point in time I believe NetBSD has the best chance of being around in 100 years. It has adapted to run on top of other OSs.

      2. 13

        I’ll note that the other thing the announcement says is “On the other hand the level of interest for this architecture is going down, and with it the human resources available for porting is going down”, and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

        I’d expect Debian would be fine keeping it if there were people willing to maintain it, but if there aren’t, then it’s better it gets dropped rather than kept around to decay further. Also, IIRC something similar has happened before; if there are in fact lurking people willing to maintain MIPS, then this might get reversed if volunteers come to light as a result of this announcement.

        1. 4

          “Might” being the key word; a whole group of us got together to try to “save” ppc64 and Debian wasn’t interested, more than likely because we weren’t already Debian developers. It’d be nice if the “ports” system were more open to external contributions. But MIPS isn’t even going to ports; it’s being removed.

          1. 3

            From my experience, if you aren’t already a Debian developer, you aren’t going to become one. My experience trying to contribute to it was absolutely miserable. I’ve heard that changed somewhat, but I don’t feel like trying anymore.

            1. 1

              Can you speak more to this issue? I’m curious as to whether it was a technical or social problem for you, or both.

              1. 3

                More of a social problem. I wanted to package a certain library. I filed an “intent to package” bug, made a package, and uploaded it to the mentors server as per the procedure. It got auto-removed from there after a couple of months of being ignored by the people who were supposed to review those submissions. Six months later, someone replied to the bug asking whether I was going to work on packaging it.

                I don’t know if my experience is uniquely bad, but I suspect it’s not. Not long ago I needed to rebuild a ppp package from Buster and found that it doesn’t build from their git source. It turned out there was a merge request against it that had sat unmerged for months; someone probably pulled it, built an official package, and forgot about it in the same fashion.

                Now three years later that package is in Debian, packaged by someone else.

                1. 2

                  I don’t know if my experience is uniquely bad, but I suspect it’s not.

                  Seems like you’re right: https://news.ycombinator.com/item?id=19354001

          2. 3

            …and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

            From the author’s github profile:

            Project maintainer of the Adélie Linux distro.

            1. 0

              Hmm, maybe. I’d bet against it. If Debian is going (reading between the lines) “maintaining modern software on this architecture is getting really hard” then I’d bet against anyone else adding support. Maybe I’ll lose that bet, in which case I owe someone here several beers, but I’ll be very surprised!

          3. 7

            Somehow OpenBSD, having a fraction of Linux’s manpower, supports quite a few platforms.

            Source: https://www.openbsd.org/plat.html

            1. 3

              All of them self-hosting too.

            2. 4

              as we move towards computers which can use whatever endianness is appropriate for the situation

              What are the appropriate situations when you want to run your whole system in big endian? It might be my lack of imagination, but other than compatibility with buggy C programs that assume big endian, I can’t think of any. It would be nice to leave this part of computer history behind, like 36-bit words and ones’ complement arithmetic.

              1. 12

                I’ve been running big endian workstations for years. It’s slightly faster at network processing, and it’s a whole lot easier to read coredumps and work with low-level structures. Now that modern POWER workstations exist, I no longer even have an x86 on my desk.

                Many formats are big-endian and that won’t change. TIFF, JPEG, ICC colour profiles, TCP, etc…

                Ideally, higher level languages would make this irrelevant to most people, so we could just run everything in BE and nobody would notice except the people doing system-level work where it’s relevant. Unfortunately, we haven’t gotten there yet. So it’s best for user freedom to let the user decide what suits their workload.

                1. 5

                  Modern x86 has a special instruction for byte-swapping moves, MOVBE: https://godbolt.org/z/juJ6VL

                  I disagree that low-level languages are a problem when it comes to this. Even higher-level languages need to deal with endianness when working with the formats you mentioned, so we’ll never be rid of it on that level. On the other hand, it’s possible to do it properly in low-level languages as well: don’t read u16/u32/u64 data directly, and avoid ntohl()/htonl(), etc. The C function in my link works on both big- and little-endian systems because it expresses the desired result without relying on the native endianness.

                  1. 3

                    I wish more people would know the proper ways to do that in C.

                    1. 5

                      Simple: “reading several bytes as if they were a 32-bit integer is implementation-defined (or even undefined if your read is not aligned). Now here’s the file format specification; go figure out a way to write a portable program that reads it. #ifdef is not allowed.”

                      From there, reading bytes one by one and shifting/adding them is pretty obvious.
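
                      As a sketch of that shift/add approach (hypothetical helper names), these reads behave identically on big- and little-endian hosts, and modern compilers typically collapse them into a single load or byte-swapping move:

                      ```c
                      #include <stdint.h>

                      /* Read a 32-bit big-endian value from a byte buffer.
                         No assumptions about alignment or native endianness. */
                      static uint32_t read_be32(const uint8_t *p)
                      {
                          return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
                               | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
                      }

                      /* Same idea for little-endian data. */
                      static uint32_t read_le32(const uint8_t *p)
                      {
                          return  (uint32_t)p[0]        | ((uint32_t)p[1] <<  8)
                               | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
                      }
                      ```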

                  2. 5

                    I’ve been running big endian workstations for years.

                    I’m curious about your setup. What machines are you running with MIPS? I guess I haven’t really looked into “alternative architectures” since the early 2000s, so I’m quite intrigued by what people are actually running these days.

                    1. 3

                      My internal and external routers are both MIPS BE.

                      My main workstation is a Raptor Talos II, POWER9 in BE mode. Bedroom PC is a G5.

                      My media computer is an old Mac mini G4. I haven’t felt the need to replace it.

                      1. 3

                        I suspected that you had a Talos machine. The routers make total sense, too. Thanks for taking the time to reply!

                        1. 1

                          My internal and external routers are both MIPS BE.

                          May I ask what the make and model codes are?

                          1. 3

                            Netgear WNR3500L.

                            1. 2

                              Thank you @awilfox!

                      2. 3

                        Many formats are big-endian and that won’t change. TIFF, JPEG, ICC colour profiles, TCP, etc…

                        All those standards use big-endian for various reasons related to hardware, down to the chip level.

                        • IP, for example, is used for routing based on prefixes, where you only look at the first few bits to decide which port a packet of data should exit through. In more than 99.9% of cases, it simply does not make sense to look at the low end of the numbers.
                        • TIFF, JPEG and ICC colour profiles all deal with pixels and some form of light sensor connected to an analog-to-digital converter circuit. Such a circuit is essentially a string of resistors interlaced with digital comparators that output 1 if the input voltage is above a certain threshold. If the first half of all comparators return 1, you switch on the MSB; if not, you switch it off. The MSB (which comes first in big-endian notation) denotes 50% of the input signal’s strength and is therefore more important to “get right” than the lower bits.

                        So why is little endian winning on modern CPUs? Well, that’s because we have different concerns when we are running computer programs, in which a pattern like this

                        for(int i=0; i<length; i++) {}
                        

                        is common.

                        It would make no sense to start working on numbers from the high end, because that end almost never changes. The low end, however, changes all the time. This makes it easier to put the low-end bytes up front and only touch the higher bytes when we have overflowed on a low-end byte.

                        So it’s a story about: Different concerns -> different hardware.
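
                        A quick way to see that layout difference (just a sketch; `host_is_little_endian` is a made-up helper): on a little-endian machine, the low-order byte that “changes all the time” sits at the lowest address, so narrow accesses need no offset arithmetic:

                        ```c
                        #include <stdint.h>
                        #include <string.h>

                        /* Probe the host's byte order: store a known 32-bit value
                           and check which byte lands at the lowest address. */
                        static int host_is_little_endian(void)
                        {
                            uint32_t probe = 1;
                            uint8_t first;
                            memcpy(&first, &probe, 1);   /* byte at the lowest address */
                            return first == 1;           /* low byte first => little endian */
                        }
                        ```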

                        As for Debian: they must have looked through the results of their package popularity contest and judged that the amount of work required to maintain the MIPS architecture cannot be justified by the small number of users that use it.

                        This is also why I always opt for yes when I’m asked to vote in the popcon. Because they can’t see you if you don’t vote!

                        1. 2

                          Ideally, higher level languages would make this irrelevant to most people

                          See Erlang binary patterns. It provides what you want.

                          1. 1

                            It’s slightly faster at network processing

                            New protocols these days tend to have a little endian wire format. TCP/IP is still big endian, but whatever lies on top of it might not be. Maybe that explains the rise of dual endian machines: little endian has won, but some support for big endian still comes in handy.

                            1. 1

                              Yes and no. The Zcash algorithm (used by Ethereum et al.) always serialises numbers to BE. But some LE protocols and formats exist. I think the real winner is not LE, nor BE, but systems that let you use both.

                              1. 4

                                And everything designed by DJB is little endian: Salsa/ChaCha, Poly1305, Curve25519… And then there’s BLAKE/BLAKE2, Argon2, and more. I mean, the user hardly cares about the endianness of those primitives (it’s mostly about mangling bytes), but their underlying structure is clearly little endian. Older stuff like SHA-2 is still big endian, though.

                                Now sure, we still see some big endian stuff. The so-called “network byte order” is far from dead. Hence big endian support in otherwise little endian systems. But I think it is fair to say that big endian by default is mostly extinct by now. New processors are little endian first; they just have additional support for big endian formats.

                                And if you were to design a highly constrained microcontroller now (one that must not cost more than a few cents), and your instruction set is not big enough to support both endiannesses efficiently, which endianness would you choose? Personally, I would think very hard before settling on big endian.

                        2. 2

                          Full disclosure: I haven’t recursively crawled the site, grepping it for the word “universal”.

                          After a few minutes (around ten) of looking around the site, I can’t find anywhere on the site where Debian officially quantifies or defines the word “universal”.

                          If it’s CPU architecture support, I don’t think any Linux distro comes close to NetBSD. Granted, I kinda stopped paying attention to Linux over a decade ago.

                          1. 1

                            This is weird. If MIPS can support any endianness, then Debian can just support LE MIPS. In fact, the announcement says they are doing exactly that. Since, as the article itself says, most chips are bi-endian anyway, then let’s just make all our software LE.

                            “Whichever endianness is appropriate for the situation” is “whatever endianness everyone already uses”. If every chip in the world is little-endian, that’s fine. That’s GREAT. I don’t care. Let’s move on.

                            1. 1

                              Since Adélie and Void both support big-endian PowerPC, I am hopeful that both distros will work to support MIPS as well.

                              As far as I know, Void Linux does not support PowerPC.

                              1. 2

                                There have been parts of the port merged upstream; it lives at void-ppc for now.