Threads for calvin

    1. 32

      From the looks of it, Phoronix had to nuke the comment section due to the harassment and hate going on there. No wonder Lina feels unsafe.

      1. 10

        The comments there are frequently pretty toxic. The site seems to relentlessly keep tabs on various updates to every project known, but the comment section is to be avoided.

        1. 20

          The comments there are frequently pretty toxic. The site seems to relentlessly keep tabs on various updates to every project known, but the comment section is to be avoided.

          It absolutely is, I can confirm.

          Unlike most online journos, I actively participate in the comments on my stories on The Register, and I have been thanked for this a lot.

          A couple of years ago I joined Phoronix’s forums – I think it was to point out a mistake. A few people messaged me to welcome me, or to express surprise I wasn’t already a member.

          But… OMG, it is a cesspit. I have been on the internet since 1985, and I have been on some of the nastiest sites it’s had to offer, but sheesh: the Phoronix forums are nasty. The worst of the Linux and FOSS world, gathered to insult, mock, and denigrate one another.

          1. 6

            Yes, Phoronix is full of people who know just slightly more about computers than the average human, read a lot of articles online, maybe know how to plug together an ATX motherboard and power supply, and on a good day a PCIe video card, but generally don’t actually know anything more about programming, ISAs, or µarch than what they read in online articles and other people’s comments. But they are very confident they are experts.

          2. 7

            My concern is that, since the comment section is always toxic, how bad must it have gotten for Phoronix to completely nuke it? (It’s not even just behind a login wall; it’s gone.)

            1. 8

              Edit: Seems to be removed now; comment below was accurate at time of posting

              Still there, just limited in visibility to folks with 3+ year accounts or premium memberships. And it’s just absolutely boneheaded vitriol. Certainly an approach to moderation.

              1. 3

                Looks like it’s fully gone now? On the forums “home page”, the “Sensitive Topics” section shows 0 topics and 0 posts. The article’s “Add a comment” link is back (instead of “x comments”) and goes nowhere.

                What’s crazy is how terrible the moderation there has gotten: even the first rule (“Absolutely no flaming, name calling, sexual harassment, or other personal attacks will be permitted.”) is routinely broken.

                1.  

                  Yup, looks like it’s truly gone now (or at least somewhere I also can’t see).

                  Moderation is a truly thankless job, so I understand how things get that way, but… Oof.

              2. 4

                I’d guess something that could carry legal liability bad. There is a moderation thread in the forums that has immediately turned negative too.

                I’m discontinuing all access to it, even the front page.

            2. 7

              A lot of harassment of these devs is coming from Kiwi Farms for some reason, but Phoronix is surprising (to someone who’s never used the comments on that site – the articles seem fine).

              1. 6

                Considering how vile the Phoronix comments section normally is, if Larabel had to delete and lock it…

                1. 3

                  she specifically asked people not to speculate…

                  1. 8

                    I’m not speculating. I’m observing that the response in parts of the internet is vitriolic enough that I think Lina would have left anyway.

                    1.  

                      she doesn’t want you to speculate or make assumptions about what made her leave. I don’t know why you can’t respect that.

                      1. 7

                        Again, I’m not, so I’m not sure what you’re on about.

                        1.  

                          you absolutely are. you can’t speculate about why she would have felt unsafe enough to leave, then claim that it’s completely distinct from speculating about why she actually did feel unsafe enough to leave.

                          it’s a simple request. you can refuse to honor it but then don’t play linguistic games to deny it.

                2. 20

                  Great to see this push to open source. Germany also had a recent push for open source software [0], and they have a fund for open source software [1].

                  [0] https://www.openproject.org/blog/opendesk-1-0/
                  [1] https://www.sovereign.tech/tech

                  1. 25

                    Let’s hope it’s not killed like LiMux, the push in Munich to move the entire public administration to Linux.

                    Sure it was a lot of work, but it still saved a lot of money.

                    Major coincidences: Steve Ballmer visited Munich’s mayor, LiMux was killed by the city council, and the city was switched back to Windows (at much higher cost). Microsoft shortly thereafter moved its German headquarters from Cologne to Munich. Peculiar coincidences. I think Microsoft was scared LiMux would become a positive example.

                    1. 3

                      I’ve never actually heard a straight story about LiMux that didn’t come from FLOSS ideologues or politicians. I wonder what it was like for people in the trenches…

                      1. 9

                        You mean the people actually working in the public administration? I can tell you about it, because I also work in the civil service at a university. Most people working in a city administration are actually employed for life, and they don’t have to do anything beyond what was agreed upon when they were hired (no joke, this is unique to Germany, and they literally cannot be fired unless they really try). This is why the system in general is very averse to change, and this is the usual company culture in these areas.

                        For this reason, many ‘soldiers in the trenches’ heavily opposed LiMux – not because it was non-functioning, but simply because it wasn’t the way they were used to, despite obvious long-term benefits. It’s the old tale we technical folk all know: we know that macOS or Linux might be better for a relative or friend, but never make the switch because we know the person would complain about it all day and not be bothered to spend five minutes getting used to it.

                        Am I a FLOSS ideologue? Yes. But it wasn’t the ideologues that let LiMux fail. It was corrupt politicians (indirect bribery through increased corporate tax revenue) and stubborn civil servants. A small factor was some bad actors among the sub-contractors, but that point is brought up very often and doesn’t hold much weight.

                        I actually talked first-hand to some people involved in LiMux (on the contractors’ side), and the biggest problem was the constant barrage of tickets and repeated, stupid questions from the users, despite proper and qualified training. It was just a lack of initiative and of extrinsic/intrinsic motivation.

                        To shed a positive light on this, many South American countries show that projects like LiMux can work. It’s just that Microsoft is too ingrained in our minds and machines.

                      2. 2

                        Wow, I had not heard of those coincidences. I remember back in the day I was so frustrated that they went back to Windows. I had hoped this example would inspire the Dutch government to do the same. But it makes a lot of sense now!

                        1. 6

                          There’s a documentary called “The Microsoft Dilemma – Europe as a Software Colony” (available on YouTube in full), which documents some of this.

                        2. 1

                          Let’s hope it’s not killed like LiMux, the push in Munich to move the entire public administration to Linux.

                          With the incoming administration I am not too positive. Their funding is coming from the ministry of economy and that will probably go to the party with the least interest in funding OSS. I hope I am wrong.

                      3. 32

                        Losing access to lobste.rs. The UK will be GeoIP blocked from Sunday, so so long and thanks for all the fish!

                        1. 7

                          That’s a terrible shame. I’ve enjoyed very many posts by you.

                          1. 8

                            My ActivityPub handle is in my profile, I’ll probably keep posting comments in reply to the lobste.rs ActivityPub reflector…

                          2. 3

                            Well, I’m happy you shared your knowledge as long as it lasted. Thanks, and hopefully you get fed up soon so that you’ll get a VPN. (Or even better, the ones in charge recover some sense, but let’s not have any illusions.)

                              1. 1

                                Looks like I’m not blocked and the red banner saying I will be has gone, so fingers crossed…

                              2. 1

                                That is so awful, and I hope that your politicians will wise up at some point in the not-too-distant future.

                                1. 0

                                  Terrible! I read the related post on Lobsters a while back. What is the UK gov’s explanation for such ‘aggressive’ laws?

                                  1. 5

                                    No idea. The last government passed it and, when they lost the election, the new government decided to keep it. We have two major parties: malicious and incompetent. Malicious got voted out and now we have incompetent in power again.

                                2. 2

                                  People sometimes object to the entire 1990s styling, and volunteer to design us a complete set of replacements in a different style. We’ve never liked any of them enough to adopt them. I think that’s probably because the 1990s styling is part of what makes PuTTY what it is – “reassuringly old-fashioned”.

                                  That’s the kind of attitude that speaks to my soul. I can open PuTTY in 2025 and there won’t be any degradation of functionality from the version I used 10 years ago, and the UI elements are where I’m used to finding them. With so many products today, you open them one day and an important feature you used to rely on is either gone, moved, or has been “migrated” to a half-baked replacement that doesn’t solve your problem anymore. No, PuTTY’s UI is neither pretty nor intuitive, but it’s incredibly functional once you know what you are doing.

                                  Never change, PuTTY.

                                  1. 2

                                    Well, I do wish that PuTTY’s config window would open on the same monitor… or be resizable. But that’s incremental improvement.

                                  2. 17

                                    I was not aware of this distro! It looks quite appealing. But, in case anyone else was confused like me, it has nothing to do with the Arch-based https://chimeraos.org

                                    It is frustrating how slow RISC-V hardware has been to attain performance parity with commercial incumbent architectures. Given the economics of chip-making, I don’t really expect the lack of a reasonably performant build server candidate board to change any time soon. Rather, I expect RISC-V to continue nibbling away at the “embedded” low end.

                                    1. 17

                                      It’s been hit by

                                      • COVID

                                      • Intel going slowly bankrupt, leading to what would have been a very interesting chip two years ago getting cancelled, with the same cores only now hitting the market in an SoC from a Chinese company instead, quite probably with worse DDR and other IP and on an older process node.

                                      • a very interesting chip that was probably going to be out about now, and probably leapfrogging the RK3588 and Pi 5 chip (whatever that’s called), getting cancelled due to US sanctions.

                                      • Android development is gated by the RVA23 spec, which was just published in December, so actual chips will be a year or three away. The V spec and some others were finished two years later than had been hoped for.

                                      • most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

                                      • there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can compete with recent-generation x86 and Apple, which has been consistently looking for a few years now to be around 2027.

                                      Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!

                                      1. 9

                                        most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

                                        That market is never specifically catered to by SoC manufacturers. You are getting chips intended for Android TV boxes or industrial automation. Automotive investment right now is at rock bottom, so that’s not siphoning away any RISC-V money.

                                        there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can complete with recent generation x86 and Apple

                                        It is strange to me that you think RISC-V phones and PCs would ever be interesting to a business even if performance were good. RISC-V as an ecosystem currently has zero advantages. Even if you make your own SoC, you’re going to be licensing a core design from someone if you want anything that can compete with Arm’s portfolio of freebies they just give you at the base tier. Anyone large enough to benefit from a patent-free, license-free ISA to develop their own high-performance cores is so large they can just negotiate better deals with Arm.

                                        There’s simply no business case for Cortex-A-tier RISC-V, not even Qualcomm could be bothered to invest in licensing some SiFive cores or making their own of that performance tier after having a lawsuit with Arm over ISA license fees.

                                        Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!

                                        Are you saying Chimera-Linux should simply keep doing RISC-V builds to anticipate hardware that doesn’t exist, just in case any happens to pop up out of nowhere one day? What’s their benefit in doing that?

                                        1. 6

                                          …that and the licensing costs at the Cortex-A level aren’t as significant as things like software ecosystem. RISC-V’s advantages make a lot of sense at MCU level, where deep customization and royalty free are huge benefits. When you’re buying A53s to run Android or X1s to run Windows, less so.

                                          1. 6

                                            most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

                                            That market is never specifically catered to by SoC manufacturers. You are getting chips intended for Android TV boxes or industrial automation

                                            That was certainly the case with the original Raspberry Pi (a warehouse full of unsold set-top box SoCs) and Odroid (Galaxy S5 sold worse than expected, resulting in a lot of spare Exynos 5422 SoCs, which Hardkernel used for Odroid XU3 and XU4) but in recent years Broadcom seem to have been developing SoCs specifically for Raspberry Pi.

                                            RISC-V has not yet been big enough in set-top boxes or mobile phones to generate surplus chips or amortisation of costs there, but you can observe that all the Chinese RISC-V SoC designs (and they all are Chinese, even the ones with SiFive cores) have multiple camera inputs and NPUs, which to me suggests quite strongly that millions and millions of them are being used by the Chinese state.

                                            It is strange to me that you think RISC-V phones and PCs would ever be interesting to a business if performance was good? RISC-V as an ecosystem currently has zero advantages. Even if you make your own SoC you’re going to be licensing a core design from someone if you want anything that’ll be able to compete with Arm’s portfolio of freebies they just give you at the base tier. Anyone large enough to benefit from a patent-free license-free ISA to develop their own high-performance cores from is so large they can just negotiate better deals with Arm.

                                            How can you negotiate a better deal with Arm if you have no realistic alternative if/when they simply say “The price is the price, and it’s going up 20% in five minutes”?

                                            Arm’s prices, the stuff they throw in for free, their newfound willingness to allow custom instructions on certain cores …. these have all happened since RISC-V started to appear.

                                            RISC-V existing benefits Arm customers hugely – until Arm decides their remaining customers are so locked-in that they won’t leave no matter what the prices are.

                                            There are in fact customers big enough (one assumes) to negotiate better deals with Arm who are moving to RISC-V for applications processors. Samsung and LG are two obvious pretty large examples, for their TVs.

                                            There’s simply no business case for Cortex-A-tier RISC-V, not even Qualcomm could be bothered to invest in licensing some SiFive cores or making their own of that performance tier after having a lawsuit with Arm over ISA license fees.

                                            Qualcomm have pretty obviously been developing their own high performance RISC-V cores based on Nuvia, or they wouldn’t be so active in RISC-V mailing lists and trying to persuade people to modify RISC-V to use fixed-width 4 byte instructions only.

                                            Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!

                                            Are you saying Chimera-Linux should simply keep doing RISC-V builds to anticipate hardware that doesn’t exist, just in case any happens to pop up out of nowhere one day? What’s their benefit in doing that?

                                            I wouldn’t presume to tell them what they “should” do. They can do what they want. However their stated rationalisations don’t line up with the current facts. In particular, RISC-V native building has been better price-performance (and energy-performance) than running an emulator on x86 ever since the VisionFive 2 came out in February 2023 at 10% of the price of the slightly worse performance HiFive Unmatched that was released 21 months earlier.

                                            You still get higher per-core performance with emulation on x86, but 1.5 - 2 GHz dual-issue or mild OoO RISC-V cores cost much less than 4+ GHz x86 cores. The task of building thousands of Linux packages is embarrassingly parallel – you can throw pretty much as many cores at it as you want if they are cheap enough.

                                            My own tests show a Linux kernel build being only three times quicker on a $1500 24 core i9 machine (with docker/qemu) than on a $100 4 core VisionFive 2. You’re clearly better off getting three VisionFive 2s. That wasn’t the case 18 months earlier (mid 2021) when you would need three $650 HiFive Unmatched instead.
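
                                            The price/performance arithmetic above can be made explicit – a sketch using only the figures quoted in this comment (none independently measured):

                                            ```rust
                                            fn main() {
                                                // Figures quoted above: a $1500 24-core i9 building under
                                                // docker/qemu is ~3x faster than one $100 4-core
                                                // VisionFive 2 building natively.
                                                let i9_cost = 1500.0_f64;
                                                let vf2_cost = 100.0_f64;
                                                let speed_ratio = 3.0_f64; // i9 throughput vs. one VF2

                                                // Package builds are embarrassingly parallel, so board
                                                // throughput adds up: three boards match the i9.
                                                let boards = speed_ratio.ceil() as u32;
                                                assert_eq!(boards, 3);
                                                // $300 of boards vs. $1500 of x86 for the same throughput.
                                                assert!(f64::from(boards) * vf2_cost < i9_cost);
                                            }
                                            ```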

                                            Price matters, not only performance.

                                            Looking historically, mainframes, minicomputers, and microcomputers all went through a phase where performance of the base model stayed the same for many years, but cost went down.

                                            1. 9

                                              look, i really don’t care how many subpar SBCs i can buy at a price - if a $2k board of sufficient qualities was available, i would have gone for it and wouldn’t have said a word

                                              spreading the load onto many slow boards is just a general pain in the ass:

                                              • it will not cover the “large” packages (e.g. llvm, browsers, etc.) because those have to build on a single worker regardless
                                              • it means a lot of extra pain figuring out things such as how to spread a build batch across workers so that they don’t step on each other’s toes and account for dependencies within the batch; you need to figure out shared filesystem for them, ensure there is working locking so they can update the index safely, and a bunch of other things

                                              for any other arch i am interested in, i can have a machine that doesn’t subject me to any of that; for riscv i can’t, and it does not matter to me how much it costs, so there is that, and only that

                                              1. 1

                                                C’mon, it’s not all that hard. If you wanted to, then you would. Ubuntu and Fedora build on farms of cheap boards. It really doesn’t matter if the few very large projects tie up a board for a long time, as long as the board has enough RAM, which seems to mean 16 GB at the moment.

                                                Fedora uses Koji to manage the build farm, Ubuntu uses Launchpad. Both are open source.

                                                I’m sorry to hear that there are reliability problems with the Pioneer – which I can’t comment on because I don’t have one and haven’t been following the experiences of those who do. I’ve got pretty much every other RISC-V board that exists (or at least one for each SoC … there are often a few different boards with the same SoC), but the Pioneer is out of my impulse buying range. I’ve made a couple of offers on used ones, but so far have been outbid by people who want it more than I do.

                                                If the Pioneer is reliable then it’s ideal with 64 cores and 128 GB RAM. The price is absolutely fine for what it is. Yes, less performance than an x86 for the same price, but a lot more performance than an x86 the same price running RISC-V in emulation.

                                                1. 8

                                                  Please try to be less condescending/demanding, it’s not a good look. You are lecturing someone about her hobby project, “not wanting to” deal with something is a completely fine way to decide what to do actually.

                                                  1. 3

                                                    Is it a hobby? I have no idea. I’ve never heard of this distro, but then there are dozens I’ve never heard of.

                                                    “Not wanting to” is a perfectly fine reason by me. Nothing more is needed. I just don’t like then using as justifications things that are incorrect, or out of date, or just artificial, arbitrary constraints such as “it has to all be on one machine”.

                                                    Don’t want to. Have other priorities. Waiting for hardware that normal users will want to daily drive. All perfectly good reasons by me.

                                                    1. 5

                                                      It’s not an artificial constraint: both Koji and Launchpad are massive projects run by either a corporation or a team of people with large backing. It’s unreasonable to expect someone to run big infrastructure and a server farm to build software because the hardware is terrible and can’t be used as a single host.

                                                      1. 2

                                                        They are free open source software that anyone can download and use.

                                                        A dozen VisionFive 2’s would cost $1200 and fit in the same volume as a standard PC tower.

                                                        Or, Sipeed has been selling the Lichee Cluster, with 7 LPi4As plugged into a small motherboard, for around $1250 for boards with 16 GB RAM and 128 GB eMMC per board (so a total of 28 1.85 GHz OoO cores and 112 GB RAM), or in a case with power supply and fan for $1350. That’s more expensive, but pre-made: just install software and go.

                                                        1. 3

                                                          Just install software and go.

                                                          1. 2

                                                            Exactly.

                                                            It doesn’t even have to be all that hard. In the old days before multi-core computers we all used distcc to run the build scripting stuff on one machine and the compiles on as many others as you could find on the network. Takes minutes to set up.

                                                            You could even use your big x86 and Arm machines to run RISC-V cross compiles.
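
                                                            A rough sketch of that distcc arrangement (host names, core counts, and toolchain names here are all hypothetical; it assumes distccd and a riscv64 cross-gcc are already installed on the helper machines):

                                                            ```shell
                                                            # Run on the RISC-V board that drives the build.
                                                            # "x86-box" and "arm-box" are made-up helper hosts
                                                            # running distccd with riscv64-linux-gnu-gcc in PATH.
                                                            export DISTCC_HOSTS="localhost/4 x86-box/24 arm-box/8"

                                                            # Route compiles through distcc; configure steps and
                                                            # build-time binaries still run locally on the board.
                                                            export CC="distcc riscv64-linux-gnu-gcc"
                                                            make -j36   # roughly the sum of local + remote cores
                                                            ```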

                                                            If they simply don’t WANT to do RISC-V, that’s fine.

                                                            But if they say they WANT to but X, Y, and Z are problems preventing it …. I’m gonna offer solutions to X, Y and Z. That’s what an engineer does.

                                                            1. 3

                                                              I recommend you read the threads here; it has been explained why what you are proposing (cross-compilation) is not fine and will not work. What you have offered are not solutions – setting up a compile-server farm is not a solution. You seem not to grasp that it’s not “just install software and go”, because someone has to install it, configure it, and maintain it, and it’s not just software. Adding a powerful host or two to a distro’s build infra is much different from “go buy a dozen RISC-V machines and make a build-farm cluster out of them”.

                                                              And please do not bring up “the old days”, those were terrible, lots of things were done wrong and software/hardware landscape is much different these days.

                                                              1. 2

                                                                Don’t worry, I perfectly understand the difficulties of cross-compilation. I do that kind of thing (and avoid it) every day.

                                                                There are a lot of things that need to be run in the target environment, including scripts, and also binaries that are built as part of the build process. And in some builds some of those binaries also need to be built for and run in the host environment.

                                                                I’ve been doing this stuff for decades.

                                                                Running scripts in the target environment (whether actual hardware or emulation), preprocessing source, and sending the preprocessed source to another machine – whether the same architecture or a cross-compiler on a faster machine – is a perfectly viable and low-configuration way to do things.

                                          2. 3

                                            5 years ago, the claim from you and other RISC-V fans was that higher performance (by which I mean, faster than any raspberry pi but still slower than AMD and Intel chips) designs were just a few years away. 5 years later, they’re still a few years away? Was that just that one chip that got cancelled due to sanctions?

                                            most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.

                                            What are these designs? With how expensive many RISC-V SBCs are, if expensive high performance automotive or aerospace chips exist, I’m surprised no one has made an SBC around one of them.

                                            1. 4

                                              There’s no way I would have said something like that five years ago, when the initial spec was just barely ratified and there were three microcontrollers (FE310, GD32VF103, K210) and nothing Linux-capable at all on the market (the 2018 limited-run $999 HiFive Unleashed had been and gone … and to make a usable SBC from it required a custom $2000 FPGA board from Microchip or a standard $3500 FPGA board from Digilent, so it had a very limited potential market even if they’d made more than 500 of them).

                                              When the P670 was announced in November 2022 – two years and four months ago – I would quite likely have predicted it being around three years for someone to license it for an SoC and get that made and on a board you could buy. At that time it would certainly beat a Pi 4, and in fact we now know it would be very competitive and probably better than Pi 5.

                                              There is of course never any guarantee that someone will license any given core, or use it to make an SoC suitable for SBCs. Where, for example, are the SBCs using the Arm A57, A73 and A75?

                                              Even the Arm world struggles to have more than one or maybe two (if any at all) usable SBC SoCs for a given core.

                                              So, yes, the SG2380 was announced using the P670, and it has been caught up in politics, not technical issues. It probably could realistically have been out right about now, but even if it was late 2025 it would not be late by normal core -> SoC -> SBC schedules. A53, A72, and A76 all took around four years from announcement of the core to the Pi 3, Pi 4, Pi 5.

                                            2. 1

                                              I’d be pleasantly surprised to see a commercially viable RV board with performance comparable to RK3588 or the Pi 5’s BCM2712 (clocked at 2.4 GHz) by then. But I’m in no rush, myself. Think how long it took ARM to challenge the Intel / AMD x86 duopoly in the desktop / server space.

                                              Linux is itself a legacy system, and to me there’s something a little sad about it colonizing a nice greenfield architecture, the first truly open hardware the world has seen outside academia. If RV builds strength in the low-end and non-Western markets, maybe it will drive some OS innovation too.

                                          3. 1

                                            I immediately wondered how much of this research on IPE in Xerox inspired ACME, and I would say a lot, given that Cedar and Xerox are cited in this ACME paper.

                                            1. 2

                                              Acme is much more “inspired” by Oberon instead, which itself was inspired by Cedar.

                                              1. 2

                                                Indeed. The paper that @denz linked has a “History and motivation” section that describes the precedent set by Cedar and Oberon and how they inspired Acme.

                                            2. 22

                                              I like Rust a lot, and it deserves major credit for being the first thing to move the Pareto optimum in a while, but there are still glaring holes for low-level stuff. No stable allocator API, no placement new, no non-hacky way to allocate things on the heap without them going through the stack first, many traits still only work for tuples up to length 12, almost no operations are permitted on const generics, and the list goes on. I’d love to stop using C++, but this stuff is making it hard.

                                              1. 11

                                                No stable allocator API

                                                Sadly, really low-level stuff requires nightly today. It’s a shame. I think one alternative would be taking a page from Erlang and publishing versioned crates that have access to nightly features that will eventually land, but that have no finalized API. Only project crates could have that access. It’d be a middle step between what we have now and std being distributed over crates.io.

                                                For this, only changing the global allocator is stable; per-container allocators are nightly, and I would suspect will continue to be for the short term.
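
                                                To illustrate the stable half of that, here is a minimal sketch of swapping the global allocator: a wrapper around the system allocator that counts allocations. The `CountingAlloc` type and `ALLOCS` counter are invented for this example; the per-container `Allocator` trait that this can't express is the nightly part.

                                                ```rust
                                                use std::alloc::{GlobalAlloc, Layout, System};
                                                use std::sync::atomic::{AtomicUsize, Ordering};

                                                // Counts every allocation made through the global allocator.
                                                static ALLOCS: AtomicUsize = AtomicUsize::new(0);

                                                struct CountingAlloc;

                                                unsafe impl GlobalAlloc for CountingAlloc {
                                                    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
                                                        ALLOCS.fetch_add(1, Ordering::Relaxed);
                                                        // Delegate the actual allocation to the system allocator.
                                                        unsafe { System.alloc(layout) }
                                                    }
                                                    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
                                                        unsafe { System.dealloc(ptr, layout) }
                                                    }
                                                }

                                                // This attribute is the entire stable API surface: one global swap.
                                                #[global_allocator]
                                                static GLOBAL: CountingAlloc = CountingAlloc;

                                                fn main() {
                                                    let before = ALLOCS.load(Ordering::Relaxed);
                                                    let v = vec![0u8; 64]; // heap allocation routed through CountingAlloc
                                                    assert!(ALLOCS.load(Ordering::Relaxed) > before);
                                                    drop(v);
                                                }
                                                ```

                                                Note there is exactly one such swap per program; choosing a different allocator per `Vec` or `Box` is what still needs nightly.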

                                                no placement new, no not hacky way to allocate things on the heap without them going on the stack first

                                                Aren’t these the same thing? :)

                                                As alluded to in a couple of replies (including your own), you can get the same behavior using MaybeUninit, and for genericity you need macros. Between the two, you can do quite a bit when it comes to specific projects. But a language-level solution is needed. The last effort found some tricky technical issues, sources of UB that weren’t anticipated (which I don’t recall now, and the author of that effort doesn’t either because it’s been so long 😅).

                                                many traits still only work for tuples up to length 12

                                                For a second I was gonna ask you to elaborate, but then I realized you said tuples and not arrays. For arrays, impls are now generic over length, but as you say tuples are not (because we have no concept of variadics). This means that the only tool available to API authors, including std, is using macros to enumerate every impl.
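
                                                A minimal sketch of that macro pattern, with a made-up `Arity` trait and only three arities for brevity (std does the same dance up to length 12):

                                                ```rust
                                                trait Arity {
                                                    fn arity(&self) -> usize;
                                                }

                                                // One macro invocation per tuple length: the only way to "iterate"
                                                // over arities without variadic generics.
                                                macro_rules! impl_arity {
                                                    ($len:expr => $($T:ident),+) => {
                                                        impl<$($T),+> Arity for ($($T,)+) {
                                                            fn arity(&self) -> usize { $len }
                                                        }
                                                    };
                                                }

                                                impl_arity!(1 => A);
                                                impl_arity!(2 => A, B);
                                                impl_arity!(3 => A, B, C);
                                                // ...and so on, by hand, up to whatever limit you pick.

                                                fn main() {
                                                    assert_eq!((42u8,).arity(), 1);
                                                    assert_eq!((1, "two").arity(), 2);
                                                    assert_eq!((1, 2.0, 'c').arity(), 3);
                                                }
                                                ```

                                                Tuples of length 13 and up simply don't implement the trait, which is exactly the cliff users hit with std's own tuple impls.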

                                                almost no operations are permitted on const generics

                                                This is being actively worked on. The reason progress seems slow is that the resulting behavior is being fully specified, the implementation needs to be sound, and there can be no divergence between compile time and run time. (I’m not sure what the conclusion was with floating point; I think that might end up being the only possible source of divergence between one and the other, unless floating-point operations are done entirely in software on some toolchain targets.)
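
                                                A small illustration of where the line currently falls (the `first_half` helper is made up for the example): using a const parameter `N` as a plain value in expressions is stable, but computing new *types* from it, such as a `[i32; N / 2]` return type, still requires nightly's `generic_const_exprs`.

                                                ```rust
                                                // Stable: N may be read as a value inside the function body.
                                                // Not stable: writing `-> [i32; N / 2]` in the signature.
                                                fn first_half<const N: usize>(a: [i32; N]) -> Vec<i32> {
                                                    a[..N / 2].to_vec()
                                                }

                                                fn main() {
                                                    assert_eq!(first_half([1, 2, 3, 4]), vec![1, 2]);
                                                    assert_eq!(first_half([7]), Vec::<i32>::new());
                                                }
                                                ```

                                                Returning a `Vec` here is the workaround for not being able to name the half-sized array type on stable.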

                                                1. 1

                                                  macros + MaybeUninit aren’t enough for generics to really work; the problem is that macros don’t have access to type information.

                                                2. 5

                                                  Agreed. You don’t need C++-style CTFE often, but when you do, you really want it. No, typenum is not a good substitute (although typenum’s download numbers show how much people want one).

                                                  1. 2

                                                    no placement new

                                                    can you explain what do you mean by this?

                                                    1. 7

                                                      can you explain what do you mean by this?

                                                      In C++ there are two forms of the new operator: the common one everyone is familiar with, new T(arg), which allocates memory on the heap and runs the constructor, and the less commonly known placement new, new(p) T(arg), which only runs the constructor, using the memory already available at p. It’s essential for implementing things like vector without unnecessary copies or moves. Since C++11, vector<T> has had emplace_back, which takes the constructor arguments needed by T and creates a T “in place” right where it’s going to live in the vector, without creating it on the stack first and moving or copying it in, which is all Rust can do. In Rust you can’t express emplace_back because 1) there’s no placement new and 2) there are no variadic generics, so you can’t even write the signature.

                                                      You can approximate the effect with MaybeUninit in Rust, but without variadics you can’t make it work generically, only in cases where you know the specific type ahead of time.
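
                                                      A sketch of that approximation for one concrete type, assuming a recent stable toolchain (Box::new_uninit and Box::assume_init stabilized in Rust 1.82). The `iota_boxed` helper is hypothetical: it fills a large heap array in place through a raw pointer, so the array never exists on the stack.

                                                      ```rust
                                                      use std::mem::MaybeUninit;

                                                      // Build a 256 KiB array directly in its heap allocation.
                                                      fn iota_boxed() -> Box<[u32; 1 << 16]> {
                                                          // Allocate uninitialized heap storage; nothing on the stack yet.
                                                          let mut b: Box<MaybeUninit<[u32; 1 << 16]>> = Box::new_uninit();
                                                          let p = b.as_mut_ptr() as *mut u32;
                                                          unsafe {
                                                              // Write each element in place through the raw pointer.
                                                              for i in 0..(1 << 16) {
                                                                  p.add(i).write(i as u32);
                                                              }
                                                              // SAFETY: every element was initialized just above.
                                                              b.assume_init()
                                                          }
                                                      }

                                                      fn main() {
                                                          let b = iota_boxed();
                                                          assert_eq!(b[0], 0);
                                                          assert_eq!(b[65535], 65535);
                                                      }
                                                      ```

                                                      This works for this one known type, but as noted above, without variadics you can't wrap the pattern up into something like a generic emplace_back.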

                                                      1. 8

                                                        Placement construction is a bit of a controversial issue in Rust, there is quite a bit of bikeshedding. A few macros exist in the crates.io registry, some of them from people involved in Rust-for-Linux because they need that there, especially for big structs.

                                                        At the moment the only downside of not having placement semantics is that you can’t construct a single non-referential structure larger than your remaining stack size, but this is something you hit rarely in practice (and it’s triply annoying when you do hit it).

                                                        edit: Though to clarify, placement semantics are being worked on. People do want an option for how to construct something from a heap reference, i.e., MaybeUninit without the unsafe parts.

                                                        1. 4

                                                          Placement construction is a bit of a controversial issue in Rust,

                                                          I think it’s more controversial that Box::new([0i32; 1024 * 1024]) causes a stack overflow in a systems programming language, where “preallocate a big chunk of memory” is kind of a routine thing to do (yes, vec![0i32; 1024 * 1024].into_boxed_slice() kinda gets around this, but shouldn’t the obvious thing obviously work?)

                                                          but this is something you hit rarely in practice (but triple as annoying when you do hit it).

                                                          I disagree; I’ve hit this many times writing Rust over the last few years. In async it’s extra annoying because there’s no easy workaround: futures must be constructed on the stack before they can be boxed (and futures can be enormous!).

                                                          1. 4

                                                            I don’t think it’s controversial that the example you mention causes a stack overflow, but the solution isn’t something you can just slot into the language like that.

                                                            Just consider that Box is not a privileged type. That means that doing a placement new into a Box isn’t something the compiler can “just figure out”. The pointer type that’s subject to placement new needs to explicitly support it (not all pointers can be used like this!). So there are some new traits that need to be worked out and stabilized to enable placement new. Or, to name another example, fallible placement: the placement operation can obviously fail for a number of reasons (invalid data or OOM), so there needs to be a way to handle failure to place or allocate. Or how to do placement on types that are !Unpin.

                                                            And hence, it takes time. The original issues relating to this date from 2015, but the majority of the work was done in 2018, with most current work focused on bikeshedding the result so that it can be included in the standard library and be safely used.

                                                          1. 2

                                                            FWIW, Objective-C also separates allocation and construction, with the [[NSLobster alloc] init] pattern. In the past there was arena allocation via zones, used via e.g. allocWithZone:, which necessitated the separate steps.

                                                      2. 1

                                                        I’m curious if anyone’s actually using WinRT - I haven’t been following Windows development in a while, but it doesn’t seem that the non-Win32 stuff from 8 and beyond took off.

                                                        1. 29

                                                          Because there’s more to OS development than failed experiments at 90s Bell Labs. It solves actual problems (e.g. plugins in the same address space, updating dependencies without relinking everything, sharing code between multiple address spaces). Now, there are a lot of issues in implementations that can be learned from (e.g. the lack of ELF symbol namespacing); I don’t know if Redox will simply slavishly clone dynamic linking as it exists in your typical Linux system, or iterate on the ideas.

                                                          1. 7

                                                            I don’t think plugins in the same address space are a good idea in a low-level language. In particular I think PAM and NSS would have been better as daemons not plugin systems. It’s better to do plugins with a higher-level language that supports sandboxing and has libraries that can be tamed.

                                                            Sharing code between address spaces is a mixed blessing. It adds a lot of overhead from indirection via hidden global mutable state. ASLR makes the overhead worse, because everything has to be relinked on every exec.

                                                            1. 1

                                                              I don’t think plugins in the same address space are a good idea in a low-level language. In particular I think PAM and NSS would have been better as daemons not plugin systems. It’s better to do plugins with a higher-level language that supports sandboxing and has libraries that can be tamed.

                                                              Right, conflating libraries and plugins in dynamic linking was a mistake - especially since unloading libraries is basically impossible. Maybe there’s research into that though?

                                                            2. 3

                                                              Because there’s more to OS development than failed experiments at 90s Bell Labs.

                                                              …but arguably not, like, a lot more.

                                                              1. 16

                                                                It’s unfortunate Plan 9 is a thought-terminating object. There’s a lot of room in the osdev space, and unfortunately Plan 9 sucks all the oxygen out of the room when it gets mentioned, especially when it’s the “use more violence” approach to Unix’s problems.

                                                                1. 2

                                                                  How about its own successor, Inferno?

                                                                  1. 1

                                                                    Inferno and Limbo probably mostly live on in Golang, of all things. Rob Pike is like the fucking babadook of tech.

                                                                    1. 1

                                                                      Er. What is a “babadook”? Google tells me it’s a horror film, which isn’t very helpful…

                                                            3. 27

                                                              Submitted this because Balatro’s one of my favourite games of all time; this is an overview of what it’s like to develop and publish an indie game (on the less technical side), and the timeframes involved.

                                                              1. 7

                                                                Thanks for sharing it! I wouldn’t have known LocalThunk had a blog otherwise. He seems like a really grounded and thoughtful person, and there’s a ton of behind the scenes stuff in this blog post.

                                                              2. 12

                                                                Optimizers like LLVM may reduce this into a @fence(.seq_cst) + load internally.

                                                                Ah, so we’re repeating the C mistake of having to write the low-level construct in the high-level language so that the optimizer can turn it into the high-level construct in the low-level language, because the supposed high-level language is insufficiently expressive. I cringe every time I have to write out e.g. a popcount the “formal” way only to have the optimizer turn it into a single popcnt instruction.
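
                                                                For instance, the “formal” spelling of popcount is Kernighan’s clear-the-lowest-bit loop, which optimizers recognize as an idiom and can lower to a single popcnt on targets that have it (in Rust you would normally just call the intrinsic-backed u64::count_ones instead):

                                                                ```rust
                                                                // Kernighan's popcount: each iteration clears the lowest set bit,
                                                                // so the loop runs once per set bit.
                                                                fn popcount(mut x: u64) -> u32 {
                                                                    let mut n = 0;
                                                                    while x != 0 {
                                                                        x &= x - 1; // clear the lowest set bit
                                                                        n += 1;
                                                                    }
                                                                    n
                                                                }

                                                                fn main() {
                                                                    assert_eq!(popcount(0), 0);
                                                                    assert_eq!(popcount(0b1011), 3);
                                                                    assert_eq!(popcount(u64::MAX), 64);
                                                                    // Agrees with the built-in intrinsic-backed method.
                                                                    assert_eq!(popcount(123), 123u64.count_ones());
                                                                }
                                                                ```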

                                                                1. 13

                                                                  In exchange you get thread sanitizer.

                                                                2. 1

                                                                  Repo and documentation are also available; I linked the introduction blog post since it describes the context.

                                                                  1. 18

                                                                    They’ve also open sourced Tiberian Dawn, Renegade and Generals with Zero Hour!

                                                                    Given the pile of dependencies I suspect the main value will be for modders, though. But some of these certainly have the fanbase for a full replacement project to happen.

                                                                    1. 6

                                                                      Really excited about Zero Hour, hopefully this will be helpful to Thyme.

                                                                      1. 3

                                                                        Oh, I didn’t know about that project, very interesting! License mismatch, but with a small enough number of contributors that they can hopefully fix that if they want to.

                                                                        Zero Hour is the one with the most memories for me too. Never was good at it, especially not multiplayer, but lots of fun was had.

                                                                      2. 5

                                                                        I suspect the main benefit is for the OpenRA project. The license is compatible…

                                                                        1. 3

                                                                          Zero Hour! My childhood :D I thought maybe I could deep-dive into this source code, but it’s 1.3M lines of code; I’d have thought a project like that would be around 200k :D

                                                                        2. 2

                                                                          The further from Halifax you are, the worse the donair…

                                                                          1. 2

                                                                            This thesis is about Dasher, which used a language model for rapid accessible text entry.

                                                                            1. 4

                                                                              A year ago I spent a week with a borrowed macbook air. I made some notes that I still haven’t made into a proper post, but there were some things that I take for granted in Linux DEs and the fact that I couldn’t get them in macOS was driving me mad.

                                                                              Some things of note:

                                                                              1. Very few built-in widgets have tooltips. To see what network you are connected to, you need to click on the “chain link” icon. To see the numeric battery percentage, you need to click the battery icon.
                                                                              2. Very limited keyboard layout switch options. On Linux, I like to set it to CapsLock because I don’t use that key otherwise. On macOS, there’s no built-in way to do that. Maybe there’s a third-party tool for that?
                                                                              3. Missing compose key. On Linux, I took it for granted that I could just enter something like “x → y”, or “Seán Ó Ríordáin”, or “Ça va”. On macOS, the only built-in options were switching layouts or using dead keys, but using a different layout when you have no muscle memory for it and no keyboard markings is a nightmare. Switching layouts just to type a random Irish/Spanish/French word in an English text is nonsense. Memorizing Unicode sequences is also nonsense. I also couldn’t find a third-party solution for that.
                                                                              4. Inconsistent maximization behavior (e.g., Safari cannot be properly maximized, only made full-screen) and no keyboard shortcut for maximization were quite annoying.
                                                                              5. There seemed to be no way to make the system reconnect to a WiFi network if that network disappears and reappears. I was travelling and used my iPhone as a mobile hotspot: the iOS hotspot likes to switch itself off, and macOS refused to auto-reconnect when I turned it back on, so I had to do all the clicks to reconnect by hand — that was absolutely maddening.

                                                                              I was really glad to be back to Linux after that experience, to be fair.

                                                                              1. 5

                                                                                Very limited keyboard layout switch options. On Linux, I like to set it to CapsLock because I don’t use that key otherwise. On macOS, there’s no built-in way to do that. Maybe there’s a third-party tool for that?

                                                                                This is built in; you can change the shortcut from the Keyboard settings.

                                                                                Missing compose key. On Linux, I took it for granted that I can just enter something like “x → y”, or “Seán Ó Ríordáin”, or “Ça va”. On macOS, the only built-in way was switching layouts or using dead keys, but using a different layout if you have no muscle memory for it and no keyboard markings is a nightmare. Switch layouts just to type a random Irish/Spanish/French word in an English text is nonsense. Memorizing Unicode sequences is also nonsense. I also couldn’t find a third-party solution for that.

                                                                                The Mac keyboard layout has AltGr on right Option by default. AltGr+e then e results in é.

                                                                                Inconsistent maximization behavior (e.g., Safari cannot be properly maximized, only made full-screen) and no keyboard shortcut for maximization were quite annoying.

                                                                                Modern macOS emphasizes full screen by default. But even before that, it wasn’t maximize, it was zoom. Zooming resizes the window to match its contents; if an app doesn’t specify that, it’ll approximate something like a maximize.

                                                                                1. 3

                                                                                  Nowadays there’s also a bona fide maximize in Window > Fill / ctrl+fn+f / hover on the green traffic light.

                                                                                  1. 3

                                                                                    Holding Opt while clicking the now-“Full Screen” button turns it back into Zoom, too. Nice to know for us users of the Old Versions, who still like Zoom more than Full Screen.

                                                                                    1. 2

                                                                                      This is built in; you can change the shortcut from the Keyboard settings.

                                                                                      CapsLock is not among the options it offers. At least it wasn’t a year ago; maybe that has changed.

                                                                                      AltGr+e then e results in é.

                                                                                      Yes. But there is no uniform way to enter é, ö, ç, æ, and the like.

                                                                                      1. 3

                                                                                        Yes. But there is no uniform way to enter é, ö, ç, æ, and the like.

                                                                                        https://support.apple.com/en-me/guide/mac-help/mh27474/mac

                                                                                    2. 3

                                                                                      Very few built-in widgets have tooltips. To see what network you are connected to, you need to click on the “chain link” icon. To see the numeric battery percentage, you need to click the battery icon.

                                                                                      So I generally don’t like tooltips, so I always look for an alternative… I’m pretty sure the Mac can show a percentage next to the battery all the time with one of the settings, but my old Mac laptop I used to use for testing these things started beeping at me a while ago when I try to turn it on. The internet says that means RAM failure, but being a Mac, fixing that is a pain, sigh.

                                                                                      idk about the network one, though.

                                                                                      (Why do I not like tooltips? Making them come up can be a real pain, with all the positioning and waiting; you don’t know for sure what can have them show up, so you might be hovering and waiting randomly; and the biggest annoyance: they often come up when I don’t want them and then cover up something I am trying to look at! So I try to design my own things to never use them.)

                                                                                      1. 1

                                                                                        I agree about the Compose key. After 25 years this remains a pain point.

                                                                                        There is a native way – a two-step dance – but it’s a PITA.

                                                                                        1. Press and hold the letter; a list of some accented forms appears depending on locale settings.
                                                                                        2. Failing that, add the Keycaps status icon in Settings, then click on it and search the on-screen keyboard while holding Cmd, Opt, Shift and combinations thereof.
                                                                                      2. 6

                                                                                        I also switched to a mac for the first time in October, it’s mostly working out, but a couple of things drive me nuts currently:

                                                                                        • Home does not mean Home like on Windows/Linux
                                                                                        • going left or right word by word with alt instead of ctrl
                                                                                        • activating an app with several windows brings all its windows to the foreground

                                                                                        Also while my employer lets me do 99% of the things on this machine, Karabiner seems to need a driver I’m not allowed to install, so unfortunately I can’t remap capslock, but that’s not a huge problem.

                                                                                        1. 7

                                                                                          … so unfortunately I can’t remap capslock, …

                                                                                            Some amount of remapping, e.g. Caps Lock -> Ctrl, can be done via the “customize modifier keys” option in the settings menu itself.

                                                                                          1. 6

                                                                                            This is the way. You have to do it for every new keyboard that gets plugged in, but I have all capslock keys assigned to escape. On Sequoia it’s:

                                                                                            System Settings -> Keyboard -> Keyboard Shortcuts -> Modifier Keys

                                                                                            You have to go through the drop-down menu on the top to change the setting for each keyboard individually.

                                                                                          2. 4

                                                                                            activating an app with several windows brings all its windows to the foreground

                                                                                            This behavior was kept for consistency with classic Mac OS, which had this behavior due to some limitations in global data structures that were not designed to support a multitasking OS.

                                                                                            1. 4

                                                                                              This app might help with that:

                                                                                              https://hypercritical.co/front-and-center/

                                                                                              You’ll find on the Mac that the main functionality, while opinionated, is often complemented by one or more for-pay, well-crafted independent tools for folks who need specific functionality.

                                                                                              1. 5

                                                                                                For long-time Mac people, a fun bit of trivia is that it’s written by John Siracusa.

                                                                                              2. 2

                                                                                                This is an amazing fact - where did you learn about this peculiarity?

                                                                                                1. 2

                                                                                                  Not sure where I first read about it, but it goes back to Switcher/MultiFinder on classic MacOS in the mid 80’s.

                                                                                                  1. 1

                                                                                                    This is an amazing fact

                                                                                                    Er, not really, not if you were already using Macs when OS X came in. It behaves the same way Classic did, and Classic did that for good reasons.

                                                                                                    TBH I never really noticed it and don’t consider it to be an inconvenience, but now it’s been spelled out to me, I can see how it might confuse those used to other desktop GUIs.

                                                                                                    FWIW, being used to OS X being just a Mac and working like a Mac, I found the article at the top here a non-starter for me. On the other hand, I absolutely detest KDE, and I would have liked something going the other way: how to make KDE usable if you are familiar with macOS or Windows.

                                                                                                    1. 5

                                                                                                      I should note that I was born after the year 2000 and bought my first Mac computer in 2021, so it’s perhaps mostly amazing because of my relatively small scope.

                                                                                                      1. 6

                                                                                                        Oh my word!

                                                                                                        Fair enough, then…

                                                                                                        (I can, just barely, remember the 1960s. Right now I am resisting the urge to crumble into dust and blow away on the breeze with all my might.)

                                                                                                  2. 1

                                                                                                    I’m not understanding how that is consistent. When you click a modern macOS window, that window alone comes to the front, unless you are using Front and Center as another mentioned. When you click a Dock icon, then all the app’s windows come forward, but classic Mac OS didn’t have a Dock. I am out of my depth, though, as an OS X-era switcher.

                                                                                                    1. 3

                                                                                                      In the very first versions of Mac OS, there was only one program running at a time (plus desk accessories), so only that application’s windows were visible.

                                                                                                      Then came MultiFinder and Switcher and whatever. In the earliest versions, you could switch programs: all of one program’s windows would vanish, and the next’s would appear.

Eventually you had all your windows on screen at once and two ways of switching: a menu of applications in the menu bar, and clicking on a window. If you clicked on a window in classic Mac OS, all of that application's windows would be raised. Until Mac OS X (maybe OS 9?), it was not officially possible to interleave windows of different applications, and macOS to this day still raises all windows when you select an application in the Dock, just as it did when you selected an application in the switcher menu in the days of yore.

                                                                                                      This behavior started because certain data structures in classic Mac OS were global, and the behavior stuck around for backwards compatibility reasons.

                                                                                                  3. 4

                                                                                                    activating an app with several windows brings all its windows to the foreground

                                                                                                    Perspective may or may not help muscle memory, and you probably know this already: This is because applications and their menu bars are the top level UI objects in macOS, whereas windows are one level down the tree. ⌘-Tab or a dock icon click will switch apps. The menu bar appears, and so do windows if they exist, but there may be none. (Some apps then create one.) If what you want is to jump to a window, Mission Control is your friend.

                                                                                                    1. 1

                                                                                                      I know, but thanks.

I suppose it’s my (weird?) setup, where I have a Firefox window on the (smaller) laptop screen to the right that’s always open but less used - but also one with “current tabs” on one of the two main screens.

The odd thing to me is that alt-tab gives me both Firefox windows (and thus hides e.g. my IDE) rather than the most recently used window, like on Windows (and most Linux WMs, I guess - but I use tiling there most of the time). I guess I ruined everything else by using a tiling WM for years, where every window opens exactly on the screen I want it to be and I never had to alt-tab in the first place :P

                                                                                                    2. 2

                                                                                                      Karabiner seems to need a driver I’m not allowed to install, so unfortunately I can’t remap capslock, but that’s not a huge problem.

                                                                                                      I went over this and I dislike how cumbersome Karabiner feels. You can remap keys pretty easily with a custom Launchd user agent. Some resources:

                                                                                                      Obviously you should try the hidutil command line by itself before creating the service.
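As a hedged sketch of what that might look like (the usage IDs come from the USB HID keyboard usage table that hidutil uses; this particular mapping, Caps Lock to Escape, is just my example, not necessarily the one you want):

```shell
# Remap Caps Lock (usage 0x39) to Escape (usage 0x29). Usage codes are
# 0x700000000 plus the USB HID keyboard usage ID.
hidutil property --set '{"UserKeyMapping":[
  {"HIDKeyboardModifierMappingSrc": 0x700000039,
   "HIDKeyboardModifierMappingDst": 0x700000029}
]}'

# Confirm the mapping is active:
hidutil property --get "UserKeyMapping"
```

The mapping is cleared at logout/reboot, which is exactly why you then wrap the command in a LaunchAgent.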

                                                                                                      As an example, here’s what I have in ~/Library/LaunchAgents/org.custom.kbremap.plist (working on Sequoia 15.3.1): https://x0.at/BNtu.txt

                                                                                                      I hope this helps.

                                                                                                      1. 1

                                                                                                        I can try again, thanks - but I’m pretty sure I spent some hours researching and couldn’t get it to work as a non-modifier in a way I need it.

                                                                                                        1. 2

                                                                                                          I have a custom app I wrote to remap Command to Escape when pressed without being held, for the exact same reason that I couldn’t install Karabiner on a work laptop (those MacBook Pros with no physical escape key).

                                                                                                          You could probably tweak it for your own purposes, hopefully without too much difficulty.

                                                                                                    3. 5

                                                                                                      I wonder how much code in the kernel also has been broken without anyone noticing.

                                                                                                      The whole thing also suggests there’s not much testing in the kernel in general, be it automated or manual.

                                                                                                      1. 14

                                                                                                        Kernel development is pretty unusual in the sense that many things you’d expect to be part of a project are carried out downstream of it.

                                                                                                        For one, there’s no CI and not much in the way of testing there. Instead, developers and users do all kinds of testing on their own terms, such as the linux test project or syzbot (which was the thing that found this filesystem bug).

I was even more surprised when I found out that documentation is also largely left to downstream, and so there are a bunch of syscalls that the Linux manpages project simply hasn’t gotten around to adding manpages for.

                                                                                                        1. 15

                                                                                                          I was pretty surprised to find that the only documentation for the ext filesystems was just one guy trying to read the (really dodgy) code and infer what happened under different cases. A lot of “I think it does X when Y happens, but I’m not sure” in the documentation. And reading through the code, I understand it. That filesystem is its own spec because no one actually understands it. It’s wild how much this kind of stuff exists near the bottom of our tech stacks.

                                                                                                          1. 8

                                                                                                            Yup. I tried to learn about the concurrency semantics of ext4 - stuff like “if I use it from multiple processes, do they see sequentially consistent operations?”, so I asked around. Nobody was able to give me a good answer.

                                                                                                            Also, one kernel dev in response smugly told me to go read memory-barriers.txt. Which I’d already read and which is useless for this purpose! Because it documents kernel-internal programming practices, not the semantics presented to userland.
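For what it’s worth, about the only way to answer such questions today is empirically. A hypothetical probe (my own sketch, not anything blessed by kernel documentation) for one narrow property, whether concurrent O_APPEND writes from separate processes land without being lost or torn, might look like:

```python
import os
import multiprocessing

RECORD = 16  # payload bytes per record, plus a trailing newline

def writer(path, tag, n=200):
    # O_APPEND: the kernel moves the offset to EOF atomically per write().
    fd = os.open(path, os.O_WRONLY | os.O_APPEND)
    for _ in range(n):
        os.write(fd, (tag * RECORD + "\n").encode())
    os.close(fd)

def probe(path="/tmp/append_probe.txt"):
    open(path, "w").close()
    procs = [multiprocessing.Process(target=writer, args=(path, t)) for t in "AB"]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    lines = open(path).read().splitlines()
    # No lost or torn records: all 400 writes survived, and every line
    # is a pure run of a single tag of the expected length.
    return len(lines) == 400 and all(
        len(line) == RECORD and len(set(line)) == 1 and set(line) <= {"A", "B"}
        for line in lines
    )
```

Of course this only tests one property on one filesystem; it says nothing about, say, cross-process read-after-write visibility, which is exactly the kind of thing nobody could point me to documentation for.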

                                                                                                          2. 3

                                                                                                            This is also true of stuff like gcc. Just a very different style than more modern projects.

                                                                                                            1. 1

                                                                                                              GCC has quite a bit of upstream documentation.

                                                                                                              1. 2

I’m talking about the testing; you’re right that every time I’ve looked at its documentation, it feels very thorough.

                                                                                                        2. 6

                                                                                                          @pushcx the actual link to the lkml was deleted:

                                                                                                          2025-02-21 07:28 pushcx Story: Linus replies to R4L controversy Action: deleted story Reason: Don’t link into projects’ issue trackers and discussion spaces to brigade Lobsters readers into their arguments.

                                                                                                          Is this link allowed?

                                                                                                          1. 5

Yes, extremely silly that the direct link was censored, but the exact same content via Phoronix is allowed. There was some good discussion in the deleted thread.

                                                                                                              1. 7

                                                                                                                Thanks for the link here. Yeah, it’s not about the content of Linus’s email here, it’s about linking our 100k+ readers into projects’ community spaces. Linux is sort of the worst possible first example here because it’s a huge stable project and the friction of signing up to a single-purpose high-volume mailing list means it’s especially unlikely that we’re going to meaningfully disrupt Linux. The other end of the spectrum is linking into a small project’s GitHub issue, where most of our readers are going to be logged in and looking at that inviting <textarea> on a contentious topic with little context or history.

                                                                                                                If the rule is “don’t submit links into projects’ spaces” it’s a clear rule, and I admit that it’s overkill for this specific situation. “Don’t submit links into projects’ spaces unless they’re big and probably fine like Linux or Mozilla (but a big project like Firefox not a small one like NSS)” is an unending series of judgment calls that are often going to be about really contentious issues that feel like they justify an exception to our rules or to common courtesy. It’s an imperfect rule, but there’s value in predictability and legibility.

If this compromise isn’t clear from what I’ve written in that code and the guidelines, I’m very open to suggestions for improving it. Doubly so if it’s the wrong compromise and there’s a path to us having better conversations and being a better neighbor on the web. As a reminder, the next office hours stream is in ~2 hours, and this is the kind of thing I started office hours to talk about, in the hopes that folks find it more convenient or less formal than a meta thread or emailing me.

                                                                                                                1. 3

I feel the tradeoff is that this instead basically links to blogspam that barely summarizes it, then links to it anyway. They get the ad revenue. Maybe we should wait for a better thing to post about it, i.e. an LKML article or something in that ballpark?

                                                                                                                  1. 5

                                                                                                                    The small benefit is that it’s one more small step that makes bad behavior less likely but, you’re right, it does incentivize lazy sites like this one.

                                                                                                                    You probably meant to write LWN? I’ve been mentioning them a lot in this running discussion about what our rules should be, I agree they’re a consistently excellent source. I don’t want to take a hard dep on them so I try to write things like “neutral third-party” but yeah, they’re first in my thoughts as well.

                                                                                                                    One aspect of getting good writeups of these things is false urgency, or maybe that urgency depends on proximity. To people who are involved or affected by the topic, Linus posting a single email is a significant development. They want to know immediately because it could significantly affect their work. So they want to see the primary source, or a repost of it. But anyone outside that narrow circle needs a writeup that explains the topic and puts it into the context of the last few months of news. That takes a lot more time to produce and sometimes it doesn’t happen. So even for obviously topical stories we have two very different kinds of readers. A significant part of the brigading problem is when the second, bigger group hits an update appropriate for the narrow group. They can’t contextualize it, but if it hits a hot button like the morality of licensing or Linus insulting people, it can generate a lot of outrage that makes them feel like they need to do something, and that unacceptable behavior like trolling is justified by the circumstances.

                                                                                                                    For a long time Lobsters has avoided being a source of brigading by trying to have norms that are kinder than average. That lowers the temperature of every discussion, makes us less appealing to the serious trolls, and makes it less likely that any particular discussion is going to gather enough outrage to hit the critical mass where our readers brigade into a project. But much bigger than our active users, our readership has been growing steadily, so even as our norms reduce the percentage chance of bad behavior, I’m worried that it’s not reducing enough to offset growth. If the percent risk drops by half but the readership grows 10x, we have a higher absolute risk.

                                                                                                                    To bring it back to a specific example, last summer Nix was having a running governance crisis around the project’s direction, corporate/government involvement, and codes of conduct. There was a series of stories about breaking news and new dimensions to the broader story about who should be running Nix and how, and it was tons of hot-button issues. A lot of their work happens on GitHub and a bunch of the issues tracking different proposals, petitions, and governance actions were submitted here, so all of the ingredients for brigading were present and temperatures were rising. Some of the links were submitted by the people directly involved. To put it charitably they were advocating and organizing for better governance; to put it uncharitably they were trying to brigade our readers into the project to overwhelm it. I did my best to separate the two and I think we discussed very important, hard topics while being a good neighbor on the web, but it’s why I added to the brigading guidelines about preferring not to link into project spaces.

                                                                                                                    To sum up, the rule against linking into community spaces trades off between a lot of hard topics. I’m trying to reduce judgment calls and our risk of harming projects while maintaining high-quality discussions on important topics. Sacrificing urgency draws a predictable, clear line about what links are acceptable, though I know that’s especially frustrating to people who are most involved with breaking news. So that’s why my last message called the rule a compromise, and I again encourage folks to help the site figure out better ones.

                                                                                                                    1. 1

Probably a silly question, but would something like an https://archive.is snapshot of the target be enough of a barrier to brigading? We could even add that functionality internally.

                                                                                                            1. 3

                                                                                                              Yeah, I wasn’t sure about this either. I understand the rationale for the no-brigading, but I don’t see much difference in posting this URL vs LKML directly.

                                                                                                              (Also hope there’s a way we can still discuss Linus’s statement regardless)

                                                                                                            2. 2

                                                                                                              How does ArcaOS compare to ReactOS? It looks like they have commercial funding, that’s promising.

                                                                                                              1. 11

                                                                                                                ReactOS is aiming to be a rewrite. ArcaOS is a bundle of drivers and tools around a commercial operating system.

                                                                                                                The Windows equivalent would be if somebody had a license to distribute Windows 2000, bundled it with drivers for modern hardware, backported Firefox to it, and created a UEFI loader for it.

                                                                                                                1. 8

                                                                                                                  I feel like there would be a market for that OS.

                                                                                                                  1. 1

There were a lot of folks I knew who ran Win2k as their desktop OS for probably longer than they should have, because it really Just Worked.

                                                                                                                  2. 3

                                                                                                                    The Windows equivalent would be if somebody had a license to distribute Windows 2000, bundled it with drivers for modern hardware, backported Firefox to it, and created a UEFI loader for it.

                                                                                                                    Lovely comparison. Well done.

                                                                                                                    I am with @fs111 here. I think that would interest me, too.

                                                                                                                    For me personally, W2K was the peak of the NT timeline and it’s been accelerating downhill since.

                                                                                                                    1. 2

                                                                                                                      Same, though I might draw the “peak” line at server 2k3 R2 x64.

                                                                                                                      1. 1

                                                                                                                        It’s a slippery slope, but that was more or less the last of the old GDI-based line, before Vista and its built-in compositor.

                                                                                                                        I tried running XP 64 as my main OS just 2Y ago. It was a surprisingly good experience. https://www.theregister.com/2023/07/24/dangerous_pleasures_win_xp_in_23/

Just as MS Office 97 seemed bloated and sluggish when new, it’s now my go-to version of MS Word, because it’s tiny and fast. XP seemed bloated when new, but compared even to Win7, it’s tiny and fast. On a Core 2 Duo with 8GB of RAM it flies along.

                                                                                                                  3. 8

As far as I understand from the eComStation days, nobody outside IBM has the full OS/2 kernel source code, so there is never going to be a 64-bit OS/2. This is in contrast to ReactOS, which is open source.

OS/2 is a dead end. This product is primarily interesting to companies that still have legacy OS/2-based systems running.

                                                                                                                    1. 2

Can that be true? I would imagine the features ArcaOS has added would require the full sources, particularly ACPI and EFI booting. If not, while bizarre, it would be a phenomenal testament to whatever modular kernel engineering decisions allowed this level of evolution.

                                                                                                                      1. 2

                                                                                                                        it would be a phenomenal testament to whatever modular kernel engineering decisions allowed this level of evolution.

                                                                                                                        I think it is true, and yes, it is a testament.

                                                                                                                        I interviewed Lewis Rosenthal: https://www.theregister.com/2023/01/19/retro_tech_week_arca_os/

                                                                                                                        And I reviewed ArcaOS: https://www.theregister.com/2023/09/04/arcaos_51/

                                                                                                                        It’s a remarkable piece of work. It’s still a pig to install, as OS/2 always was. It’s still fussy about hardware and disk partitioning, as ever. But thanks to lots of generic drivers, it’s way less so.

                                                                                                                        I could only get it to dual-boot with FreeDOS, nothing newer. If a disk was set up by Windows or Linux, then ArcaOS couldn’t understand it.

                                                                                                                        But it’s blazingly fast, it can talk to USB and SATA and UEFI, and to Wifi. It has a useful browser, which is more than eComStation does.

It felt even faster than XP. It can run rings around any 64-bit version of Windows. It runs DOS, Win16, native OS/2 16-bit and 32-bit apps, and some Linux ports. There’s a WINE-like layer called Odin that can let some Win32 apps run. It can drive 64 CPU cores and, given more than 4GB of RAM, allocate the memory above 4GB as a RAMdisk.

                                                                                                                        It is astonishingly capable for an OS whose kernel is from 1998 or so (with later fixpacks and updates).

                                                                                                                        1. 1

It probably doesn’t understand GPT partitioning, which is the default for newer OS installers. You could make at least Linux comply; I’m not sure whether Windows will still oblige with MBR.

                                                                                                                          1. 2

                                                                                                                            (I don’t know whether to use a laugh or cry response.) Oh no no no. Nothing remotely so simple and easy.

                                                                                                                            The big new feature in ArcaOS 5.1 and the main thing that drove the entire project is UEFI support. That means it has to support GPT as UEFI firmware and GPT partitions go hand in hand.

                                                                                                                            ArcaOS can boot from both BIOS and UEFI, and it can boot from MBR on both and from GPT when using UEFI. (I am not sure if it can boot from GPT on BIOS.)

                                                                                                                            No no. When I say it can’t understand partitioning schemes from other OSes I am being literal.

On BIOS with MBR, its native format, in my testing it can handle one primary FAT partition and then a second partition with ArcaOS in it.

It will not attempt to install if there is a primary with anything but DOS. It can’t handle it if there’s a primary with NT. It can’t handle extended partitions created by other OSes. It can’t handle Linux setups, primary or logical or both. It can’t handle BSD setups; I tried FreeBSD, OpenBSD and NetBSD. Nor WinXP 32 or 64, Win7, or Win10.

                                                                                                                            For instance ArcaOS needs gaps between partitions. You must have at least 1 empty cylinder between partitions. Primary, gap, extended, gap, 1st logical, gap, 2nd logical, gap, etc. But even carefully creating this in (for example) Gparted is not enough.

                                                                                                                            You need to create the partitions in ArcaOS or in an OS/2-compatible partitioning tool, such as DFSee.

                                                                                                                            https://www.dfsee.com/

                                                                                                                            Paid, not included with ArcaOS.

                                                                                                                            ArcaOS has its own internal LVM system and that can’t coexist with modern LBA-aware partitioning. The OS/2 kernel still seems to think in terms of cylinders, heads and tracks, and the modern interpretation of other OSes confuses it – fatally.

                                                                                                                            I could not get it to dual boot with any other 32-bit or 64-bit OS, at all, full stop.

                                                                                                                            Only with DOS. A single copy in a single partition.

                                                                                                                            The docs tell you to create all partitions only with ArcaOS itself before installing anything else. The snag is that other OSes then see that partitioning setup as corrupt and won’t use it, and if you let Linux or Windows repair it, then ArcaOS can’t use it.

                                                                                                                            Basically, you need to treat ArcaOS like ChromeOS: it needs to be the only OS on the hardware and it does not want to share with anything else. Do that, and there’s a much better chance things will work.

                                                                                                                            P.S. Yes, Win10 still supports MBR. It has a unique requirement though. As far as I can tell, you can only use MBR on BIOS machines, and only use GPT on UEFI machines. Windows won’t boot from GPT on BIOS or from MBR on UEFI.

                                                                                                                            Linux and other OSes don’t care; they can handle both, in any combination.
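As an aside, the MBR-versus-GPT distinction can be sniffed from the first sector of a disk alone. A hypothetical checker (offsets from the standard MBR layout; a GPT disk carries a protective MBR whose first partition entry has type 0xEE):

```python
# Classify sector 0 of a disk as GPT, plain MBR, or unpartitioned.
# MBR layout: four 16-byte partition entries starting at offset 446,
# boot signature 0x55AA at offset 510.
def table_type(sector0: bytes) -> str:
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return "unpartitioned"
    first_entry_type = sector0[446 + 4]  # type byte of partition entry 1
    return "gpt" if first_entry_type == 0xEE else "mbr"

# Synthetic example: a protective MBR as a GPT disk would carry.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
sector[446 + 4] = 0xEE
```

`table_type(bytes(sector))` reports "gpt" for this synthetic sector; pointed at the first 512 bytes of a real `/dev/sdX` (as root), it classifies an actual disk.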

                                                                                                                        2. 2

                                                                                                                          From what I’ve heard, they don’t have the source to some components (it may be as simple as IBM losing the source), but they are allowed to make binary patches to the parts they lack source for. I’m not sure which components they’re binary-patching versus building from source, though.

                                                                                                                          1. 2

                                                                                                                            they don’t have the source to some components

                                                                                                                            I think this is correct.

                                                                                                                            It’s a real shame. There will never be a 64-bit OS/2, but an x86-32 OS with in-kernel PAE, so that it could address lots of RAM, keep a big disk cache, and run lots and lots of 2GB apps, would be all I needed, I think.

                                                                                                                            1. 1

                                                                                                                              I wonder if it’s because Microsoft still has licensing rights to chunks of OS/2 and are still holding a grudge.

                                                                                                                              1. 1

                                                                                                                                I don’t think so – not a grudge, anyway.

                                                                                                                                I do think the tangled licensing is partly why there is no FOSS release of OS/2, though.

                                                                                                                                IBM does not really care any more. Microsoft doesn’t either. I don’t think anyone in management at either company really knows what rights they hold any more.

                                                                                                                                I suspect the main obstacles are just two non-technical issues:

                                                                                                                                1. There’s third-party code in there that neither company has the right to release. Nobody wants to spend the money to go through it and clean it out.

                                                                                                                                2. Simple shame. I suspect there are a lot of ugly hacks in there.

                                                                                                                                In an ideal world, IBM and MS would strike some kind of mutual accord granting each other full rights to the other’s code that each company holds, including the right to open source it. Maybe talk to any surviving companies whose code is in there: RealPlayer is long gone, MP3 is open now, there can only be ancient audio/video codecs… Maybe some hardware drivers? Try to get blanket permission to release.

                                                                                                                                I’d believe that “MS <3 FOSS” if it released the source of all versions of DOS, Windows 1/2/3/9x, and all forms of OS/2, and made all its DOS apps freeware. There is precedent: it did exactly that with MS Word 5.5 for DOS, released as a free Y2K fix for all older versions.
