Threads for Summer

  1. 3

    Much like which, fgrep and egrep aren’t portable: they are part of GNU Grep (and others), but they are not standardized. If you want to be reasonably sure your code will run consistently on all POSIX platforms, it’s probably best to avoid those shortcuts.
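
    For example, the portable spellings (file name made up):

    ```sh
    # Portable POSIX flags -- work on GNU, BSD, macOS, busybox, etc.
    grep -E 'foo|bar' input.txt           # extended regexes (what egrep did)
    grep -F 'a.literal*string' input.txt  # fixed strings, no regex (what fgrep did)
    ```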

    1. 15

      POSIX (which is now the same specification as the Single UNIX Specification) is meant to be a lowest common denominator. The way that POSIX is extended is the following sequence:

      1. A system adds an extension.
      2. Other systems add similar extensions.
      3. If the extensions are incompatible, the vendors agree on a common useful set of interfaces.
      4. This de-facto standard is proposed for incorporation into the de-jure standard.
      5. The next version of POSIX incorporates it.

      Unfortunately there is a subset of the community that believes ‘not in POSIX’ means ‘should not exist’ rather than ‘should not be relied on to work everywhere’, an attitude that would prevent POSIX from ever evolving.

      Checking the FreeBSD man page, egrep, fgrep, and rgrep are all supported. They exist on GNU and BSD platforms (including Darwin), so they’re already at step 3. The correct thing to do is advance them to step 4 and add them to the next version of POSIX, not remove them from some platforms that support them.

      1. 8

        There are plenty of flags in GNU Grep (and many, many other tools) that aren’t in POSIX either. When can we expect removal of all those?

        1. 7

          Maybe the error should only be raised when POSIXLY_CORRECT=1 is set, then.

          1. 4

            That would certainly be saner
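
            A sketch of what that could look like: a hypothetical egrep shim that only nags in strict-POSIX mode (illustrative only, not GNU grep’s actual behaviour):

            ```sh
            #!/bin/sh
            # Hypothetical shim: warn only when the user opted into strict POSIX mode.
            if [ -n "${POSIXLY_CORRECT}" ]; then
                echo "egrep: not a POSIX utility, consider grep -E" >&2
            fi
            exec grep -E "$@"
            ```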

            1. 2

              incredible that they didn’t just do that

              1. 1

                actually there was some discussion of this on the bug-grep mailing list:

                https://lists.gnu.org/archive/html/bug-grep/2022-09/msg00000.html

                @sjamaan

            2. 6

              No, they have been in Unix since the 1970s, they are portable, or at least they were before this boneheaded GNU change. (Boneheaded because it is causing pain and they are proceeding ahead stubbornly despite reasonable complaints.)

              The history of these programs is that they were originally completely separate, implemented using different algorithms. The -E and -F flags came much later after computers became big enough that all the different modes could fit into a single program. If you look at the early days of metaconfig or autoconf, I bet you will find that egrep and fgrep were more portable than grep -E and grep -F.

              1. 1

                A day or so after writing the above I checked some old man pages, which indicated that grep -E and grep -F came from System V, and BSDish systems did not get a multi-algorithm grep until the mid-1990s.

              2. 1

                So the decision by the GNU project to deprecate egrep and fgrep in favor of grep -E and grep -F was to bring GNU grep into alignment with POSIX?

                Looking at OpenBSD grep I can see it has -E and -F options but I don’t know if that grep is POSIX compliant.

                1. 1

                  Looking at OpenBSD grep I can see it has -E and -F options but I don’t know if that grep is POSIX compliant.

                  You know that’s usually written at the bottom of the man page and it’s trivial to check?

                  Also why would it not be?

                  On the other hand, BSDs do provide egrep and fgrep so…

                  1. 5

                    You know that’s usually written at the bottom of the man page and it’s trivial to check?

                    I didn’t know that. I appreciate the information. The snark, not so much.

                    Anyway as BSDs support [e|f]grep, that would mean that prior to GNU removing them they would work on 99.99%[1] of production unix-likes. So the decision to remove the aliases from GNU makes even less sense.

                    [1] wild-ass guess

                    1. 3

                      Also why would it not be?

                      I can personally attest that OpenBSD ships commands that only partially conform to POSIX. Quoting OpenBSD locale(1):

                      With respect to locale support, most libraries and programs in the OpenBSD base system, including the locale utility, implement a subset of the IEEE Std 1003.1-2008 (“POSIX.1”) specification.

                  2. 1

                    I agree. I never really understood why they existed for so long. And while such discussions go on for decades, tools like rg, ag, ack, etc. become more and more popular without any of these neckbeard dramas.

                    I refrain from using -P in shell scripts because it is GNU-specific and OSX users will come to me claiming my script “has an error”. Is -F equally GNU-specific?

                    1. 3

                      -E and -F, along with the syntax of ERE (-E), are all specified in POSIX.
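
                      So a script can usually be de-GNUified by rewriting the PCRE into ERE, e.g. (pattern just an example):

                      ```sh
                      # GNU-only: -P selects Perl-compatible regexes
                      grep -P '\d+\.\d+' versions.txt

                      # Portable: the same match in POSIX ERE
                      grep -E '[0-9]+\.[0-9]+' versions.txt
                      ```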

                  1. 50

                    This essay is an admirable display of restraint. I would have been far crueler.

                    In my experience, protocols that claim to be simple(r) as a selling point are either actually really complex and using “simple” as a form of sarcasm (SOAP), or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)

                    1. 15

                      The “S” in “SNMP” is a vile lie.

                      1. 32

                        After years of thinking and reading RFCs and various other documents, today I finally understood: “Simple” refers to “Network”, not to “Management Protocol”! So it is a Management Protocol for Simple Networks, not a Simple Protocol for Management of Networks.

                        1. 6

                          It’s simple compared to CIMOM, in the same way LDAP is lightweight compared to DAP.

                          1. 3

                            Let’s not forget ASN.1, DCE, and CORBA. Okay, let’s forget those. In comparison SOAP did seem easier because most of the time you could half-ass it by templating a blob of XML body, fire it off, and hopefully get a response.

                        2. 8

                          achieve simplicity by ignoring or handwaving away inconvenient details

                          Exactly, and the next-order effect is often pushing the complexity (which never went away) towards other parts of the whole-system stack. It’s not “simple”, it’s “the complexity is someone else’s problem”.

                          1. 2

                            or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)

                            what’s inconvenient about RSS?

                            1. 24

                              Some of my personal grievances with RSS 2.0 are:

                              • not mandating a field for uniquely identifying posts
                              • using the mess that RFC822 dates are instead of ISO8601

                              Obviously, neither are too important – RSS works just fine in practice. Still, Atom is way better.

                              1. 15

                                RSS never specified how HTML content should be escaped, for example.

                                The Atom protocol resolved that, however.

                                1. 6

                                  Pretty sure that’s because RSS2 is not supposed to contain HTML.

                                  But RSS2 is just really garbage even if people bothered following the spec. Atom should have just called itself RSS3 to keep the brand awareness working.

                                2. 14

                                  O god, don’t get me started. RSS 2 lacked proper versioning, so Dave Fscking Winer would make edits to the spec and change things and it would still be “2.0”. The spec was handwavey and missing a lot of details, so inconsistencies abounded. Dates were underspecified; to write a real-world-useable RSS parser (circa 2005) you basically had to keep a dozen different date format strings and try them all until one worked. IIRC there was also ambiguity about the content of articles, like whether it was to be interpreted as plain text or escaped HTML or literal XHTML. Let alone what text encoding to use.

                                  I could be misremembering details; it’s been nearly 20 years. Meanwhile all discussions about the format, and the development of the actually-sane replacement Atom, were perpetual mud-splattered cat fights due to Winer being such a colossal asshat and several of his opponents being little better. (I’d had my fill of Winer back in the early 90s so I steered clear.)

                                  1. 5

                                    Which version of RSS? :)

                                    1. 1

                                      I see what you did there.

                                1. 15

                                  I mean, likewise, can you quickly tell on mobile which is good and which is bad?

                                  (The joke is both are bad)

                                  Even with the correct URL, it’s a tag, so it would be possible to rebind it to a new malicious commit if the repo gets a malicious actor.

                                  Even with all of this, there is still SmartScreen telling you “Hey, don’t run random unsigned executables and be careful with stuff you downloaded”.


                                  When is it okay? .rs can be malicious, .com can be malicious, etc. “But .com isn’t common”, so it’s a matter of popularity? What would have to change to make one okay with .zip existing as a TLD?

                                  1. 7

                                    These are two completely different classes of attack though. One is “the people you’re downloading and running arbitrary code from are evil”. The other is “the people you think you’re downloading and running arbitrary code from are good, but the URL is deceiving, so you’re actually downloading code from an unrelated attacker”.

                                    1. 7

                                      Both links are deceptive; the first one links to a kubarnetes org, and a person could go and register that org right now.

                                      1. 1

                                        Correction: it seems someone has now taken the org.

                                    2. 2

                                      Tags can be deleted and re-created with the same name. Just a nitpick :)
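
                                      Concretely (names hypothetical):

                                      ```sh
                                      # Point an existing tag at a different commit and overwrite it upstream:
                                      git tag -f v1.0 <some-other-commit>
                                      git push --force origin refs/tags/v1.0

                                      # Safer for consumers: pin the full commit hash instead of the tag name.
                                      git checkout <full-commit-sha>
                                      ```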

                                    1. 2

                                      No serious competitor is likely to step in and build serious apps using a protocol that is directly controlled by Bluesky.

                                      Competitors make use of the Microsoft Exchange protocols, right?

                                      1. 7

                                        I would like to have this back.

                                        1. 7

                                          Very soon. WebAuthn & Passkeys

                                          1. 5

                                              I think you completely misunderstood my comment.

                                              I want this back because WebAuthn is far too complex and adds the problem that both the website and the backend have to implement the authentication. With KEYGEN, everything about the keys is handled by the browser; the authentication check can then be done by the httpd.

                                              Yes, I know there are some issues with the UI and other issues on the implementation side. But none of this is conceptual, and it can be improved.

                                              To your other comment about storing the credential on a separate device: what stops a browser from doing the same thing with keys generated by KEYGEN?

                                            1. 4

                                                Nothing; in fact, browsers already support that (smartcards).

                                            2. 3

                                                All I’ve seen from the WebAuthn world has made it seem like an excellent way to lock yourself into either Google’s or Apple’s ecosystem as the two companies demand to control all your online accounts. Where does the person who uses Android on their phone, macOS on their laptop and Linux on their desktop fit into this brave new world?

                                              1. 3

                                                  You can store credentials on a device that speaks USB or Bluetooth or NFC. No need to store the key material on your computing device.

                                                1. 4

                                                  If only! At $WORK, we use WebAuthn for SSO, and we use YubiKeys as the second factor. We explicitly set “usb” as the only allowed transport because we require people to use their YubiKeys to generate their WebAuthn token. However, neither Chrome nor Safari respect this and will instead try to get the user to register a passkey instead, which naturally won’t work. And the token registration UIs in both are actively hostile to using any methods other than passkeys. Firefox is at least better in this regard, but possibly only because its WebAuthn support is less extensive.

                                                  1. 3

                                                      Right, but that’s not what any of the big players are making, even if it’s technically possible.

                                                    And I’m not going to be bringing around a dedicated Bluetooth or USB key device. And I doubt my iPhone would support it even if I did.

                                                    The whole “let’s get rid of passwords” WebAuthN thing seems like a huge lock-in opportunity for the huge companies and nothing more IMO.

                                                    1. 2

                                                      I get your scepticism, but imho it doesn’t sound too justified to me.

                                                      WebAuthn: You can already buy a Yubikey that does NFC and works for iPhone.

                                                      PassKeys I don’t have experience with, but I know there are open implementations that will help avoid lock-in.

                                                      1. 2

                                                        Alright but I’m not going to be using Yubikeys. So how do I sync my passkeys between my phone and desktop, so that I can log in to any account without the involvement of the other device?

                                                  2. 3

                                                      Nothing in WebAuthn adds a dependency on anything other than your computer. On Windows, the private keys are stored in the TPM, on macOS they’re stored in the Secure Element, and on Android devices they’re stored in whatever the platform provides (a TrustZone enclave in the worst case, a separate hardware root of trust in the best case). All of these are defaults and, as far as I’m aware, all support using an external U2F device as well. At no point does Apple or Google have access to any of my WebAuthn private keys. On other platforms, it’s up to the platform how it stores the keys, but I believe the TPM is pretty well supported on Linux.

                                                    1. 1

                                                      Aaaaand by what mechanism are the keys synced between my iPhone and my Linux desktop?

                                                        I need to be able to create an account on my Linux desktop (which doesn’t have a TPM, by the way) and then log in to that account with my iPhone (without the involvement of the desktop at the time of login). I also need to be able to create an account on my iPhone and then log in to that account with my desktop (without the involvement of my phone at the time of login). This is no problem using passwords and password managers. My understanding is that it’s impossible with WebAuthn.

                                                      1. 2

                                                        Aaaaand by what mechanism are the keys synced between my iPhone and my Linux desktop?

                                                          They aren’t. By design, there is no mechanism to remove the keys from secure storage. If there were, an OS compromise could exfiltrate all of your keys instantly. You create a key on one device and use that to authorise the next device. Alternatively, you use a U2F device and move it between the two machines (I believe iOS supports U2F devices over NFC; your Linux machine definitely supports them over USB).

                                                        my Linux desktop (which doesn’t have a TPM, by the way)

                                                          Are you sure? Most vaguely recent motherboards have one (at least an fTPM in the CPU). Without one, there’s no good way of protecting LUKS keys, so that might be something to look for in your next upgrade.

                                                        1. 2

                                                          You seem very interested in pushing this U2F device thing. I don’t know how many times I need to say I’m uninterested.

                                                          And if I can’t create an account on one device and then log in on another device without the first device involved, this is not something for me. What do I do if I make an account on my phone, happen to not log in to it on anything other than my phone, but then my phone breaks and I need to log in on my desktop? Is that just … not supported anymore?

                                                          And why should I think Apple’s implementation will even allow me to authorize my Linux machine? Is that something which falls naturally out of the standard or has Apple publicly committed to it or are you just hoping they’ll be nice?

                                                          … TPM …

                                                          Are you sure? Most vaguely recent motherboards have one

                                                          I just know my (rarely used) Windows install doesn’t let me upgrade to 11 due to missing TPM. Also, older hardware is a thing.

                                                          I also don’t have a need for LUKS.

                                                          1. 2

                                                            You seem very interested in pushing this U2F device thing. I don’t know how many times I need to say I’m uninterested.

                                                            You need to store keys somewhere. You have three choices:

                                                              • In software, where anything that can compromise that software layer can exfiltrate them. Check the number of CVEs in the Linux kernel that would allow an attacker to do this before you think it’s a good idea (not particularly singling out Linux here, any kernel that is millions of lines of C is going to be compromised).
                                                              • In some hardware tied to the device (TPM, Secure Element, whatever). This is convenient for the device and gives you some security in that an OS compromise lets an attacker launch an online attack but not exfiltrate keys (these things often do some rate limiting too). The down side is that it’s tied to the device.
                                                              • In some external hardware that you can move between devices. The standard for these to interface with computers is called U2F.

                                                            And if I can’t create an account on one device and then log in on another device without the first device involved, this is not something for me. What do I do if I make an account on my phone, happen to not log in to it on anything other than my phone, but then my phone breaks and I need to log in on my desktop? Is that just … not supported anymore?

                                                            That’s what WebAuthn recovery codes are for. Store them somewhere safe and offline.

                                                            And why should I think Apple’s implementation will even allow me to authorize my Linux machine?

                                                            I have no idea what this even means. Apple, Google, Microsoft, and Mozilla implement the client portion of WebAuthn. They have no control over which other devices any WebAuthn provider lets you use, just as a recommendation to use a strong password in Safari has no impact if you reset the password in Chrome or Edge.

                                                              You seem to think WebAuthn is something completely different to what it actually is. I can’t really help unless you explain what you think it is so that I can understand how you get to the claims you’re making.

                                                            I just know my (rarely used) Windows install doesn’t let me upgrade to 11 due to missing TPM. Also, older hardware is a thing.

                                                            I believe Windows 11 requires a TPM 2.0 implementation. TPM 1.x is fine for these uses and is 14 years old at this point.

                                                            I also don’t have a need for LUKS.

                                                            You place a lot of faith in your physical security.

                                                            1. 2

                                                              I have tried to read up on WebAuthn actually, and have never found out how they intend transfer of identities between devices to work. It leads me to believe that you’re either supposed to have one device (the phone) be the device which authenticates (similar to how 2FA systems work today), or sync keys using some mechanism that’s not standardised. But it sounds like you believe there’s another mechanism; can you explain or link to some documentation on how that’s supposed to work?

                                                              1. 2

                                                                Nothing stops you from having multiple keys with a single account, IIRC. You could have one device initially authorize you on another system and then make a new key for the other device.

                                                                1. 2

                                                                  You haven’t read that because it is out of scope for WebAuthn. WebAuthn provides a mechanism for permitting a remote device to attest to its user’s identity. It is up to the implementer of the server-side part to provide a mechanism (beyond recovery codes) to add a second device. The normal way of doing this is to use one device to enrol another. For example, you try to log in on your computer, it shows a 2-3 digit number, then you log in on your phone and approve, now both devices are authorised.

                                                                  If your objection to WebAuthn is that the higher-level flows that people build on top of it have problems then you should direct your criticisms there.

                                                              2. 1

                                                                  Windows 11 requires TPM 2.0, so it’s possible to have a TPM without W11 supporting it.

                                                    2. 2

                                                      Honestly, I’m not sure it would be that usable by modern standards. I don’t think anything other than RSA was widely supported, and then limited to 2048 bit key sizes, etc. It would need a lot of modernisation. I wonder if the web crypto API can provide any suitable alternatives? Not sure if it has facilities for local key storage.

                                                      1. 2

                                                                  The limit to RSA and 2048-bit key sizes is just an implementation limit; of course this should be improved. The charming part of this is that the website doesn’t have to interact with the key. Yes, I know there are some issues with TLS client auth, but with auth optional this can be improved.

                                                    1. 4

                                                      I have no idea why the FSF has the hots for JPEG XL and at this point, I’m afraid to ask.

                                                      I’m a semi-avid photographer and I don’t know how to produce a JPEG XL image. I think there’s both a demand and a supply problem for the format.

                                                      1. 11

                                                        Don’t know about the FSF, but I have the hots for JPEG-XL because $WORK has over 1 TB/day egress of user-supplied images in JPEG format. Being able to shrink that by 30% with no loss of quality would be a huge win!
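
                                                        For what it’s worth, with libjxl’s reference tools that recompression is a one-liner; as I understand it, JPEG input is transcoded losslessly by default and the original bytes can be reconstructed (file names made up):

                                                        ```sh
                                                        # Losslessly recompress an existing JPEG (typically ~20-30% smaller):
                                                        cjxl photo.jpg photo.jxl

                                                        # Reconstruct the bit-identical original JPEG when needed:
                                                        djxl photo.jxl photo.jpg
                                                        ```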

                                                        1. 3

                                                          It suddenly occurs to me that a cloud provider who charges people for egress would have no incentive to fix ~~problems~~ revenue sources like that.

                                                          1. 2

                                                            That might be the case, but while the company @danielrheath works for deals with image data, I have to imagine that for a generic cloud provider video bandwidth dwarfs image bandwidth (if we’re talking media). The big wins are in making video encoding more effective.

                                                          2. 3

                                                            If you control the receiver in any form, Lepton might be of interest.

                                                          3. 10

                                                            I’m a semi-avid photographer and I don’t know how to produce a JPEG XL image.

                                                            This is because the format is still brand new, and also because of the lack of adoption by the most popular application for consuming the web, the largest distribution network for images. This is why JPEG XL’s removal from Chrome seems suspiciously premature: the bitstream and file format for JPEG XL were finalized last year, but the reference implementation isn’t stable yet.

                                                            As for why people are excited about it, there are a lot of reasons which point to companies with big investments in image delivery being able to save significantly on bandwidth costs without losing quality, as @danielrheath pointed out. I’m one of the few small-time users with a legitimate interest in JPEG XL that goes beyond the comparatively marginal benefit of saving some of my personal disk space, because it offers a uniquely good set of characteristics for my use-cases as a photographer. In increasing order of esotericism:

                                                            • I usually edit and save my photos these days assuming they’ll be viewed on a wide gamut display with at least Display P3 capability. Regular old JPEG can do this, but the colour space information is stuffed into EXIF data rather than being intrinsic to the file format, and zealous EXIF stripping programs on various websites sometimes delete it, so the colours in the picture look wrong. Also, regular old JPEG is limited to 8 bits per channel, which starts to get quite thin as the gamut increases. Admittedly, AVIF and WebP are ‘good enough’ at this.

                                                            • I work with scans of medium format and large format film, in the range of tens of thousands of pixels per image dimension. Regular old JPEG is limited to 65k pixels per side, and newer formats – especially AVIF, based on a format designed for video with much smaller dimensions – are actually worse than old-school JPEG at this, and can only ‘cheat’ very large images by gluing them together side by side, so you lose the advantages of being able to share compression information between ‘panels’ of an image, an effect that gets worse as pictures get larger. There may also be visible artefacts between panels because the transition can’t be smoothed over by the compression algorithm effectively. JPEG XL natively supports up to a thousand million pixels per image side.

                                                            • I also sometimes work with photos with transparency. Among older formats, JPEG can’t do this at all, and PNG only offers lossless compression which isn’t intended for photographic data and thus makes absolutely huge files when used for it. Again, AVIF and WebP are probably ‘good enough’ at this, but for some applications they suck because afaict you can’t have transparency in a CMYK image for them, so if I’m preparing an image with transparency for press, there’s no format for that.

                                                            1. 4

                                                              Thanks a lot for taking the time to list these pros for JPEG XL!

                                                              It’s interesting that according to the wiki page for the format, both Adobe and Flickr (after the SmugMug purchase?) were on the JPEG XL train.

                                                              1. 2

                                                                Thank you for sharing. If I’m understanding correctly, the removal of JPEG XL from Chrome wouldn’t really affect most of your personal use-cases then, right? For instance, I can’t imagine you’d be inlining any billion-by-billion pixel images on a web page.

                                                                1. 1

                                                                  Your points are all valid yet entirely pointless on the web (CMYK? billion pixels in each direction?). JPEG-XL could make inroads in print, design, arts etc without thinking about Chrome even once. Yet it seems that there isn’t much enthusiasm in supporting it and its unique features in that space. How is that Chrome’s fault?

                                                                  That seems more of a self-imposed hegemony: “We only look at the format for any use case once Chrome supports it. Bad Google!” In my opinion that’s a weird notion of “free as in freedom”.

                                                                  1. 5

                                                                    The question was why people are interested in JPEG XL, especially from the perspective of a ‘semi-avid photographer’. I acknowledged explicitly that I benefit unusually extensively from JPEG XL compared to other individual photographers, and pointed to another answer in this thread that makes a more compelling case that’s very relevant to the web. I would also say that each of my points contains something less unusual that actually is relevant for the web (e.g. it’s not that uncommon for images larger than 4K, the maximum size for one panel of AVIF data, to be posted on the web).

                                                                    Also, JPEG XL is actually beginning to see adoption outside of web browsers for these use cases. But that wasn’t the question I was answering here.

                                                                    1. 2

                                                                      Yet it seems that there isn’t much enthusiasm in supporting it and its unique features in that space. How is that Chrome’s fault?

                                                                      Because that’s Google’s line, not reality. Read the issue thread and note how many voices there are (from major companies) speaking against the decision to remove it.

                                                                      1. 3

                                                                        “to remove it” - an experimental feature behind a flag. Nobody seriously used that capability, ever. How many of those opposing the removal in the issue thread are even aware of that instead of just joining the choir?

                                                                        So where’s JPEG-XL support in Edge, Firefox, Safari? Where’s the “10% of the web already use a JPEG-XL decoder polyfill until Chrome finally gets its act together and offers native support” article?

                                                                        This entire debate is in a weird spot between “Google (Chrome) force developments down our throats” (when something new does appear, such as WebUSB, WebMIDI, WebGPU, …) and “We have to wait until Chrome does it before everybody else can follow”.

                                                                        1. 1

                                                                          That’s kind of the point. It wasn’t even given a chance to prove itself. This was very premature and came out of nowhere just as JPEG-XL was starting to gain attention. Why is it so hard to understand why people are frustrated by this? I guess I just don’t understand why you’re against it, or feel the need to suggest that people against the removal are just ‘joining the choir’. Maybe people do really care?

                                                                          I don’t know what this has to do with any other web technology. I would take JPEG-XL over any of those (not that that’s really relevant).

                                                                          1. 2

                                                                            Right, JPEG-XL hasn’t got a chance to “prove itself” by becoming part of the standard feature set of Chrome because it was never put in front of ordinary users (it’s always been behind a flag).

                                                                            Every other time that Chrome unilaterally decides to put something in front of ordinary users, people claim that this is just an example of Chrome’s web hegemony. That would have happened with JPEG-XL, too.

                                                                            What should happen for healthy web standard evolution:

                                                                            1. Polyfills implement a strongly sought-after feature in a portable way for browsers, so it’s not up to browser vendors to decide what ends up a part of the web platform and what doesn’t.
                                                                            2. These polyfills become successful so much that they’re impossible to ignore and that browser vendors are incentivized to implement the feature natively for efficiency.
                                                                            3. Multiple browser vendors implement the feature.
                                                                            4. Browser vendors who don’t follow are called out for blocking progress.

                                                                            For some odd reason, JPEG-XL advocacy starts at step 4, simultaneously arguing that Chrome shouldn’t be the arbiter of what becomes a web standard and what doesn’t, and not doing any of the other work. (edit to add: meanwhile it ignores all the other actors on the web who don’t support JPEG-XL, either.)

                                                                            To me that looks like JPEG-XL advocates were expecting Chrome to implement the format, take the heat for “forcing more stuff on people” (as always happens when Chrome unilaterally adds stuff), then have everybody else follow. That’s a comfortable strategy if it works, but I don’t see why JPEG-XL should have a right to this kind of short cut. And I don’t see how “Chrome didn’t employ its hegemony to serve our purpose” is a sign of “how the web works under browser hegemony.”

                                                                            So: where are the Polyfills? Where are the Adobes, Flickrs and everybody using them, with blog posts documenting their traffic savings and newly enabled huge image features, and that “x% of the web runs on JPEG-XL”?

                                                                            1. 1

                                                                              And, to keep that line of thought a bit separate, as its own post:

                                                                              I don’t so much mind JPEG XL folks doing that. That means that they’re working in a world view that presumes such a web hegemony by Chrome. I disagree but given that it’s xPEG, they probably can’t think any other way.

                                                                              The FSF, however, should be well aware how stuff like this ought to work…

                                                                              1. 1

                                                                                Okay, so it’s more about web standards being fast tracked without proper procedure? I can definitely appreciate that.

                                                                                The way Google has gone about this, though, is not just removing an existing experimental flag for a feature that actually had a good chance of getting in had it been given time, but doing so in a way that made it sound like the decision was final and there would be no way back from it, while providing such a weak/misleading explanation that it seemed pretty obvious there must be an ulterior motive. Especially when they didn’t even acknowledge the flood of companies showing an interest and clearly proving them wrong. If even those companies can’t convince Google to think twice, then clearly Google isn’t interested in having an honest discussion.

                                                                                Personally I don’t mind how long it takes for the web to adopt JPEG-XL. I would have been all too happy for it to have gone through the process you describe (although I’m not sure how realistic it is that the major players would use a polyfill for something like this). What’s frustrating is that the way they did handle it may have effectively killed any chance it had prematurely rather than allowing it to gain traction slowly and naturally.

                                                                                Edit: And I really want to be wrong about that. I hope that there is a way back from this.

                                                                                1. 2

                                                                                  so it’s more about web standards being fast tracked without proper procedure?

                                                                                  To some degree, but not only. The other aspect is that all those organizations chiming in on the issue tracker talk the talk but don’t walk the walk. There are numerous options to demonstrate that they care about a JPEG-XL deployment and that would kickstart JPEG-XL on the web. I haven’t seen much in that space (see above for ideas about what they could do that nobody working on Chrome could block).

                                                                                  What I’ve seen are complaints that “Mighty Chrome is taking our chance at becoming part of the web platform!” and that seems more like a pressure campaign that Somebody Else (namely: the Chrome folks) should invest in JPEG-XL and bear any risks (whatever they might be, e.g. patent encumbrance issues) without doing anything themselves.

                                                                                  And that’s a pretty clear sign to me that nobody actually cares - it’s not just “Google’s line”.

                                                                                  • Adobe (back then, owner of Flash, the scourge of the web) and Serif Labs (who also chimed in on the tracker) could provide a maintained, freely licensed polyfill for its customers to use to benefit from JXL on the web, driving adoption.
                                                                                  • Facebook/Instagram don’t need to wait to ship JXL data to give “The benefit of smaller file size and/or higher quality can be a great benefit to our users.” - again: ship a polyfill (with a long cache duration it’s worth it sending it over, to then benefit from the space savings)
                                                                                  • In case they’re all too inept to implement JXL themselves (they’re not), they could go for https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/jxl (Apache 2.0 licensed), which is a Chrome-initiated project, to get JXL rolling.

                                                                                  Again: Talk is cheap. Chrome’s JXL support was barely beyond “let’s see if it compiles”, given that it was behind a flag. If all these parties who “care so much” truly care and make JXL-on-the-web a thing, I’m quite sure that Chrome will change course and reintroduce that code.

                                                                                  And unlike with IE6, it won’t take 15 years between Chrome finally introducing native support and everybody being able to take out their polyfills because Chrome has an actual planned update process. My guess is that Safari would be the last hold-out, as is usual.

                                                                    2. 5

                                                                      If you work with RAWs and darktable (free software and multi platform), you can export your pics into JXL files (and choose parameters) ;)

                                                                      It’s a bit slow though (compared to JPEG), but still faster than exporting AVIF, which is also supported.

                                                                      As for me, the standard image viewer included with Fedora supports JXL too.
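
                                                                    And if you’d rather skip the GUI, libjxl’s cjxl encoder works directly on exported PNGs (a sketch; file names and the -d setting are just examples):

                                                                    ```sh
                                                                    # -d is Butteraugli distance: 0 = lossless, ~1 = visually lossless.
                                                                    cjxl scan.png scan.jxl -d 1

                                                                    # Fully lossless, e.g. for archival masters:
                                                                    cjxl scan.png scan-lossless.jxl -d 0
                                                                    ```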

                                                                      1. 2

                                                                        I work with RAWs but I don’t use darktable. I used to use Lightroom but am trying out alternatives.

                                                                      Anyway I publish to Flickr and AFAIK they don’t support the format. This goes back to the original point I guess: browsers not supporting it.

                                                                    1. 16

                                                                      Things are never perfect, but even with some of the issues I ran into I’m very happy I switched. It’s hard to describe, but things feel more solid.

                                                                      Holy crap. You have to restart whatever App Store clone is fancy this season in order to use it more than once, and one of the most widely-used password managers crashes (let me guess, the crash involves Gnome’s flavour of Wayland, GTK, or both?) and it feels more solid? Are you sure you weren’t using Windows Me with a weird WindowBlinds theme before!?

                                                                      I made the switch the other way round (Linux -> macOS) two years ago. Did I already develop Apple Stockholm syndrome? Am I crazy? Is that kind of stuff normal?

                                                                      Edit: I mean please don’t get me started on macOS Ventura. I’m not trying to scoff at Linux, I’m asking if we are doomed!

                                                                      2+ years later I’m still SSH-ing into Linux boxes for a lot of development. Is this going to be my next ten years, choosing between a) using the latest breakthrough in silicon design as a glorified VT-220 strapped to an iPad or b) perpetually reliving 1999 desktop nightmares, except without Quake III Arena to make it all worth it?

                                                                      1. 13

                                                                        Sometimes I’m beginning to wonder if we neckbeards just never run into these problems because we set our ways 20 years ago and never changed. On my Ubuntu work machine some sort of graphical apt pops up from time to time (couldn’t be bothered to investigate how to turn it off), but I run my updates regularly via CLI apt-get. There’s no regularly crashing app besides Zoom, and I don’t hold that against any Linux distro.

                                                                        1. 5

                                                                          That’s kind of what I’m leaning towards, too. Gnome Software isn’t the first attempt to bolt a GUI on top of a package manager and/or an upstream software source; people have been trying to do that since the early 00s (possibly earlier, too, but I wasn’t running Linux then). At some point, after enough thrashed installs, I just gave up on them.

                                                                        2. 5

                                                                          I’ve always avoided Gnome (and PulseAudio and software like that) like the plague and it’s been the Year of Linux on the Desktop for 20+ years now and it’s generally been rock solid for me.

                                                                          At the moment I’m running Guix on an MSI Gaming laptop from two years ago with an RTX3080 and I love it. Running Steam, Lutris (Diablo 4 beta last weekend), Stable Diffusion and no crashes. 50+ day uptimes only to reboot because Guix kinda expects it.

                                                                          And of course it’s an ideal dev machine. No Docker shenanigans like on Windows and OSX.

                                                                          1. 3

                                                                            Your response captures my own feelings on reading this.

                                                                            They jumped from the frying pan into the fire, and they’re happy about the change of scenery. They point out that some extremities are on fire, but hey, it’s different.

                                                                            It’s very odd indeed, but it’s probably part of what life is like if all this stuff is just a mystery to you and you use the built-in tools without question.

                                                                            1. 3

                                                                              I haven’t had to restart a Linux system to fix the package manager in a couple of decades, across 3 distros on which I regularly administrate 15-50 systems (depending on the year). This includes systems updated daily/weekly and left running for over a year. Lately most systems get rebooted whenever a new kernel package comes along, but never any other time. Maybe the problem here is you shouldn’t be using “whatever fancy GUI prototype is in vogue this season” and should just use the default system CLI package manager.
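
                                                                              That is, the boring path (Debian/Ubuntu flavour shown; commands are the stock ones):

                                                                              ```sh
                                                                              sudo apt-get update    # refresh package metadata
                                                                              sudo apt-get upgrade   # apply pending updates
                                                                              # Fedora/RHEL equivalent: sudo dnf upgrade
                                                                              ```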

                                                                              1. 1

                                                                                Why in the world does everyone think I’m talking about myself here and not about the original post!?

                                                                                Edit: AH! I think I get it. The “you” there is not the generic “you”: the link to that blog post was posted by the post’s author. I’m not using Gnome Software, I’m not even using a Linux desktop anymore. They are :-).

                                                                              2. 3

                                                                                You have to restart whatever App Store clone is fancy this season

                                                                                Why do you use the App Store at all? Which Linux distro are you talking about exactly? Does it not provide a CLI package manager like apt-get or yum or something?

                                                                                1. 3

                                                                                I don’t use the software app, but I might if it worked. Package names are often undiscoverable, and for whatever reason I forget if it’s dnf list, dnf search, or some other command; the GUI has a search window, nice and discoverable.
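
                                                                                (For the record, both commands exist and do slightly different things; package name just an example:)

                                                                                ```sh
                                                                                dnf search keepass      # keyword search over package names and summaries
                                                                                dnf list 'keepass*'     # list installed/available packages matching a glob
                                                                                ```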

                                                                                Beyond that, if it’s crap, why do they ship the damn thing? So many Linux users proudly explain that they know better than to stand near the spike pit. I just want software that doesn’t have the spike pit.

                                                                                  1. 3

                                                                                    Why do you use the App Store at all?

                                                                                    I don’t! In my experience, the only App Store-like thing that ever came close to working on Linux was Synaptic!

                                                                                  2. 2

                                                                                  Especially since KeePassXC is one of the most robust applications for me, across 3 machines and 2 operating systems. I don’t have any problems of that sort using Kubuntu as my Linux daily driver. Then again, they don’t go all-in on Wayland, and it’s not Gnome Wayland. Even though KDE has its own issues.

                                                                                    1. 1

                                                                                    That’s kind of what I’m surprised at, too. I’ve used it everywhere – I used it on Linux, I now use it on both macOS and Windows. It’s one of the applications I’ve never seen crash. I haven’t used it under a Wayland compositor, mind you, mostly because those tend to crash before I need to log in anywhere, hence my suspicion this is Gnome-related somehow…

                                                                                      1. 2

                                                                                      Randomly looked at their issues again. And snap seems to be doing its job (TM).

                                                                                        1. 1

                                                                                          Oh, wow, okay. I’m sorry I blamed Gnome Shell or GTK for that – they caused me the most headaches way back but I should’ve obviously realised there are worse things out there.

                                                                                        I’m not even sure Snap is the worst thing here? I’ve heard – but the emphasis is on “heard”, I haven’t had to know in a while and I’m just ecstatic about it – that some KDE-related software can be hard to package due to the different lifecycles of KDE Frameworks, Apps, and Plasma. It might be a case of the folks doing the frameworks packaging getting stuck between a rock (non-KDE applications that nonetheless use kf5 & friends) and a hard place (KDE apps and Plasma).

                                                                                          KDE 3.2 nostalgia intensifies

                                                                                    2. 1

                                                                                      I ran Fedora for 6 months and experienced this level of problems, so I switched to Mint, and it has been much better. I previously tried OpenSUSE Tumbleweed as well, didn’t like it, and concluded that running a Linux with extremely fresh packages is not for me, I want stability and “it just works”. Mint is stable and boring.

                                                                                      1. 1

                                                                                        You have to restart whatever App Store clone is fancy this season

                                                                                      gnome-software is over 10 years old at this point, which is probably also why it has so many issues. It’s not the norm, no. The standard for most package-management GUIs has been fairly responsive, with batch installs and uninstalls, etc. (e.g. Synaptic for apt).

                                                                                        KPXC doesn’t touch GTK at all, and runs stable under Wayland and under Gnome, at least in my case. (Fedora Silverblue with Flatpaks).

                                                                                        1. 13

                                                                                        It’s not the norm, no.

                                                                                          I have to disagree here.

                                                                                          I worked for Red Hat. I was an insider. This kind of PITA is completely 100% normal for RH OSes, but people who live in that world consider it normal and just part of life.

                                                                                          I recently wrote an article about the experience – as a Linux and Mac person – of using an Arm laptop under Windows:

                                                                                          https://www.theregister.com/2023/03/21/lenovo_thinkpad_x13s_the_stealth/

                                                                                          I commented, at length, on the horrors of updating Windows, and said that habitual Windows users wouldn’t notice this stuff.

                                                                                          Sure enough, one commenter goes “well it’s not like this on Intel Windows! It’s just you! Or it’s just Arm! It’s not like that!”

                                                                                          It is EXACTLY like that but if you don’t know anything else, it’s normal.

                                                                                          You say “GNOME software is over 10 years old” like that’s an excuse. It is not an excuse. It is the opposite of an excuse. At ten days old this sort of thing should not happen.

                                                                                        But because GNOME 3.x is a raging dumpster fire of an environment, lashed together in Javascript, and built on a central design principle of “look how others do this and do it differently”, GNOME users have forgotten what a stable, solid, reliable desktop even feels like. They feel that something a decade old will naturally barely work any more, because the foundations have been ripped out and rebuilt half a dozen times since then, the UI guidelines replaced totally 3 times, and the API changed twice a year as if that were normal.

                                                                                          It is not normal. This is not right. This is not OK.

                                                                                          Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s. There is quite simply no legitimate excuse for this stuff not being perfect by now, implemented in something rock-solid, running lightning fast in native code, with any bugs discovered and fixed decades ago.

                                                                                          Cross-platform packaging was solved in the 1980s. Cross-platform native binaries were a thing a third of a century ago. “Oh but this is a new field and we are learning as we go” is not an excuse.

                                                                                          As Douglas Adams put it:

                                                                                          “Well, you’re obviously being totally naive of course,” said the girl, “When you’ve been in marketing as long as I have you’ll know that before any new product can be developed it has to be properly researched. We’ve got to find out what people want from fire, how they relate to it, what sort of image it has for them.”

                                                                                          The crowd were tense. They were expecting something wonderful from Ford.

                                                                                          “Stick it up your nose,” he said.

                                                                                          “Which is precisely the sort of thing we need to know,” insisted the girl, “Do people want fire that can be applied nasally?”

                                                                                          This is, in a word, such an utterly bogus and ludicrous response that anyone should be ashamed to offer it.

                                                                                          “It’s nearly a decade old so of course it doesn’t work” is risible.

                                                                                          The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

                                                                                          1. 6

                                                                                            Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s

                                                                                            I’m sorry, but modern requirements have changed this. Sometimes the changes are so hard to fit into the old codebase that people started rewriting it: HiDPI, mixed DPI (fractional scaling), HDR support, screen readers, touch and its UI-change requirements, security (hello X11, admin popups…), direct rendering vs “throw some buttons on there”, screen recording. Sure, it’s no excuse for a buggy mess, but it’s not like you could just throw Windows 2000 (or similar) on a current system and call it a day. You’d have a hard time getting any of the modern requirements I mentioned integrated.

                                                                                            1. 4

                                                                                              I don’t really see how that invalidates any part of my comment, TBH.

                                                                                              Desktops are not unique to Linux. Apple macOS has a “desktop”. They call it the “Finder”, because in around 2000 the NeXTstep desktop was rewritten to resemble the classic MacOS desktop, which was actually called the Finder.

                                                                                              But the NeXTstep desktop, which used to be called Workspace IIRC, has been around since 1989.

                                                                                              I am using it right now. I have two 27” monitors. One’s a built-in Retina display, which at 5120x2880 is quite HiDPI, and the other is an older Thunderbolt display, which at 2560x1440 is higher DPI than most of my other screens. Everything looks identical on both displays, smooth and crisp, and if I drag a window from one to the other, both halves of the window stay the same size even while it’s straddling the two displays.

                                                                                              This is 34 year old code. Over a third of a century. 35 if you count the first NeXT public demo of version 0.8 in 1988.

                                                                                              Windows has a desktop, called Explorer. It is basically the same one that shipped on Windows 95. It’s 28. Again, Windows 10 and 11, both currently shipping and maintained, can both handle this with aplomb. Took ’em a while to catch up to macOS but they got there.

                                                                                              If GNOME can’t do this properly and well, if this means constant rewrites and functionality being dropped and then reimplemented, that means the GNOME team are doing software development wrong. KDE is a year older than GNOME, and I tried it on a HiDPI display this month, and it worked fine.

                                                                                              1. 6

                                                                                                I don’t think it’s fair to include pre-OpenStep versions of NeXTSTEP, because the addition of the Foundation Kit was a pretty fundamental rewrite. Most of the NX GUI classes took raw C strings in a bunch of places. So most of this code is really only 28 years old.

                                                                                                To @proctrap’s point, there have been some fundamental changes. OpenStep had resolution independence through it’s PostScript roots and adding screen reader support was a fairly incremental change (just flagging some info that was already there), but CoreAnimation was a moderately large shift in rendering model and is essential for a modern GUI to efficiently use the GPU. OPENSTEP tried very hard to avoid redrawing. When you scrolled, it would copy pixels around and then redraw. It traded this a lot against memory overhead. It used expose events to draw only the area that had been exposed, so nothing needed to keep copies of bits of windows that were hidden. When you dragged a window, you got a bunch of events to draw the new bits (it actually asked for a bit more to be drawn that was exposed so that you didn’t get one event per pixel). With CoreAnimation’s layer model, each view can render to a texture and these live on the GPU. GPUs have a practically infinite amount of RAM in comparison to rather requirements of a 2D UI (remember, OPENSTEP ran on machines with 8 MiB of RAM, including any buffering for display) and so you avoid any redraw events for expose, you only need to redraw views whose contents have changed or which have been resized. For things with simple animation cycles (progress indicators, glowing buttons, whatever), the images are just cycled on the GPU baby uploading different textures.

                                                                                                Text rendering is where this has the biggest impact. On OPENSTEP, each glyph was rasterised on the CPU directly every time it was drawn. On OS X (since around 10.3ish), each glyph in a font that’s used is rendered once to a cache on the GPU and composited there. This resulted in a massive drop in CPU consumption (it’s why you could smooth-scroll on a 300 MHz Mac), which translated to lower power consumption on mobile (compositing on the GPU is very cheap; it’s designed to composite hundreds of millions of triangles, and the thousands that you need for the GUI barely wake it up).

                                                                                                That said, Apple demonstrated that you can retrofit most of these to existing APIs without problems. A lot of software written for OpenStep can be built against Cocoa with some deprecation warnings but no changes. Updating it is usually fairly painless (the biggest problem is that the nib format changed and so UIs need redrawing; Xcode can’t import NeXT-era ones).

                                                                                                If GNUstep had gained the traction that GTK and Qt managed, the *NIX desktop would have been a much more pleasant place.

                                                                                                1. 1

                                                                                                  I defer on the details here, inasmuch as I am confident you’ve forgotten more about NeXTstep and its kin than I ever knew in my life.

                                                                                                  But as you say: old stuff still works. Yes, it’s been rewritten and extended substantially, but it still works, as you say, better than ever, while every 6 months or so there are breaking changes in GNOME and KDE, as per the messages about KeePassX upthread from here.

                                                                                                  It is not OK that they still can’t get this stuff right.

                                                                                                  I don’t know where to point the finger. Whenever I even try, big names spring out of the woodwork to deny everything and then disappear again.

                                                                                                  I said on the Reg that WSL is a remote cousin of the NT POSIX personality. Some senior MS exec appears out of nowhere to post to say that, no, WSL is a side-offshoot of Android app support. They’re adamant and angry.

                                                                                                  I request citations. (It’s my job.)

                                                                                                  Suddenly an even more senior MS exec appears with tons of links that aren’t Googleable anywhere to show that WSL1 is the Android runtime with the Android stuff switched out.

                                                                                                  What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                                                                                                  It is documented history that MS threatened to sue Red Hat, SUSE, Canonical and others over Linux desktops infringing MS patents on Win95. They did. MS invented Win95 out of whole cloth. I watched, I ran the betas, I was there. It’s true.

                                                                                                  So SUSE signed and the KDE juggernaut trundled along without substantial changes.

                                                                                                  RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                                                                                                  Canonical, banished, does Unity. There’s a plan: run it on phones and tablets. It’s a good plan. It’s a good desktop. I still use it.

                                                                                                  I idly blog about this, someone sticks it on HN, and suddenly Miguel de Icaza pops up to deny everything. Some former head of desktops at Canonical no one’s ever heard of pops up to deny everything. No citations, no links, no evidence, and everyone accepts it, because EVERYONE knows that MS <3 LINUX!

                                                                                                  It’s Wheeler’s “We can solve any problem by introducing an extra level of indirection,” only now, we can solve any accusation of fundamental incompetence by introducing an extra level of lies, FUD and BS.

                                                                                                  1. 2

                                                                                                    It is not OK that they still can’t get this stuff right.

                                                                                                    Completely agreed.

                                                                                                    Suddenly even more senior MS exec appears with tons of links that isn’t Googleable anywhere to show that WSL1 is Android runtime with the Android stuff switched out.

                                                                                                    The latest version of the Windows Kernel Internals book has more details on this. The short version is that the POSIX and OS/2 personalities, like the Win32 one, share a lot of code for things like loading PE/COFF binaries and interface with the kernel via very similar mechanisms. WSL1 used a hook that was originally added for Drawbridge called ‘picoprocesses’. The various personalities are all independent layers that provide different APIs to the same underlying functionality, but they’re also completely isolated. One of the reasons that the original NT POSIX personality was so useless was that there was no way of talking to the GUI and very limited IPC, so you couldn’t usefully run POSIX things on Windows unless you ran only POSIX things.

                                                                                                    In contrast, picoprocesses provided a single hook that allowed you to create a(n almost) empty process and give it a custom system call table. This is closer to the FreeBSD ABI layer than the NT personality layer, but with the weird limitation that you can have only one. The goal for WSL wasn’t POSIX compatibility, it was Linux binary compatibility. This meant that it had to implement exactly the system call numbers of Linux and exactly the Linux flavour of the various APIs. This was quite a different motivation. The POSIX personality existed because the US government required POSIX support as a feature checkbox item, but no one was ever expected to use it. The support in WSL originally existed to allow Windows Phone to run Android apps and was shipped on the desktop because Linux (specifically, not POSIX, *BSD, or *NIX) had basically won as the server OS and Microsoft wanted people to deploy Linux things in Azure, and that’s an easier sell if they’re running Windows on the client. Unfortunately, 100% Linux compatibility is almost impossible for anything that isn’t Linux and so WSL set expectations too high and people complained when things didn’t work (especially Docker, which depends on some truly horrific things on Linux).

                                                                                                    They’re surprisingly different in technology. The Win32 layer has more code in common with the POSIX personality than WSL does.

                                                                                                    What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                                                                                                    Modifying the old POSIX code into a Linux ABI layer would have been very hard. Remember, this was a POSIX layer that still used PE/COFF binaries, used DLLs injected by the kernel for exposing a system-call interface, and so on. It also hadn’t been updated for recent versions of Windows and depended on a lot of things that had been refactored or removed.

                                                                                                    The thing that made me sad was that they didn’t just embed a FreeBSD kernel in NT and use the FreeBSD Linux ABI layer. The license would have permitted it and they’d have benefitted from starting with something that was about as far along as WSL ever got and had other contributors.

                                                                                                    RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                                                                                                    I only vaguely paid attention to that drama, but from the perspective of someone trying to create a GNUstep-based DE at the time, it looked more like Mac-envy than MS-fear: GNOME 3 and Unity both seemed like people trying to copy OS X without understanding what it was that made OS X pleasurable to use and without any of the underlying technology necessary to be able to implement it.

                                                                                                    I idly blog about this, someone sticks it on HN and suddenly Miguel de Icaza pops up to deny everything.

                                                                                                    I was really surprised at the internal reactions when MdI joined Microsoft. The attitude inside the company was that he’s a great leader in the Linux desktop world and it’s fantastic that he’s now helping Microsoft make the best Linux environments and it shows how perception of Microsoft has changed. My recollection of his perception from the F/OSS desktop community (before I gave up, ran OS X, and stopped caring) was that he was the guy that never met a bad MS technology that he didn’t like and tried to force GNOME to copy everything MS did, no matter how much of a bad idea it was. The rumour was that he’d applied to MS and been rejected and so made it his mission to create his own MS-like ecosystem that he could work on.

                                                                                                    EVERYONE knows that MS <3 LINUX!

                                                                                                    Pragmatically, MS knows that Linux brings in huge amounts of money to Azure, and that Linux (Android) brings in a huge amount of money to the Office division. And MS (like any other trillion-dollar company) loves revenue. Unfortunately, in spite of being one of the largest contributors to open source, only a few people in the company actually understand open source. They think of open source as being an ecosystem of products rather than a source of disruptive technologies.

                                                                                                    P.S. When are you going to write an article about CHERIoT for El Reg?

                                                                                            2. 4

                                                                                              ‘The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.’

                                                                                              This. I’m sad to say that there are still some bugs in XFCE, but none that I encounter on a daily basis and generally fewer in each release. I haven’t understood why people think GNOME is a good idea since their 2.x releases.

                                                                                              I’ve been waiting for Wayland to mature and I’m still not really seeing signs of it.

                                                                                              Every Debian upgrade from stable to new stable is smoother than the last one, modulo specific breaking changes which are (a) usually well documented, (b) aren’t automatable because they require policy choices, and (c) don’t apply to new installs at all, which are also smoother and faster than they used to be.

                                                                                              1. 2

                                                                                                why people think GNOME is a good idea

                                                                                                I would actually recommend it for some people, since it looks pretty good (unlike XFCE), has some good defaults, and doesn’t come with the number of options that KDE has. (And I haven’t had any breakage on LTS Ubuntu with Gnome desktops.) I prefer KDE, but I wish I had recommended Gnome to some people in my family. (I gave them KDE back then, as it more closely resembles the Windows 7 start menu.) But you don’t change the desktop of someone who is over 80 years old. Even if their KDE usage ends up spawning 4 virtual desktops, with 10 Firefox windows, 2 taskbars and 2 start menus. Apparently they like it that way.

                                                                                                1. 3

                                                                                                  GNOME is pretty. Its graphics design is second-to-none in the Linux world, and it pretty much always has been, since the Red Hat Linux era.

                                                                                                  It’s therefore even more of a shame that, to me, it’s an unusable nightmare of a desktop environment.

                                                                                                  KDE, which is boldly redefining “clunky” and “overcomplicated”, is at least minimally usable, but it is, IMHO, fugly and it has been since KDE 2.0.0. And I wrote an article on how to download, compile and install KDE 2.0.0. Can’t remember for whom now; long time ago.

                                                                                                  (When RH applied the RHL 9 Bluecurve theme to KDE, I have never ever seen KDE look so pretty, before or since.)

                                                                                                  Xfce is plain, but it’s not ugly. You can theme it up the wazoo if you want. I don’t want. I leave it alone. But that pales into utter insignificance because it works.

                                                                                                2. 2

                                                                                                  Thank you!

                                                                                                  Sometimes I feel like it’s just me. I really do appreciate this feedback.

                                                                                                3. 1

                                                                                                  It’s not the norm, no.

                                                                                                  I have to disagree here. […] This kind of PITA is completely 100% normal for RH OSes […] It is not normal. […]

                                                                                                  Confusing structure.

                                                                                                  You wouldn’t use synaptic, which I mentioned as an example of something more normal, on an RH OS.

                                                                                                  The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

                                                                                                  It clearly wouldn’t be the correct answer because that contains a lie?

                                                                                                  1. 4

                                                                                                    I do not think that you understood what I was saying here. I am making extensive use of irony and sarcasm in order to try to make a point.

                                                                                                    Confusing structure.

                                                                                                    I am saying that problems like those described are normal for RH products and people using the RH software ecosystem.

                                                                                                    Then I continue to say that these things are not normal for the rest of the Linux world.

                                                                                                    In other words, my point is that these things are normal for RH, and they are not normal for Linux as a whole.

                                                                                                    In my direct personal experience as a former RH employee, a lot of RH people are not aware of the greater Linux world and that other distros and other communities are not the same, and that often, things are better in the wider Linux world.

                                                                                                    I am sorry that this was not clear. It seemed clear to me when I wrote it.

                                                                                                    It clearly wouldn’t be the correct answer because that contains a lie?

                                                                                                    Again, you are missing the point here.

                                                                                                    I am saying “the correct answer,” as in, this is how things should be.

                                                                                                    In other words, I am saying that in a more normal, sane, healthy software ecosystem, the correct answer ought to be that after over a decade of biannual releases, which means over 20 major versions, something should have improved and be better than it ever was.

                                                                                                    In a normal healthy project, after 12 years and 44 versions, a component should be completely debugged, totally stable, and then have had 5-10 years to do fine-tuning and performance optimisation.

                                                                                                    (I will also note that 2 major releases per year for a decade = 20 major releases. For a healthy software project, you do not need to obfuscate this by, as in this example, redefining the minor version as the major version at version 3.40, so that version 3.40 is now called version 40, and from then on everyone pretends that minor versions are major versions.)

                                                                                                    (BTW “obfuscate” is a more polite way of saying “tell a lie about”.)

                                                                                                    I am not saying “GNOME Software is written in native code, is bug free and performance optimised”.

                                                                                                    I am saying “GNOME Software OUGHT TO BE native code, bug free and performance optimised by now.”

                                                                                                    Is that clearer now?

                                                                                                    1. 1

                                                                                                      Then I continue to say that these things are not normal for the rest of the Linux world.

                                                                                                      Which is what I already said, with an example from the rest of the Linux world, so I don’t understand why you say you disagree with me on that topic. Hence my confusion.


                                                                                                      […]

                                                                                                      https://lobste.rs/s/wbcgdt/switching_fedora_from_ubuntu#c_as8hxe

                                                                                                      1. 1

                                                                                                        So, from your quoted reply, you are saying that:

                                                                                                        10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

                                                                                                        … justifies it hanging? That this is understandable and acceptable given the difficult environment?

                                                                                                4. 7

                                                                                                  gnome-software is over 10 years old at this point, which is probably also why it has so many issues.

                                                                                                  I have clearly developed Stockholm syndrome, because IMHO ten-year-old software should not have so many issues :-D. Software that’s been maintained for ten years usually gets better with time, not worse. This isn’t some random third-party util that’s been abandoned for six years; Gnome Software is one of the Core Apps.

                                                                                                  1. 3

                                                                                                    To elaborate further, 10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

                                                                                                    Similarly, Servo easily hits performance targets that Firefox struggles to reach.

                                                                                                    1. 1

                                                                                                      Right, I can see why it reads that way, but I didn’t mean that as a jab specifically at the code in Gnome Software. Its developers are trying to solve a very complicated problem and I am well aware of the fact that the tech churn in the Linux desktop space is half the reason why the year of Linux on the desktop is a meme.

                                                                                                      I mean that, regardless of the reason why (and I’m certainly inclined to believe the churn is the reason), the fact that ten years of constant and apt maintenance are insufficient to make an otherwise barebones piece of software work is troubling. This is not a good base to build a desktop on.

                                                                                              1. 1

                                                                                                For the ffmpeg issue and the stability issues, I’d recommend having a look at the flatpak versions available (and throw in flatseal to manage permissions, if you want).

                                                                                                Though I can definitely understand an aversion to them if you had to deal with snaps prior.
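                                                                                                If it helps, the usual incantations look something like this (a sketch; the Flathub remote URL and the app IDs are assumptions on my part, worth double-checking on flathub.org):

                                                                                                # add the Flathub remote if it isn't configured already
                                                                                                flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
                                                                                                # KeePassXC as an example app, plus Flatseal for per-app permissions
                                                                                                flatpak install flathub org.keepassxc.KeePassXC
                                                                                                flatpak install flathub com.github.tchx84.Flatseal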

                                                                                                Different experience set since I’m on Silverblue, but yes, gnome-software is… definitely something. I found it helpful to get multiple coffees during its first sync after installing the OS.

                                                                                                Most Gnome apps are getting a more individual look in 44 (with some cute third-party apps too, e.g. Amberol and Clapper).

                                                                                                1. 1

                                                                                                  Oil is the language

                                                                                                  Like CPython, Coil is the implementation of Oil in C++

                                                                                                  Bash-Replacing Coil is obviously Broil, avoiding the problems with the word Boil

                                                                                                  Oils for Unix becomes ’oils for Unix, same branding, same search results

                                                                                                  oil -> coil
                                                                                                  broil -> coil
                                                                                                  

                                                                                                  Of course, the Python-based Oil implementation is SnakeOil

                                                                                                  1. 1

                                                                                                    Thanks, this is probably the suggestion with the best mnemonics and rationale! Broil isn’t bad.

                                                                                                    I’d probably still cling to the sh suffix though … that seems pretty ingrained.

                                                                                                    There is a tension between the short “coil” name and something globally unique.

                                                                                                    I think about it as the Go / Golang issue. “Go” is the name, but it’s not unique enough, so people call it “Golang”

                                                                                                  1. 6

                                                                                                    you have to go through a wall of text convincing you why not using a framework is a bad idea

                                                                                                    https://react.dev/learn/start-a-new-react-project#can-i-use-react-without-a-framework

                                                                                                    and the content at the link in the screenshot from the article is here: https://react.dev/learn/add-react-to-an-existing-project#using-react-for-a-part-of-your-existing-page.


                                                                                                    They document both approaches, and they encourage one and discourage the other. That seems fine to me?

                                                                                                    1. 7

                                                                                                      the eval methods mentioned could be a whole post in and of themselves! I never knew I shouldn’t use unsanitized data with $(()), (()), and [[]], and even more!

                                                                                                      1. 7

                                                                                                        Yeah it’s pretty bad, but to boil it down to practice, it hasn’t changed my shell script style that much. Here’s where I use arithmetic:

                                                                                                        i=0
                                                                                                        i=$(( i + 1 ))  # the best way and POSIX way to increment an integer :)
                                                                                                        

                                                                                                        Since loop indices don’t come from external untrusted data, the hidden eval of i doesn’t come into play. If there were a better way to increment indices, I’d use it, but there really isn’t.
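
                                                                                                        For context, a minimal sketch of the typical loop where this idiom lives (nothing here comes from outside the script):

                                                                                                        i=0
                                                                                                        while [ "$i" -lt 10 ]; do
                                                                                                            printf 'iteration %s\n' "$i"
                                                                                                            i=$(( i + 1 ))   # same POSIX increment as above
                                                                                                        done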


                                                                                                        Here’s a case where it does come from external data:

                                                                                                        cpus=$(nproc)
                                                                                                        MAX_PROCS=$(( cpus - 1 ))
                                                                                                        

                                                                                                        Anybody who controls the nproc output can now execute arbitrary shell commands on your machine (with bash and affected shells). But if someone controls nproc, they probably already control your machine. It’s similar to controlling ls or cp, etc. There are a lot of things that have already gone wrong by that point.
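
                                                                                                        To make that concrete, here is a minimal sketch of the failure mode in bash; the shadowing function and the touch payload are hypothetical, purely for illustration. Arithmetic evaluation expands the array subscript inside the variable’s value, which runs the embedded command substitution:

                                                                                                        # hypothetical attacker-controlled nproc, for illustration only
                                                                                                        nproc() { echo 'a[$(touch /tmp/pwned)]'; }

                                                                                                        cpus=$(nproc)
                                                                                                        MAX_PROCS=$(( cpus - 1 ))   # bash evaluates cpus as an expression,
                                                                                                                                    # expanding the subscript and running touch
                                                                                                        ls -l /tmp/pwned            # the file now exists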

                                                                                                        On the other hand, data from the network should always be treated as untrusted:

                                                                                                        x=$(curl http://untrusted-example.org/user-supplied-number)
                                                                                                        echo $(( x + 1 ))   # never do this without prior validation
                                                                                                        
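                                                                                                        And a sketch of what “prior validation” can look like portably: reject anything that is not a non-empty string of digits before it ever reaches an arithmetic context.

                                                                                                        x=$(curl http://untrusted-example.org/user-supplied-number)
                                                                                                        case $x in
                                                                                                            ''|*[!0-9]*) echo "not a number" >&2; exit 1 ;;
                                                                                                        esac
                                                                                                        echo $(( x + 1 ))   # safe now: x is known to be all digits
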

                                                                                                        Also, don’t write CGI scripts in shell, at least if you’re using a shell with arrays. Most people don’t do that anymore.

                                                                                                        But it will be possible to do safely and conveniently with YSH / Oil.

                                                                                                        As mentioned, git repos are a gray area. Most people trust them, but you shouldn’t, and git has a bunch of CVEs that indicate this threat model is important.


                                                                                                        Hope that helps!

                                                                                                      1. 3

                                                                                                        I usually make a small point of grabbing Twemoji’s SVGs if I’m using emoji in an icon context. Everyone (for some definition; I do also use the alt attribute as a fallback!) gets to see the same thing, the way I intend it (since emoji can differ so much between fonts too!).


                                                                                                        WCAG ARIA24 & WCAG H86 cover this topic too, with aria-label="Warning" role="img" seeming appropriate for this case, though of course it depends on context.

                                                                                                        1. 3

                                                                                                          The code surrounding the use of this component includes the aria-label and role. I didn’t include that in this example for logistical reasons.

                                                                                                        1. 1

                                                                                                          Confusing!

                                                                                                          One of the features of Bottles is

                                                                                                          Your bottles are isolated from the system and will only hit your personal files when you decide. (only when using Flatpak)

                                                                                                          Having downstream packages completely removes the security protections provided. The complaints that the Flatpaks are done wrongly also don’t make much sense in this respect; it has an incentive to be better at Flatpaking than other applications.


                                                                                                          he does not trust that upstream Flatpaks “follow any standard except standard of their authors”

                                                                                                          The program downloads and runs community forks of Wine, which aren’t included in Fedora’s packages. One has to wonder if the person complaining even ran the software?

                                                                                                          1. 8

                                                                                                            Regardless of whether one thinks something like this is good or bad: a program should say something on stderr if it edits a security setting like this.

                                                                                                            1. 3

                                                                                                              I’m surprised that I’m finding myself fond of the serifs on the Go font: for most fonts, I find that increased line spacing helps a bunch, but the serifs replace that need, sectioning the lines off from each other nicely.

                                                                                                              1. 2

                                                                                                                I too like Go Mono precisely because of the serifs. Sadly monospace typefaces with serifs aren’t that popular.

                                                                                                              1. 2

                                                                                                                Understatement!

                                                                                                                In terms of black-hat work, I’d say a good goal would probably be https://www.youtube.com/watch?v=CwuEPREECXI , a process involving using the (very thankfully) panning camera to produce a super-resolution image, perspective-correcting the envelopes, and then recovering what info there is.

                                                                                                                1. 3

                                                                                                                  Current: E2EE Signal + plaintext SMS in one app

                                                                                                                  Future: E2EE Signal, and then E2EE RCS in a separate app.

                                                                                                                  It’s a pain that this “needs” to be done, as Google is holding the RCS reins very tightly instead of opening them up to other apps, but until that happens, Signal dropping SMS support will increase security overall.

                                                                                                                  Their goal is that you wouldn’t use Signal for SMS regardless!