1. 2

    Finishing my own ESP32 board: usable for prototyping but also usable in production (for personal projects).

    It’s like a typical devboard, but with no on-board USB-TTL converter (they’re comparatively expensive, take up board space with many additional components, and draw precious power), some pins broken out several times to make connecting to SPI and I2C buses easier, and a battery connection but no on-board charge circuit.

    1. 6

      Impressive. When faced with a similar issue (a laptop from 2015 with 4 GB of RAM), I turned to zswap and then zram. Zram works incredibly well, with compression ratios around 6:1. Web browser data seems to compress particularly well (much better than it should).
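      As a rough illustration (with zlib standing in for zram’s lz4/zstd, and a deliberately repetitive made-up buffer standing in for browser heap data), it’s easy to see why ratios like 6:1 are plausible:

```python
import zlib

# Browser heaps are full of repeated strings (class names, URLs,
# style attributes), which dictionary coders compress very well.
# This buffer is intentionally repetitive, so the ratio it reaches
# is an optimistic illustration, not a measurement of real heaps.
sample = (b'<div class="comment" data-author="alice" '
          b'style="color:#333">hello</div>') * 1000

compressed = zlib.compress(sample, 6)
ratio = len(sample) / len(compressed)
print(f"{len(sample)} -> {len(compressed)} bytes ({ratio:.1f}:1)")
```

      On real mixed memory the ratio is lower, but text-heavy pages keep it surprisingly high.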

      1. 2

        Hehe, kinda funny that in 2021 you can “download more RAM” for your computer!

        1. 3

          This idea is very old; e.g. in the 1990s there was RAM Doubler for Macs:

          https://tidbits.com/1996/10/28/ram-doubler-2/

          Or Quarterdeck MagnaRAM for Windows 3.1:

          https://en.wikipedia.org/wiki/QEMM#MagnaRAM

          1. 1

            It worked really well back then for day-to-day things like Office and Netscape… I remember a RAM Doubler/Speed Doubler combo making my PowerMac 7100 really noticeably snappier.

            I really enjoyed this upgrade story, and laughed audibly at the conclusion:

            I’ve now got an XPS13 with 16GB of memory.

            But next time I think I’ll just buy the 16GB variant upfront.

        2. 1

          6:1 seems very reasonable given that a lot of the content will be text, which compresses very well. Then there will be a lot of UI memory, which is also pretty repetitive. http://mattmahoney.net/dc/text.html

        1. 34

          I don’t really agree with a lot of the claims in the article (and I say this as someone who was very actively involved with XMPP when it was going through the IETF process and who wrote two clients and continued to use it actively until 2014 or so):

          Truly Decentralized and Federated (meaning people from different servers can talk to each other while no central authority can have influence on another server unlike Matrix)

          This is true. It also means that you need to do server reputation things if your server is public and you don’t want spam (well, it did for a while - now no one uses XMPP so no one bothers spamming the network). XMPP, unlike email, validates that a message really comes from the originating domain, but that doesn’t stop spammers from registering millions of domains and sending spam from any of them. Google turned off federation because of spam and the core problems remain unsolved.

          End-To-End Encryption (unlike Telegram, unless you’re using secret chats)

          This is completely untrue for the core protocol. End-to-end encryption is (as is typical in the XMPP world) provided by multiple incompatible extensions to the core protocol, and most clients don’t support any of them. Looking at the list of clients, almost none of them support the end-to-end encryption XEP that the article recommends. I’d not looked at XEP-0384 before, but a few things spring to mind:

          • It doesn’t encrypt any metadata (i.e. the stuff that the NSA considers the most valuable to intercept); this is visible to the operators of both parties’ servers.
          • You can’t encrypt presence stanzas (so anything in your status message is plaintext) without breaking the core protocol.
          • Most info-query stanzas will need to be plain text as well, so this only covers direct messages. Some client-to-client communication is via pub-sub, which is not necessarily encrypted, and clients may or may not expose to the user which things are and aren’t encrypted.
          • The bootstrapping process involves asking people to trust new fingerprints as they appear. This is a security-usability disaster: users will just click ‘yes’. Signal does a good job of ensuring that fingerprints don’t change across devices and manages key exchange between clients so that all clients can decrypt a message encrypted with a key assigned to a stable identity. OMEMO requires a wrapped key for every client.
          • The only protection against MITM attacks is the user noticing that a fingerprint has changed. If you don’t validate fingerprints out-of-band (again, Signal gives you a nice mechanism for doing this with a QR code that you can scan on the other person’s phone if you see them in person) then a malicious server can just advertise a new fingerprint once and now you will encrypt all messages with a key that it can decrypt.
          • There’s no revocation story in the case of the above. If a malicious fingerprint is added, you can remove it from the advertised set, but there’s no guarantee that clients will stop sending things encrypted with it.
          • The XEP says that forward secrecy is a requirement and then doesn’t mention it again at all.
          • There’s no sequence counter or equivalent, so a server can drop messages without your being aware, reorder them, or send the same message twice. There’s no protection against replay attacks: if you can make someone send a ‘yes it’s fine’ message once, you can later send it in response to a different question.
          • There’s no padding, so message length (which leaks a lot of information) is visible.
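          For contrast, the kind of replay/reordering check a protocol can mandate is tiny. A hypothetical sketch (this is not part of OMEMO or any XEP, just an illustration of the missing mechanism):

```python
class ReplayGuard:
    """Reject duplicated or out-of-order messages with a per-sender
    monotonic counter (hypothetical mitigation; the protocol under
    discussion specifies nothing like this)."""
    def __init__(self):
        self.last_seen = {}  # sender -> highest counter accepted

    def accept(self, sender, counter):
        # Only strictly increasing counters pass; a replayed or
        # reordered stanza arrives with counter <= last_seen.
        if counter <= self.last_seen.get(sender, -1):
            return False
        self.last_seen[sender] = counter
        return True

g = ReplayGuard()
print(g.accept("alice", 0))  # True: first message
print(g.accept("alice", 1))  # True: next in sequence
print(g.accept("alice", 1))  # False: replay
print(g.accept("alice", 0))  # False: replay / reorder
```

          The counter itself would of course have to live inside the encrypted payload, or the server could rewrite it.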

          This is without digging into the protocol. I’d love to read @soatok’s take on it. From a quick skim, my view is that it’s probably fine if your threat model is bored teenagers.

          They recommend looking for servers that support HTTP upload, but this means any file you transfer is stored in plain text on the server.

          Cross-Platform Applications (Desktop, Web, and Mobile)

          True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

          Multi-Device Synchronization (available on some servers)

          This, at least, is fairly mature. There are some interesting interactions between it and the security guarantees claimed by OMEMO.

          Voice and Video Calling (available on most servers)

          Servers are the easy part (mostly they do STUN, or fall back to relaying if they need to). There are multiple incompatible standards for voice and video calling on top of XMPP. The most widely supported is Jingle, which is, in truly fractal fashion, a family of incompatible standards for establishing streams between clients and negotiating a codec that both support. It sounds from the article as if clients can now do encrypted Jingle sessions. This didn’t work at all last time I tried, but maybe clients have improved since then.

          1. 8

            Strongly agree – claiming that XMPP is secure and/or private without mentioning all the caveats is surprising! There’s also this article from infosec-handbook.eu outlining some of the downsides: XMPP: Admin-in-the-middle

            The state of XMPP security is a strong argument against decentralization in messengers, in my opinion.

            1. 7

              Spam in XMPP is largely a solved problem today. Operators of open relays, servers where anyone can create an account, police themselves and each other. Anyone running a server that originates spam without dealing with it gets booted off the open federation eventually.

              Another part of the solution is ensuring smaller server operators don’t act as open relays, but instead use invites (like Lobste.rs itself). Snikket is a great example of that.

              but that doesn’t stop spammers from registering millions of domains and sending spam from any of them.

              Bold claim. Citation needed. Where do you register millions of domains cheaply enough for the economics of spam to work out?

              Domains tend to be relatively expensive and are easy to block, just like the IP addresses running any such servers. All I hear from server operators is that spammers slowly register lots of normal accounts on public servers with open registration, which are then used once for spam campaigns. They tend to be deleted by proactive operators, if not before, at least after they are used for spam.

              Google turned off federation because of spam and the core problems remain unsolved.

              That’s what they claim. Does it really seem plausible that Google could not manage spam? It’s not like they have any experience from another federated communications network… Easier for me to believe that there wasn’t much in the way of promotion to be gained from doing anything more with GTalk, so they shut it down and blamed whatever they couldn’t be bothered dealing with at the time.

              1. 3

                Your reasoning about most clients not supporting OMEMO is invalid because no one cares about most clients: it’s all about market share. Most XMPP clients probably don’t support images, but that doesn’t matter.

                For replays, this may be dealt with by the double ratchet algorithm, since the keys change fairly often. Your unknown replay would also have to make sense in an unknown conversation.

                Forward secrecy could be done with the double ratchet algorithm too.

                Overall OMEMO should be very similar to Signal’s protocol, which means that it’s quite likely the features and flaws of one are in the other.

                Conversations on Android also offers showing and scanning QR codes for validation.

                As for HTTP upload, that’s maybe another XEP, but there is encrypted upload with an AES key and a link using the aesgcm:// scheme (as you can guess: where to retrieve the file, plus the key).
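                A sketch of how clients interpret such links, as I understand the convention (the fragment carries hex(IV ‖ key) and the scheme is swapped for https to download; the host name and the 12-byte IV / 32-byte key split below are assumptions, and some clients use a 16-byte IV instead):

```python
from urllib.parse import urlsplit

def parse_aesgcm(url):
    """Split an aesgcm:// link into the HTTPS download URL, the IV,
    and the AES key, assuming the common fragment layout hex(IV||key)
    with a 32-byte key and whatever precedes it as the IV."""
    parts = urlsplit(url)
    if parts.scheme != "aesgcm":
        raise ValueError("not an aesgcm:// link")
    blob = bytes.fromhex(parts.fragment)
    iv_len = len(blob) - 32            # key is the trailing 32 bytes
    iv, key = blob[:iv_len], blob[iv_len:]
    # Same URL, fetched over plain HTTPS, ciphertext decrypted locally.
    https_url = parts._replace(scheme="https", fragment="").geturl()
    return https_url, iv, key

# Hypothetical link: 12-byte IV (here all 0x00) + 32-byte key (0x11).
url = "aesgcm://share.example.org/upload/cat.jpg#" + "00" * 12 + "11" * 32
https_url, iv, key = parse_aesgcm(url)
print(https_url, len(iv), len(key))
```

                The server only ever sees AES-GCM ciphertext; the key rides along in the message body, protected (or not) by whatever end-to-end encryption the chat itself uses.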

                I concur that bootstrapping is often painful. I’m not sure it’s possible to do much better without a centralized system however.

                Finally, self-hosting leads to leaking quite a lot of metadata, because your network activity is not hidden in a large amount of activity from others. I’m not sure there’s really much more that is available by reading the XMPP metadata. Battery saving on mobile means the device needs to tell the server that it doesn’t care about status messages and presence from others, but who cares if that’s unencrypted to the server (on the wire, there’s TLS)? It’s meant for the server anyway, and even if it were meant for clients, you could easily spot the change in network traffic frequency. I’m not sure there’s a lot more that is accessible that way (not even mentioning that if you’re privacy-minded, you avoid stuff like typing notifications, and if you don’t, traffic patterns probably leak that anyway). And I’m fairly sure the same applies to Signal for many of these.

                1. 3

                  now no one uses XMPP so no one bothers spamming the network

                  I guess you’ve been away for a while :) there is definitely spam, and we have several community groups working hard to combat it (and trying to avoid the mistakes of email: not doing server/IP reputation and blocking and all that)

                  1. 3
                    Cross-Platform Applications (Desktop, Web, and Mobile)
                    

                    True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                    Or they’ve also calcified (see: Pidgin). Last time I tried XMPP a few years ago, Conversations on Android was the only tolerable one, and Gajim was janky as hell normally, let alone on Windows.

                    1. 3

                      True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                      This was the reason I couldn’t get on with XMPP. When I tried it a few years ago, you really needed quite a lot of extensions to make a good replacement for something like WhatsApp, but all of the different servers and clients supported different subsets of the features.

                      1. 3

                        I don’t know enough about all the details of XMPP to pass technical judgement, but the main problems never were the technical decisions like XML or not.

                        XMPP had a chance, 10-15 years ago, but whether because of poor messaging (pun not intended) or not enough guided activism, the XEP thing completely backfired and no two parties ever really had a proper interaction with all parts working. XMPP wanted to do too much and be too flexible. Even people who wanted it to succeed, ran their own servers, and championed it in the companies they worked for… it was simply a big mess. And then the mobile disaster with undelivered messages across several clients (originally a feature), apps using too much battery, and so on.

                        Jitsi also came a few years too late, sadly, and wasn’t exactly user friendly either at the start. (Good people though, they really tried).

                        1. 5

                          I don’t know enough about all the details of XMPP to pass technical judgement, but the main problems never were the technical decisions like XML or not.

                          XML was a problem early on because it made the protocol very verbose. Back when I started working on XMPP, I had a £10/month phone plan that came with 40 MB of data per month. A few extra bytes per message added up quickly. A plain text ‘hi’ in XMPP was well over a hundred bytes; with proprietary messengers it was closer to 10-20 bytes. That much protocol overhead is completely irrelevant now that phone plans measure their data allowances in GB and folks send images in messages (though the requirement to base64-encode images if you’re using in-band bytestreams rather than Jingle still matters), but back then it was incredibly important.
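                          To put rough numbers on that, here is a hypothetical minimal stanza (real ones also carried resource parts, ids, and often xmlns declarations, so this is a floor, not a typical size):

```python
# A minimal chat message stanza vs. its 2-byte payload: the XML
# framing alone dwarfs the message text.
stanza = ("<message to='bob@example.org' from='alice@example.org' "
          "type='chat'><body>hi</body></message>")
payload = "hi"
print(len(stanza.encode()), "bytes of XML carrying",
      len(payload.encode()), "bytes of payload")
```

                          Even this stripped-down example is over 90 bytes for a 2-byte greeting, roughly a 45× framing overhead.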

                          XMPP was also difficult to integrate with push notifications. It was built on the assumption that you’d keep the connection open, whereas modern push notifications expect a single entity in the phone to poll a global notification source periodically and then prod other apps to make shorter-lived connections. XMPP requires a full roster sync on each connection, so will send a couple of megs of data if you’ve got a moderately large contact list (first download and sync the roster, then get a presence stanza back from everyone once you’re connected). The vcard-based avatar mechanism meant that every presence stanza contained the base64-encoded hash of the current avatar, even if the client didn’t care, which made this worse.

                          A lot of these problems could have been solved by moving to a PubSub-based mechanism, but PubSub and Personal Eventing over PubSub (PEP) weren’t standardised for years and were incredibly complex (much more complex than the core spec) and so took even longer to get consistent implementations.

                          The main lessons I learned from XMPP were:

                          • Federation is not a goal. Avoiding having an untrusted admin being able to intercept / modify my messages is a goal, federation is potentially a technique to limit that.
                          • The client and server must have a single reference implementation that supports anything that is even close to standards track, ideally two. If you want to propose a new extension then you must implement it at least once.
                          • Most users don’t know the difference between a client, a protocol, and a service. They will conflate them, they don’t care about XMPP, they care about Psi or Pidgin - if the experience isn’t good with whatever client you recommend that’s the end.
                          1. 2

                            XMPP requires a full roster sync on each connection, so will send a couple of megs of data if you’ve got a moderately large contact list (first download and sync the roster, then get a presence stanza back from everyone once you’re connected).

                            This is not accurate. Roster versioning, which means that only roster deltas (which are rare) are transferred, is widely used and is specified in RFC 6121 (not mandatory to implement, but given that it’s easy to implement, I am not aware of any mobile client that doesn’t use it).
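                            For the curious, the mechanism is just one extra attribute on the roster request; a sketch with a made-up version token:

```python
import xml.etree.ElementTree as ET

# RFC 6121 roster versioning: the client sends the last version
# string it cached in the 'ver' attribute; a supporting server
# replies with only the items changed since then (or nothing at
# all if the roster is unchanged). "ver-2041" is a made-up token.
iq = ET.Element("iq", {"type": "get", "id": "roster1"})
ET.SubElement(iq, "query", {"xmlns": "jabber:iq:roster", "ver": "ver-2041"})
print(ET.tostring(iq, encoding="unicode"))
```

                            An empty ver (or a server that ignores it) simply falls back to the full roster push, which is why it was easy to deploy incrementally.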

                            1. 1

                              Also important to remember that with smacks people are rarely fully disconnected and doing a resync.

                              Also, the roster itself is fully optional. I consider it one of the selling points and would not use XMPP for IM without it, but nothing forces you to.

                              1. 1

                                Correct.

                                I want to add that it may be a good idea to avoid XMPP jargon, to make the text more accessible to a wider audience. Here ‘smacks’ stands for XEP-0198: Stream Management.

                          2. 2

                            XMPP had a chance, 10-15 years ago, but either because of poor messaging (pun not intended) or not enough guided activism the XEP thing completely backfired and no two parties really had a proper interaction with all parts working. XMPP wanted to do too much and be too flexible.

                            I’d argue there is at least one other reason. XMPP on smartphones was really bad for a very long time, partly due to limitations of those platforms. This only got better later. For that reason, having proper mobile messaging used to require spending money.

                            Nowadays you “only” need to pay a fee to put stuff in the app store and, for iOS development, buy an overpriced piece of hardware to develop on. Oh, and of course deal with a horrible experience there and risk your app being banned from the store whenever they feel like it. But I’m drifting off. In short: doing what Conversations does used to be harder or impossible on both Android and iOS until certain APIs were added.

                            I think that dealt it a pretty big blow just as it was starting to do okay on the desktop.

                            I agree with the rest though.

                          3. 2

                            I saw a lot of those same issues in the article. Most people don’t realize (myself included until a few weeks ago) that when you stand up Matrix, it still uses matrix.org’s keyserver. I know a few admins who are considering standing up their own keyservers and what that would entail.

                            And the encryption thing too. I remember OTR back in the day (which was terrible) and now we have OMEMO (which is… still terrible).

                            This is a great reply. You really detailed a lot of problems with the article and also provided a lot of information about XMPP. Thanks for this.

                            1. 2

                              It’s not encrypting any metadata (i.e. the stuff that the NSA thinks is the most valuable bit to intercept), this is visible to the operators of both party’s servers. You can’t encrypt presence stanzas (so anything in your status message is plaintext) without breaking the core protocol.

                              Do you know if this situation is any better on Matrix? Completely honest question (I use both and run servers for both). Naively it seems to me that at least some important metadata needs to be unencrypted in order to route messages, but maybe they’re doing something clever?

                              1. 3

                                I haven’t looked at Matrix, but it’s typically a problem with any federated system: at minimum, the envelope that says which server a message must be routed to has to be public. Signal avoids this by not having federation and by using their sealed-sender mechanism to prevent the single centralised component from knowing who the sender of a message is.

                                1. 1

                                  Thanks.

                                2. 1

                                  There is a bit of metadata leaking in matrix, because of federation. But it’s something the team is working to improve.

                                3. 2

                                  Fellow active XMPP developer here.

                                  I am sure you know that some of your points, like metadata encryption, are a deliberate design tradeoff. Systems that provide full metadata encryption have other drawbacks. Other “issues” you mention are generic and apply to most (all?) cryptographic systems. I am not sure why XEP-0384 needs to mention forward secrecy again, given that forward secrecy is provided by the building blocks the XEP uses and is discussed there, i.e., https://www.signal.org/docs/specifications/x3dh/. Some of your points are also outdated and no longer correct. For example, since the newest version of XEP-0384 uses XEP-0420, there is now padding to disguise the actual message length (XEP-0420 borrows this from XEP-0373: OpenPGP for XMPP).

                                  From a quick skim, my view is that it’s probably fine if your threat model is bored teenagers.

                                  That makes it sound like your threat model shouldn’t be bored teenagers. But I believe that we should also raise the floor for encryption so that everyone is able to use a sufficiently secured connection. Of course, this does not mean that raising the ceiling shouldn’t be researched and tried too. But we, that is, the XMPP community of volunteers and unpaid spare-time developers, don’t have the resources to accomplish everything in one go. And, as I said before, if you need full metadata encryption, e.g., because you are a journalist under a repressive regime, then the currently deployed encryption solutions in XMPP are probably not what you want to use. But for my friends, my family, and me, they’re perfectly fine.

                                  They recommend looking for servers that support HTTP upload, but this means any file you transfer is stored in plain text on the server.

                                  That depends on the server configuration, doesn’t it? I imagine at least some servers use disk or filesystem-level encryption for user-data storage.

                                  For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                                  It got better. But yes, this is the price we pay for the modularity of XMPP due to its extensibility. I also believe it isn’t possible to have it any other way. Unlike other competitors, most XMPP developers are not “controlled” by a central entity, so they are free to implement what they believe is best for their project. But there is also a strong incentive to implement extensions that the leading implementations support, for compatibility. So there are some checks and balances in the system.

                                1. 41

                                  Nice idea – would be great if there were multiple example languages to choose from (python, c++, something functional).

                                  And perhaps a “blind test” mode where you choose A or B so you are not biased by the fonts you know.

                                  1. 13

                                    And perhaps a “blind test” mode where you choose A or B so you are not biased by the fonts you know.

                                    Hear, hear.

                                    I tried to get the window set up just right to avoid seeing the names, but it was tough.

                                    1. 34

                                       Your voices have been heard! I added a new toggle button to hide the font names! Refresh (or hard-refresh) the page to find the toggle for “Blind Mode”.

                                      1. 10

                                        I’d love to be able to see the full tournament bracket after doing a whole run in blind mode, so I can see what my second, third etc. choices were.

                                        1. 4

                                          Ah yeah that’s what I was saying as well. At least the 2nd-place runner-up, but yeah showing the full ladder would be great too!

                                    2. 4

                                      There is a “blind match” button at the bottom of the page.

                                      1. 14

                                         wow, you found it while I was developing the feature – I’m being sloppy, not having a dev site; everything is done on the live site! I’ve finished developing it, and it has now moved from the bottom to a more prominent place on the page!

                                        1. 2

                                          That probably explains why it seemed that button wasn’t there the first time I loaded the page. ;-)

                                          And well, you probably don’t need a QA server right now considering that you apparently didn’t (visibly) break the website while doing your changes!

                                      2. 4

                                         One thing I noticed was that there was only one pair of parentheses. A few of the fonts had braces that looked very similar to parentheses, but it wasn’t obvious from just looking at the text. Something with nested brackets would make this much easier to spot. Similarly, most of these fonts made it easy to tell 0 and O apart, but I don’t know how many of them made it easy to distinguish 1 and I, or I and l, because the sample text didn’t have these characters nearby.

                                        It would be a bit more robust if it showed things more than once. There were a couple of fonts in the list that were almost identical (different shape of 5, most other glyphs basically the same). A few of them happened to look really good or bad at 16pt with my monitor size and antialiasing mode (Ubuntu Mono, in particular, looked terrible) but might be very different at different sizes. Once you’ve made a selection against something though, it’s gone forever, so you don’t get to find a ranking of preferred fonts.

                                        It would also be good if it didn’t tell me the name of the font until after I made my choice. My favourite according to this is Adobe’s Source Code Pro. Purely by coincidence, that’s the font that I have installed for all of my terminals to use. Or possibly seeing the name gave me a positive bias towards it that I wouldn’t have seen if I’d been comparing it without knowing the name.

                                      1. 1

                                         The benchmark here relies heavily on performance per watt… but Apple has never actually published the power usage of their chip, so the 60-watt figure is just a guess. Intel, AMD, and NVIDIA are certainly not fully transparent in everything they do, but at least they release basic performance numbers for their chips and publish architecture documents.

                                        1. 1

                                           Indeed: a power meter is a must. Intel’s TDP is absolutely not reliable nowadays, since their CPUs can use roughly twice as much power; AMD goes over too, but not by as much. And Apple probably doesn’t do better.

                                          1. 1

                                             Outside of microcontrollers (where they don’t do it anywhere near as much), the manufacturers absolutely bullshit their public TDP numbers. It has been this way for at least a decade.

                                            Only third party analysis of the actual power drawn in a variety of benchmark situations can be somewhat trusted.

                                        1. 8

                                          BTW, is bsdiff still the state of the art?

                                          1. 7

                                            Pretty much, yes. https://github.com/divvun/bidiff is a variation on the theme, but only in that it uses zstd instead of bzip2 for the final compression pass and consumes less memory.

                                            1. 1

                                               The memory consumption of bsdiff is something I’ve long had trouble with. I have much more RAM now, but the usage is still really high. Do you have more detailed numbers for bidiff? The GitHub page is light on details about memory usage.

                                              1. 2

                                                It uses 5 × old file size.

                                          1. 3

                                            Ubuntu 21.10 brings the all-new PHP 8 and GCC 11 including full support for static analysis

                                            Why is PHP of all things suddenly the headliner?

                                            1. 6

                                              PHP 8 is much faster. That’s pretty good for something that’s basically old and boring tech nowadays.

                                              1. 3

                                                Going purely off memory here, but doesn’t Wikipedia run on Ubuntu and use PHP?

                                                1. 2

                                                  PHP is still pretty massive.

                                                  1. 1

                                                    brings the all-new PHP 8

                                                     also, the next major release, 8.1, is due in about a month. I don’t think “all new” is a valid qualifier any more.

                                                  1. 25

                                                    Fascinating read. Audio was the thing that made me switch from Linux to FreeBSD around 2003. A bit before then, audio was provided by OSS, which was upstream in the kernel and maintained by a company that sold drivers that plugged into the framework. This didn’t make me super happy because those drivers were really expensive. My sound card cost about £20 and the driver cost £15. My machine had an on-board thing as well, so I ended up using that when I was running Linux.

                                                     A bit later, a new version of OSS came out, OSS 4, which was not released as open source. The Linux developers had a tantrum and decided to deprecate OSS and replace it with something completely new: ALSA. Apps rewritten to use ALSA got new features; apps that used OSS (as everything did back then) didn’t. There was only one feature that really mattered from a user perspective: audio mixing. I wanted two applications to be able to open the sound device at the same time and both go ‘beep’. I think ALSA on Linux exposed hardware channels for mixing if your card supported it (my on-board one didn’t), and OSS didn’t support it at all. I might be misremembering, and ALSA supported software mixing while OSS only did hardware mixing. Either way, only one OSS application could use the sound device at a time, and very few things had been updated to use ALSA.

GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’. There was just one problem: I wanted to use Evolution (GNOME) for mail and Psi (KDE) for chat. Only one out of the KDE and GNOME sound daemons could play sound at a time and they were incompatible. Oh, and XMMS didn’t support ALSA, so if I played music then neither of them could do audio notifications.

                                                    Meanwhile, the FreeBSD team just forked the last BSD licensed OSS release and added support for OSS 4 and in-kernel low-latency sound mixing. On FreeBSD 4.x, device nodes were static so you had to configure the number of channels that it exposed but then you got /dev/dsp.0, /dev/dsp.1, and so on. I could configure XMMS and each of the GNOME and KDE sound daemons to use one of these, leaving the default /dev/dsp (a symlink to /dev/dsp.0, as I recall) for whatever ran in the foreground and wanted audio (typically BZFlag). When FreeBSD 5.0 rolled out, this manual configuration went away and you just opened /dev/dsp and got a new vchan. Nothing needed porting to use ALSA, GNOME’s sound daemon, KDE’s sound daemon, PulseAudio, or anything else: the OSS APIs just worked.

                                                    It was several years before audio became reliable on Linux again and it was really only after everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire. PipeWire does have some advantages, but there’s no reason that it can’t be used as a back end for the virtual_oss thing mentioned in this article, so software written with OSS could automatically support it, rather than requiring the constant churn of the Linux ecosystem. Software written against OSS 3 20 years ago will still work unmodified on FreeBSD and will have worked every year since it was written.
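The vchan setup described above is still there on modern FreeBSD, just tunable via sysctl rather than static device nodes. A rough sketch (the `pcm` unit number depends on your hardware, and the value 8 is purely illustrative):

```
# How many virtual playback channels the first sound device exposes
sysctl dev.pcm.0.play.vchans

# Raise the limit if you ever run out of channels
sysctl dev.pcm.0.play.vchans=8
```

After that, every process that opens /dev/dsp transparently gets its own vchan, mixed in-kernel.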

                                                    1. 8

                                                      everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire

                                                      Luckily there’s no need for such a rewrite because pipewire has a PulseAudio API.

                                                      1. 1

                                                        There was technically no need for a rewrite from ALSA to PulseAudio, either, because PulseAudio had an ALSA compat module.

                                                        But most applications got a PulseAudio plug-in anyway because the best that could be said about the compat module is that it made your computer continue to go beep – otherwise, it made everything worse.

                                                        I am slightly more hopeful for PipeWire, partly because (hopefully) some lessons have been drawn from PA’s disastrous roll-out, partly for reasons that I don’t quite know how to formulate without sounding like an ad-hominem attack (tl;dr some of the folks behind PipeWire really do know a thing or two about multimedia and let’s leave it at that). But bridging sound stacks is rarely a simple affair, and depending on how the two stacks are designed, some problems are simply not tractable.

                                                        1. 2

One could also say that a lot of groundwork was done by PulseAudio, revealing bugs and so on, so the landscape that PipeWire enters in 2021 is not the same one that PulseAudio entered in 2008. For starters, there’s no aRts, ESD, etc. anymore; those are long dead and gone. The only things that matter these days are the PulseAudio API and the JACK API.

                                                          1. 3

                                                            I may be misremembering the timeline but as far as I remember it, aRts, ESD & friends were long dead, gone and buried by 2008, as alsa had been supporting proper (eh…) software mixing for several years by then. aRts itself stopped being developed around 2004 or so. It was definitely no longer present in KDE 4, which was launched in 2008, and while it still shipped with KDE 3, it didn’t really see much use outside KDE applications anyway. I don’t recall how things were in Gnome land, I think ESD was dropped around 2009, but pretty much everything had been ported to canberra long before then.

                                                            I, for one, don’t recall seeing either of them or using either of them after 2003, 2004 or so, but I did have some generic Intel on-board sound card, which was probably one of the first ones to get proper software mixing support on alsa, so perhaps my experience wasn’t representative.

I don’t know how many bugs PulseAudio revealed, but the words “PulseAudio” and “bugs” are enough to make me stop considering going back to Linux for at least six months :-D. The way bug reports, and contributors in general, technical and non-technical alike, were treated is one of the reasons why PulseAudio’s reception was not very warm, to say the least, and IMHO it’s one of the projects that kickstarted a very hostile and irresponsible attitude that prevails in many Linux-related open-source projects to this day.

                                                      2. 4

                                                        I might be misremembering and ALSA supported software mixing, OSS only hardware mixing.

                                                        That’s more like it on Linux. ALSA did software mixing, enabled by default, in a 2005 release. So it was a pain before then (you could enable it at least as early as 2004, but it didn’t start being easy until 1.0.9 in 2005)… but long before godawful PulseAudio was even minimally usable.

                                                        BSD did the right thing though, no doubt about that. Linux never learns its lesson. Now Wayland lololol.

                                                        1. 4

                                                          GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’.

                                                          Things got pretty hilarious when you inevitably mixed an OSS app (or maybe an ALSA app, by that time? It’s been a while for me, too…) and one that used, say, aRTs (KDE’s sound daemon).

What would happen is that the non-aRTs app would grab the sound device and cling to it very, very tightly. The sound daemon couldn’t play anything for a while, but it kept queuing sounds. Like, say, Gaim alerts (anyone remember Gaim? I think it was still gAIM at that point, this was long before it was renamed to Pidgin).

                                                          Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.

                                                          1. 2

                                                            I’d forgotten about that. I remember this happening when people logged out of computers: they’d quit BZFlag (yes, that’s basically what people used computers for in 2002) and log out, aRTs would get access to the sound device and write as many of the notification beeps as it could to the DSP device before it responded to the signal to quit.

                                                            ICQ-inspired systems back then really liked notification beeps. Psi would make a noise both when you sent and when you received a message (we referred to IM as bing-bong because it would go ‘bing’ when you sent a message and ‘bong’ when you received one). If nothing was draining the queue, it could really fill up!

                                                            1. 1

                                                              Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.

                                                              This is exactly what happens with PulseAudio to me today, provided the applications trying to play the sounds come from different users.

Back in 2006ish though, alsa apps would mix sound, but OSS ones would queue, waiting to grab the device. I actually liked this a lot because I’d use an OSS play command-line program and just type up the names of the files I wanted to play. It was an ad-hoc playlist in the shell!
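A sketch of that ad-hoc playlist, with `play` standing in for whatever OSS command-line player was installed (the exact program name varied):

```
play track01.ogg
play track02.ogg   # typed ahead; blocks waiting for the device until track01 finishes
play track03.ogg
```

Each command only ran once the previous one released /dev/dsp, so the shell’s input queue was effectively the playlist.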

                                                            2. 4

                                                              This is just an example of what the BSDs get right in general. For example, there is no world in which FreeBSD would remove ifconfig and replace it with an all-new command just because the existing code doesn’t have support for a couple of cool features - it gets patched or rewritten instead.

                                                              1. 1

                                                                I’m not sure I’d say “get right” in a global sense, but definitely it’s a matter of differing priorities. Having a stable user experience really isn’t a goal for most Linux distros, so if avoiding user facing churn is a priority, BSDs are a good place to be.

                                                                1. 1

                                                                  I don’t know; the older I get the more heavily I value minimizing churn and creating a system that can be intuitively “modeled” by the brain just from exposure, i.e. no surprises. If there are architectural reasons why something doesn’t work (e.g. the git command line), I can get behind fixing it. But stuff that just works?

                                                              2. 4

                                                                I guess we can’t blame Lennart for breaking audio on Linux if it was already broken….

                                                                1. 7

                                                                  You must be new around here - we never let reality get in the way of blaming Lennart :-/

                                                                  1. 2

Same as with systemd, there were dozens of us for whom everything worked before. I mean, I mostly liked PulseAudio because it brought a few cool features, but I don’t remember sound simply stopping working before. Sure, it was complicated to set up, but if you didn’t change anything, it simply worked.

                                                                    I don’t see this as blaming. Just stating the fact that if it works for some people, it’s not broken.

                                                                  2. 3

                                                                    Well, can’t blame him personally, but the distros who pushed that PulseAudio trash? Absolutely yes they can be blamed. ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!

                                                                    But nah better to force everyone to constantly churn toward the next shiny thing.

                                                                    1. 4

                                                                      ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!

Huh? I just set up ALSA recently and you very much had to specifically configure dmix, if that’s what you’re referring to. Here’s the official docs on software mixing. It doesn’t do anything as sophisticated as PulseAudio does by default. Not to mention that on a given restart ALSA devices frequently change their device IDs. I have a little script on a Void Linux box that I used to run as a media PC which creates the asoundrc file based on outputs from lspci. I don’t have any such issue with PulseAudio at all.
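For anyone who never had to do it, the manual dmix wiring in ~/.asoundrc looked roughly like this (card numbers, ipc_key, and rate are illustrative):

```
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024        # any unique key; shared by all processes mixing together
    slave {
        pcm "hw:0,0"    # the real hardware device
        rate 48000
    }
}
```

With that in place, every ALSA client opening the default PCM went through the dmix plugin and got software-mixed.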

                                                                      1. 3

                                                                        dmix has been enabled by default since 2005 in alsa upstream. If it wasn’t on your system, perhaps your distro changed things or something. The only alsa config I’ve ever had to do is change the default device from the hdmi to analog speakers.

And yeah, it isn’t sophisticated. But I don’t care, it actually works, which is more than I can say about PulseAudio, which, even to this day, has random lag, and updates break the multi-user setup (which very much did not just work). I didn’t want PA but Firefox kinda forced my hand and I hate it. I should have just ditched Firefox.

Everyone tells me PipeWire is better though, but I wish I could just go back to the default alsa setup again.

                                                                        1. 6

                                                                          Shrug, I guess in my experience PulseAudio has “just worked” for me since 2006 or so. I admit that the initial rollout was chaotic, but ever since it’s been fine. I’ve never had random lag and my multi-user setup has never had any problems. It’s been roughly 15 years, so almost half my life, since PulseAudio has given me issues, so at this point I largely consider it stable, boring software. I still find ALSA frustrating to configure to this day, and I’ve used ALSA for even longer. Going forward I don’t think I’ll ever try to use raw ALSA ever again.

                                                                      2. 1

                                                                        I’m pretty sure calvin is tongue in cheek referencing that Lennart created PulseAudio as well as systemd.

                                                                    2. 3

I cannot upvote this comment enough. The migration to ALSA was a mess, and the introduction of Gstreamer*, Pulse*, and *sound_daemon fractured the system further. Things in BSD land stayed much simpler.

                                                                      1. 3

I was also ‘forced’ out of the Linux ecosystem because of the mess in the sound subsystem.

After spending some years in FreeBSD land I got hardware that FreeBSD didn’t support at the time, so I tried Ubuntu … what a tragedy it was. When I was using FreeBSD my system ran for months and I rebooted only to install security updates or to upgrade. Everything just worked. Including sound. In Ubuntu land I needed to do a HARD RESET every 2-3 days because the sound would go dead and I could not find a way to reload/restart anything to fix that ‘glitch’.

                                                                        Details here:

                                                                        https://vermaden.wordpress.com/2018/09/07/my-freebsd-story/

                                                                        1. 1

                                                                          From time to time I try to run my DAW (Bitwig Studio) in Linux. A nice thing about using DAWs from Mac OS X is that, they just find the audio and midi sources and you don’t have to do a lot of setup. There’s a MIDI router application you can use if you want to do something complex.

                                                                          Using the DAW from Linux, if it connects via ALSA or PulseAudio, mostly just works, although it won’t find my audio interface from PulseAudio. But the recommended configuration is with JACK, and despite reading the manual a couple times and trying various recommended distributions, I just can’t seem to wrap my head around it.

                                                                          I should try running Bitwig on FreeBSD via the Linux compatibility layer. It’s just a Java application after all.

                                                                          1. 7

                                                                            Try updating to Pipewire if your distribution supports it already. Then you get systemwide Jack compatibility with no extra configuration/effort and it doesn’t matter much which interface the app uses. Then you can route anything the way you like (audio and MIDI) with even fewer restrictions than MacOS.

                                                                            1. 1

                                                                              I’ll give that a try, thanks!

                                                                        1. 8

                                                                          I don’t think this is objective at all. With such a title, the article should definitely mention that releases are supported for only six months. See https://utcc.utoronto.ca/~cks/space/blog/unix/OpenBSDSupportPolicyResults .

I know more places with “outdated” (i.e. > 6 months old) OpenBSD installs than up-to-date ones. One of the biggest issues is that there is no salvation once you’re running an unsupported setup, since you can’t skip versions and every upgrade needs manual tweaks.
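For concreteness: on releases that ship sysupgrade (6.6 onwards), catching up means repeating the upgrade once per intermediate release, with that release’s manual steps from the upgrade notes applied in between; older releases need the bsd.rd procedure done by hand.

```
sysupgrade -r    # upgrade to the next release only, e.g. 6.6 -> 6.7
# reboot, follow that release's upgrade notes, then repeat for 6.8, 6.9, ...
```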

                                                                          1. 5

I was running an OpenBSD e-mail server for myself and ran into this. 6.3->6.4 had some MASSIVE changes to opensmtpd, resulting in needing an entirely different configuration file format. I just kept the old one running, but after a while, certbot stopped working (ACMEv1 support ended) and the new version of certbot wasn’t in the 6.3 ports tree. I tried to install it manually with pip, but it depended on cryptography, which now requires Rust, and the version of Rust on that system was too old to build it. I then switched from certbot to dehydrated, a pure-Bash implementation of ACMEv2, but it spat out ECDSA certs which dovecot could read, but not opensmtpd.

I’m sure I could have just edited dehydrated, but at that point I finally started looking at 6.3->6.4 migration guides (there were none when it came out; there are a couple now, and I’m currently writing one myself) and got updated to the latest opensmtpd .. now running in an Alpine container, on my big dedicated server. I then deleted my OpenBSD VM.

                                                                            I liked OpenBSD, and still like the simplicity of their SMTP server, but I’ll run it on Linux for now.

                                                                          1. 3

                                                                            I’m using a similar setup (i3-gaps + polybar). Some notes:

                                                                            • picom (and any other compositors) takes its toll on your GPU and battery and noticeably increases input latency. I found it not worth the performance hit.
                                                                            • Consider flameshot for taking screenshots.
                                                                            1. 2

I use xcompmgr (picom’s grandparent, iirc) but with nothing done besides the compositing: no alpha or anything. It doesn’t use more power and it actually helps on a variety of hardware+software nowadays. It might add one frame of latency, but considering the typical latency of editors themselves, that’s not usually the issue.

                                                                            1. 1

Does anybody know of a similarly improved version for Microsoft Windows?

                                                                              1. 3

There seems to be a Windows port of mtr. I’ve long given up traceroute in favor of mtr.

Plus, with mtr (as root) you can use TCP SYNs to probe the network. Useful for weird (read: “bad”) network equipment, and also a good way to get quickly blacklisted by the server on the other side (and they’re right to do so).
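For reference, the TCP probing mentioned above looks like this (host and port are just examples; root is required):

```
# Send TCP SYNs to port 443 instead of the default UDP/ICMP probes
mtr --tcp --port 443 example.com
```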

                                                                              1. 34

I had to stop coding right before going to bed because of this. Instead of falling asleep, my mind would start spinning incoherently, thinking in terms of programming constructs (loops, arrays, structs, etc.) about random or even undefined stuff, resulting in complete nonsense but leaving me mentally exhausted.

                                                                                1. 12

                                                                                  I dreamt about 68k assembly once. Figured that probably wasn’t healthy.

                                                                                  1. 4

                                                                                    Only once? I might have gone off the deep end.

                                                                                    1. 3

                                                                                      Just be thankful it wasn’t x86 assembly!

                                                                                      1. 3

                                                                                        I said dream, not nightmare.

                                                                                        1. 2

                                                                                          Don’t you mean unreal mode?

                                                                                          being chased by segment descriptors

                                                                                          only got flat 24bit addresses, got to calculate the right segment bases and offsets, faster than the pursuer

                                                                                    2. 6

                                                                                      One of my most vivid dreams ever was once when I had a bad fever and dreamed about implementing Puyo Puyo as a derived mode of M-x tetris in Emacs Lisp.

                                                                                      1. 19

                                                                                        When I was especially sleep-deprived (and also on call) in the few months after my first daughter was born, I distinctly remember waking up to crying, absolutely convinced that I could solve the problem by scaling up another few instances behind the load balancer.

                                                                                        1. 4

                                                                                          Oh my god.

                                                                                          1. 2

                                                                                            Wow that’s exactly what tetris syndrome is about. Thanks for sharing!

                                                                                        2. 5

                                                                                          Even if I turn off all electronics two hours before bed, this still happens to me. My brain just won’t shut up.

                                                                                          “What if I do it this way? What if I do it that way? What was the name of that one song? Oh, I could do it this other way! Bagels!”

                                                                                          1. 4

                                                                                            even undefined stuff

                                                                                            Last thing you want when trying to go to sleep is for your whole brain to say “Undefined is not a function” and shut down completely

                                                                                            1. 4

                                                                                              Tony Hoare has a lot to answer for.

                                                                                            2. 2

Different but related: I’ve found out (the hard way) that I need to stop coding one hour before sleeping. If I go to bed less than one hour after coding, I spend the remainder of that hour not being able to sleep.

                                                                                              1. 1

                                                                                                I know this all too well. Never heard of the tetris syndrome before. I need to investigate this now right before going to bed.

                                                                                              1. 9

                                                                                                M1 Linux does not have a cool logo or name

                                                                                                Maybe not yours, but the one getting upstreamed does…

                                                                                                1. 1

                                                                                                  There’s a difference between the two though: this one doesn’t rely on virtualization like Asahi does AFAIU.

                                                                                                  That being said, I expect there is a lot in common between the two.

                                                                                                  1. 3

Asahi doesn’t rely on virtualization; it’s just used during development because it’s useful for tracing MMIO.

It’s also clean-room implemented, to make it as easy to upstream as possible.

                                                                                                    1. 1

Hmmm, I think I read that Asahi would work as long as there were no changes to virtualization. That led me to believe they relied on it.

                                                                                                1. 8

                                                                                                  I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

                                                                                                  I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

                                                                                                  1. 22

The visual and perceived performance wins are arguably easier to explain and visualize, and were an explicit focus for the major release in June. This isn’t just lipstick on a pig, though. An unresponsive UI is a bug, regardless of whether the browser is doing work under the hood or not.

But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

                                                                                                    1. 5

But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

                                                                                                      No numbers were provided for these unfortunately. :’(

                                                                                                    2. 11

                                                                                                      I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

                                                                                                      That’s basically “the rendering engine”. The Gecko build system uses libxul / xul.dll as the name for the core rendering code in Firefox. There’s no real connection between the file name and whether XUL elements are still used or not.

                                                                                                      Not sure why it’s not just named “Gecko”, but that probably requires even more archaeology…

                                                                                                      1. 3

                                                                                                        It’s because XUL refers to ‘XML User Interface Language’, which is how Gecko was originally meant to be interfaced with. Gecko sits under XUL, and XUL hasn’t been completely replaced yet.

                                                                                                        “There is no Gecko, only XUL”

                                                                                                        1. 2

                                                                                                          I see, thanks!

                                                                                                        2. 4

                                                                                                          I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

                                                                                                          User-perceived performance can be just as important as actual performance. There are tons of tricks for this and many go back decades while still being relevant today. For example: screenshotting your UI to instantly paint it back to the screen when the user reopens/resumes your app. It’ll still be a moment before you’re actually ready for user interaction, but most of the time it’s actually good enough to offer the illusion of readiness: a user will almost always spend a moment or two looking at the contents of the screen again before actually trying to initiate a more complex interaction, so you don’t actually have to be ready for interaction instantly.

                                                                                                          IIRC this is how the multitasking on many mobile operating systems works today – apps get screenshotted when you switch away from them, and may be suspended or even closed in the background while not being used. But showing the screenshot in the task switching UI and immediately painting it when you come back to that app gives just enough illusion of continual running and instant response that most people don’t notice most of the time.
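The pattern is simple enough to sketch. In this toy simulation (the class, the screen strings, and the fake reload delay are all invented for illustration; real platforms grab an actual framebuffer), suspending stores the last rendered screen, and resuming paints that snapshot back immediately while the real state reloads behind it:

```python
import time

class App:
    """Toy model of the snapshot-on-suspend trick (illustrative only)."""

    def __init__(self):
        self.screen = "blank"  # what the user currently sees
        self.snapshot = None   # last frame grabbed at suspend time
        self.ready = False     # can we handle complex interaction yet?

    def render(self, content):
        self.screen = content

    def suspend(self):
        # Grab what's on screen so resume can paint it back instantly.
        self.snapshot = self.screen
        self.ready = False

    def resume(self):
        # Instant path: show the stale snapshot so the user immediately
        # sees *something* familiar.
        if self.snapshot is not None:
            self.render(self.snapshot)
        # Slow path: rebuild real state. The user is busy re-reading the
        # old screen, so this latency is mostly invisible to them.
        time.sleep(0.05)  # stand-in for real initialization work
        self.ready = True
```

In a real app the slow path would run asynchronously; it is inline here only to keep the sketch short.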

                                                                                                          1. 1

Yeah, but what’s better: implementing complex machinery to make your slow software look faster, or implementing complex machinery to make your slow software faster? I’d argue that making the software actually faster is always better, and if it is faster, it’ll look faster too; no need to trick the user.

                                                                                                            I agree that there comes a point where you made your software as fast as it can be and all that remains is making it look faster, but that still makes for disappointing articles to me. I prefer reading about making software faster than reading about making software perceptually faster.

                                                                                                            1. 5

What’s better is for it to be faster and more usable to the user, regardless of the method. The above-noted screenshotting/painting is more than a trick. It gives users the ability to read and ingest what was already on the screen, which gets them back to what they were doing faster. That’s much more important than, say, a 50% reduction in load time from 2s to 1s. Those numbers are satisfying for people who love to look at numbers, but they really don’t mean anything to the end-user experience.

                                                                                                              1. 2

That’s the thing: sometimes speed isn’t a good thing. For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting), the UI gives the perception of working better. Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons or resize things: these give the perception that something is happening, and create a causal link in the user’s head between what they just did and what’s happening, which is harder to get when something just appears out of nowhere.

                                                                                                                It’s not about tricking the user, even if there happens to be some smoke and mirrors involved, but about giving the user feedback. People like things to be fluid (which is what screenshotting a window for fast starts gives you), not abrupt. You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster? Unless you were under severe time pressure, probably not.

If you want to be genuinely disappointed, there are user interfaces out there that introduce delays for other reasons. You’ve probably encountered UIs in the wild that seem to take longer to do things than seems reasonable, such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because they serve a purpose, but to increase trust in the result. This is because people’s brains are broken, and if you give them an answer straight away, it seems as if you’re not doing any work, which makes the result less trustworthy. However, if you introduce a short delay or give the results back in chunks, it gives the perception that the machine is doing real work, thus making the results more “trustworthy”.

                                                                                                                So no, faster is not always better, much as we might wish it to be.

                                                                                                                1. 1

For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting)

This is a bad example. Doing things as fast as possible and then waiting for the next frame is the best thing to do: it allows the CPU to go back to idling and preserves battery. Making the software faster here means more time idling, which means more battery saved.
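That race-to-idle frame loop can be sketched in a few lines (the 60 Hz target and the no-op render callback are assumptions for illustration):

```python
import time

FRAME_INTERVAL = 1 / 60  # target 60 Hz, roughly a 16.7 ms frame budget

def run_frames(render, n_frames):
    """Do each frame's work as fast as possible, then sleep out the rest
    of the frame budget: the CPU idles instead of spinning, saving power.
    """
    frame_times = []
    next_deadline = time.monotonic() + FRAME_INTERVAL
    for _ in range(n_frames):
        start = time.monotonic()
        render()  # finish the frame's work quickly...
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # ...then idle until the next deadline
        next_deadline += FRAME_INTERVAL
        frame_times.append(time.monotonic() - start)
    return frame_times
```

The faster `render` returns, the longer the sleep, so a genuine speedup translates directly into idle (battery) time without introducing tearing.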

                                                                                                                  Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons, or resize things

I hate animations and always disable them when I can. I understand that other people feel differently about them, but I don’t care; it still makes reading articles about perceptual performance improvements disappointing when I go in expecting actual performance improvements.

                                                                                                                  You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster?

                                                                                                                  That’s a bad example. Having abrupt screen changes is different from being thrown around in a car.

                                                                                                                  such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because they serve a purpose, but to increase trust in the result.

                                                                                                                  Making things perceptually slower is not what we are talking about. We are talking about making things perceptually faster.

                                                                                                                  So no, faster is not always better, much as we might wish it to be.

                                                                                                                  Making your software actually faster when you want it to be perceptually faster is better than just making it perceptually faster. That was my point and I don’t think any of your arguments proved it wrong.

                                                                                                            2. 2

                                                                                                              It’s a legacy name.

                                                                                                            1. 5

Is it me, or does a dataset of 200,000 pictures seem a bit small?

                                                                                                              1. 8

                                                                                                                For now. Who’s to say authorities won’t ask to scan photos for known terrorists, criminals, or political agitators? Or how long until Apple is “forced” to scan phones directly because pedophiles are avoiding the Apple Cloud?

                                                                                                                1. 11

                                                                                                                  That’s not how the technology works. It matches known images only. Like PhotoDNA—the original technology used for this purpose—it’s resistant to things like cropping, resizing, or re-encoding. But it can’t do things like facial recognition, it only detects a fixed set of images compiled by various authorities and tech companies. Read this technical summary from Apple.

                                                                                                                  FWIW, most major tech companies that host images have been matching against this shared database for years. Google Photos, Drive, Gmail, DropBox, OneDrive, and plenty more things commonly used on both iPhones and Androids. Apple is a decade late to this party—I’m genuinely surprised they haven’t been doing this already.
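To make the “matches known images only” point concrete, here is a toy average hash. PhotoDNA and Apple’s NeuralHash are proprietary and far more robust, but the general shape is the same: reduce the image to a tiny fingerprint that survives resizing, brightness changes, and re-encoding, then compare fingerprints by Hamming distance against a fixed list of known hashes (the 8×8 grid and threshold scheme here are the standard “aHash” toy, not either real algorithm):

```python
def average_hash(pixels, hash_size=8):
    """Compute a 64-bit average hash of a grayscale image.

    `pixels` is a list of rows of brightness values. The image is
    downscaled to hash_size x hash_size by block averaging; each bit of
    the result records whether a cell is brighter than the global mean.
    """
    h, w = len(pixels), len(pixels[0])
    small = []
    for by in range(hash_size):
        row = []
        for bx in range(hash_size):
            ys = range(by * h // hash_size, (by + 1) * h // hash_size)
            xs = range(bx * w // hash_size, (bx + 1) * w // hash_size)
            block = [pixels[y][x] for y in ys for x in xs]
            row.append(sum(block) / len(block))
        small.append(row)
    mean = sum(sum(r) for r in small) / hash_size ** 2
    bits = 0
    for r in small:
        for v in r:
            bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

A uniformly brightened copy hashes identically (the mean shifts with the pixels), while an unrelated or inverted image lands tens of bits away. Crucially, nothing in this scheme can recognize a face or a novel image; it can only re-identify images already in the database.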

                                                                                                                  1. 5

                                                                                                                    Apple does scan this when it hits iCloud.

The difference is now they’re making your phone scan its own photos before they ever leave your device.

                                                                                                                    1. 3

                                                                                                                      Only if they are uploaded to iCloud. I understand it feels iffy that the matching against known bad hashes is done on-device, but this could be a way to implement E2E for iCloud Photos later on.

                                                                                                                2. 4

                                                                                                                  But one match is enough. The goal is to detect, not to rate.

                                                                                                                1. 7

                                                                                                                  Very interesting read.

I’m torn on some of the lessons: the author aimed for very small and attributes part of the success to that, but I think it would have been similarly successful by being merely small. At that time the competition was basically Adobe Acrobat Reader, which was already huge and slow. FoxIt Reader came a bit later than SumatraPDF and was bigger, but still much smaller than Acrobat Reader; then it grew larger and larger. IIRC the speeds were roughly as follows: SumatraPDF might take <~100ms to launch, FoxIt would take >~1s, and Acrobat Reader ~10s. That left a huge gap for a small-ish application.

I think the need to avoid non-small dependencies is far less important than stated. The author is still free to do so, obviously, but I think the reasoning in the blog post is sometimes wrong.

                                                                                                                  1. 10

AOSP’s Linux days appear numbered; progress on Fuchsia has been fast.

                                                                                                                    1. 1

Maybe, but possibly only for the Android ecosystem. I doubt that the server world will move away from Linux. And any company that wants to collaborate is better off using Linux than some Google OS that gives Google an edge over many functions (by simply keeping them private, so everyone has their own extensions).

                                                                                                                      1. 10

                                                                                                                        AOSP is the Android Open Source Project. https://source.android.com/

                                                                                                                        1. 2

AOSP doesn’t include anything required to run something you’d call Android for normal users:

                                                                                                                          • location
                                                                                                                          • sync
                                                                                                                          • playstore
                                                                                                                          • playstore APIs required for many apps (see location,sync..)
                                                                                                                          • google camera (no it won’t run unless modified to do so on approved hardware/os variations)

                                                                                                                          This is why people are using things like microG or opengapps on their custom ROMs.

Google explicitly got sued under anti-trust law because they dictate the “kind” of Android you can use when you want a real Android and not some useless thing that runs as many apps as Ubuntu Phone.

And let’s not talk about their driver stack, where every vendor ships their own crippled variation of Android with their own binary blobs you have to use - if you can even download them by yourself and don’t have to compile their own version of Android.

                                                                                                                          1. 5

                                                                                                                            I think the person you originally replied to was referring to the usage of the Linux kernel in the Android project and suggesting that the days that Linux will continue to be used as the base for the Android operating system are numbered.

                                                                                                                            I’m aware of the things you’ve mentioned here, but I’m not sure how they are related to the original point.

                                                                                                                            1. 2

                                                                                                                              Yeah you’re right, totally missed that

                                                                                                                        2. 8

is better off using Linux than some Google OS

                                                                                                                          But what if… Fuchsia was actually much less work to write and maintain drivers for, due to drivers running in userspace and hitting stable APIs and ABIs?

                                                                                                                          1. 5

That might be a nice side effect (I know little about Fuchsia internals other than that some former Be people were working on it), but my suspicion is Google is in it primarily for the non-copyleft.

                                                                                                                            1. 12

                                                                                                                              I think you have it exactly backwards. Getting rid of copyleft is a nice bonus, but the thing that seems to be causing actual problems is HW vendors refusing to forward-port their drivers to new kernel versions, whether or not they throw some source code over the wall. With a stable driver/kernel interface, this would not be a problem.

                                                                                                                              1. 3

I think you underestimate the ability of vendors to produce crap. Kernel-land vs. user-land and APIs are very secondary. As soon as they can write crap, they’ll write crap. Sure, the crap will be more isolated, but it won’t work better, and with less incentive to produce working stuff, it might even end up worse. I don’t know how things would turn out, but “works well in practice, not only in demos” doesn’t seem to have been the goal of vendors so far.

                                                                                                                                1. 3

Sure, the crap will be more isolated, but it won’t work better, and with less incentive to produce working stuff

                                                                                                                                  That’s the purpose of isolation though, isn’t it? To allow folks to write crap and not have it mess up the entire kernel. It’s accepting the human nature of writing bad code and trying to contain the damage.

                                                                                                                                  1. 3

                                                                                                                                    To play devil’s advocate: reducing penalties for writing crap incentivizes developers to write crap, so more crap will appear.

                                                                                                                                    To be clear, I’m entirely in favor of better OSes - but usually, when you incentivize something, people will do it. Ideally, we’d have developers trying hard to write good software and OSes that compensate for their mistakes - but in order to do that, we’ll need to incentivize them properly.

                                                                                                                                    1. 1

Yup, that’s the purpose. But if that’s the camera driver and it’s crap, you’re going to have trouble taking pictures, and that will cripple more than one app. I’m wary that they’ll manage to make things worse. Not that the OS would make that more prevalent, but these vendors are really, really terrible.

                                                                                                                                      1. 3

                                                                                                                                        It sounds like the problem for Google is that the vendor makes it work once and then abandons it, so upgrading the kernel becomes hard.

                                                                                                                                        If it never worked in the first place, it wouldn’t matter whether the kernel upgrade broke it, because it never worked in the first place. The problem Google seems to be facing is that it did work at one point, but keeping it working is a PITA due to the lack of encapsulation/isolation.

                                                                                                                                    2. 2

When running in unprivileged mode, drivers are that much easier to both debug and reverse engineer. This is on top of all the other advantages.

                                                                                                                                      It’s a win-win situation.

                                                                                                                                  2. 4

                                                                                                                                    I believe Fuchsia is MIT licensed. https://fuchsia.googlesource.com/fuchsia/

                                                                                                                                    1. 7

                                                                                                                                      Yeah, not copyleft like GPLv2 is.

                                                                                                                                      1. 2

                                                                                                                                        Sorry, my brain read non-copyleft as proprietary for some reason.

                                                                                                                                      2. 3

                                                                                                                                        looks like 2-clause BSD (aka “Simplified BSD License” or “FreeBSD License”): https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/LICENSE

EDIT: With a patent grant that is revoked upon litigation! https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/PATENTS
Not sure why they didn’t just use the Apache 2.0 license?

                                                                                                                                      3. 4

                                                                                                                                        but my suspicion is Google is in it primarily for the non-copyleft.

                                                                                                                                        You would be correct.

                                                                                                                                      4. 3

It’s ultimately up to Google whether Fuchsia has a stable ABI. I do think the biggest liability is the fact that this is, after all, a Google project - and they may choose to kill it regardless of how good or beloved it is (e.g. Reader).

                                                                                                                                  1. 15

                                                                                                                                    Heh, they did it for #ocaml. The channel is almost 18 years old. And I didn’t even say the channel was migrated but that it was merely available on irc.libera.chat because a) I wasn’t completely sure which network should be used (or at least I couldn’t justify it completely properly to others), b) there is no matrix bridge yet on libera, c) the discord bot has not been changed yet.

                                                                                                                                    They just made a) clear and c) easier (no need to think about the bot handling two networks at once).

                                                                                                                                    1. 10

I also had two of my channels destroyed; they only mentioned Libera and had not moved over yet. It certainly hastened my departure.

                                                                                                                                    1. 1

Is that basically a copy of io_uring? That’s not a criticism: it’s a clear improvement for Microsoft to be able to adopt ideas from others (and do it so quickly). (But they don’t seem to give credit.)

                                                                                                                                      1. 3

                                                                                                                                        io_uring itself seems inspired by QIO/AST from VMS, which NT itself embraced.

                                                                                                                                        I wonder if this specific implementation was meant for WSL, since it’s only exposed through low-level NT APIs and not Win32.

                                                                                                                                      1. 3

Compression speed is similar to CMIX’s but a thousand times slower than xz -9.

I would also add that xz’s -9 option is not necessarily an appropriate option: it’s exactly like -6 but with a larger dictionary.

I spent a few CPU cycles: xz -9 gets a ratio of 0.213 and xz --lzma2=preset=6,dict=1G gets 0.200.
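The dictionary-size effect is easy to reproduce with Python’s lzma module, which wraps the same liblzma that xz uses (the sample data and sizes below are contrived to keep the run fast):

```python
import lzma
import random

# A 200 KB pseudo-random block repeated four times. The repeats sit
# 200 KB apart, so the compressor can only exploit them if its
# dictionary is large enough to still "remember" the previous copy.
block = random.Random(0).randbytes(200_000)
data = block * 4

def xz_size(dict_size):
    """Compress `data` at preset 6, overriding only the dictionary size."""
    filters = [{"id": lzma.FILTER_LZMA2, "preset": 6, "dict_size": dict_size}]
    return len(lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters))

small_dict = xz_size(64 * 1024)    # 64 KiB: too small to see the repeats
large_dict = xz_size(1024 * 1024)  # 1 MiB: covers the whole input
```

With the 64 KiB dictionary the pseudo-random bytes barely compress at all; with the 1 MiB dictionary the output shrinks to roughly one block plus overhead. The same mechanism is why -9 (64 MiB dictionary) beats -6 (8 MiB) on large inputs even though the other settings match.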

And if I understand correctly, the model isn’t transmitted for decompression, saving space, but it has to be built again from scratch, and that requires the exact same hardware and software versions. The base model (lower compression) contains 56M parameters and the large one 187M parameters. Everything runs on an Nvidia RTX 3090 GPU. That doesn’t sound practical at all to me. Sure, it’s maybe a first, but…

                                                                                                                                        1. 1

                                                                                                                                          I don’t think practicality was something that was optimized for here. At least, I didn’t see that stated as a goal (implied or otherwise).

                                                                                                                                          1. 2

                                                                                                                                            I’ve used “practical” because it’s used in https://bellard.org/nncp/nncp_v2.1.pdf :

                                                                                                                                            We presented the first practical Transformer implementation able to outperform the best text compression programs on the enwik9 benchmark [8]. Unlike most today’s state-of-the-art natural language processing results, it is achieved on a desktop PC with a single GPU

Of course there are different levels of practicality, but I would rather have said that it is compatible with desktop-class hardware. I mean… at 1KB/s, it takes close to two weeks to compress enwik9 and again two weeks to decompress it!

                                                                                                                                            1. 2

                                                                                                                                              Ah interesting. Yeah, I’d say that’s a pretty loose definition of “practical,” although he did kinda clarify what he meant by it.