1. 31

    We’re disabling HTTP/3 for the time being, which should hopefully be picked up automatically upon restart, so restarting the browser should be enough. If it isn’t, disable HTTP/3 manually.
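
    If you want to flip it by hand, the switch lives in about:config. To the best of my recollection the pref is network.http.http3.enabled (treat the exact name as an assumption and double-check it in about:config on your version); in user.js form that would be roughly:

    // hedged sketch: turn HTTP/3 off until the fix lands; verify the pref name first
    user_pref("network.http.http3.enabled", false);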

    Edit: The bug was in HTTP/3, but not in “all of HTTP/3”. We solved this on the server end. A post-mortem will be held and I’ll make sure the outcome lands on lobste.rs.

    1. 16

      Do I understand from this that Mozilla can just update my browser settings remotely without my updating‽

      1. 19

        Update: In the end, we disabled H3 on the offending server, not in any client.

        1. 8

          Thanks for posting it here with your official hat on and being so honest! Mistakes can happen, and exactly this kind of behavior gives me confidence in the FF crew.

        2. 18

          Yes. This is part of the “Remote settings” service, which we can use to ship or unship features (all at once or gradually) or recover from breakage (like here!). We mostly use it as a backend for Firefox Sync and certificate revocation lists. Technically, we could also ship a new Firefox executable and undo/change settings, but that would realistically take many more hours. Needless to say, every “large” software project has these capabilities.

          BTW I’d encourage you not to disable remote settings, because it also carries certificate revocation updates and (obviously) helps with cases like this one. I understand that this is causing some concern for some people. If you’re one of those, please take a look at https://support.mozilla.org/en-US/kb/how-stop-firefox-making-automatic-connections

          Edit: I’m told Sync uses a different backend and some of this is inaccurate. It seems that lots of folks are linking to this thread, which is why I will leave this comment up, but struck through.

          1. 6

            How do I disable Remote Settings but keep CRL updates on?

            1. 5

              CRL updates are part of the payload that the “remote settings” service provides. So, I’m not sure what you are asking. I only know of the all-or-nothing switch.

              1. 2

                I think driib is asking “If I control the DNS for a public wifi point, can I use an NXDOMAIN for use-application-dns.net and a spoof of aus5.mozilla.org to force an update to my own (possibly evil) version of Firefox; and if so, how do I defend against that?”. But I could be wrong.

                1. 1

                  We sign our remote settings as well as our Firefox browser updates.

                  1. 1

                    Good.

            2. 1

              😱

        1. 4

          I just want to address one line in the “Dependency Risk and Funding” post:

          Daniel Stenberg of curl doesn’t wield that power (and probably also doesn’t want to).

          Knowing what I know about Daniel, he probably does not want that power, as the author says.

          However, he absolutely has that power.

          Daniel could hide deliberate vulnerabilities in Curl that would allow him to take control of every machine running it. He could also hide code that would destroy those machines. In fact, he could hide code in Curl to delete itself and whatever code is using it, as well as mirrors of it, thus effectively wiping Curl off of the face of the Earth, even more so than what Marak did.

          Just because people mirror Daniel’s code does not mean he doesn’t have the power to do serious damage.

          1. 4

            However, he absolutely has that power.

            I think you missed the point. Let me explain.

            I have commit and release power over a very popular library (Harfbuzz) that goes into millions of machines too. I also have distro packager signing keys for more than one Linux distro including Arch Linux. The issue here is not commit or even release power, the issue is visibility. I know full well that every freaking character I commit gets scrutinized by several folks with even more programming chops than myself. Even if I turned evil and wanted to hijack something I would be called up short and tarred and feathered so fast I’d never recover.

            Daniel is in a similar boat. Not only is he a known entity but the code he writes is directly scrutinized by others and he would have to be very devious indeed over a long haul to get something really dangerous past all the watchers.

            The NPM and other similar ecosystems with deep dependency trees (where most people writing and releasing apps don’t even know where most of the code is coming from at compile time) are different. It is ultimately quite easy to write and maintain something trivial but “useful” and then hijack high-profile projects with a dependency attack, in a way that is not easy for the actual maintainers of high-profile projects to do directly on their own projects.

            I believe that’s what the article was referring to when it said Daniel doesn’t have the same power. He would have to work a lot harder to get even something trivial through compared to how a lone actor deep in a Node dependency tree could so easily affect so many projects.

            1. 6

              A quick web search turned up this as well, if it’s the same person….

              https://gist.github.com/lclarkmichalek/716164

              1. 1

                Given that the owner of the colors.js repository replied to that Gist with a thank you, I think we can consider it confirmed that it is the same person. I don’t know about the Reuters article, but it looks conceivable it was the same. Even if the latter turns out to be a different chap of the same name, it seems like the FOSS world will be better off without relying on code from such a loose cannon.

            1. 4

              I wish there were lighter alternatives to the full-blown Mastodon server when you want to self-host a (small?) federated news channel… 🤔

              It’s good to see Gitea on the fediverse still, and the announcement that they received a grant to work on the project makes it better. Well done.

              1. 9

                There’s Pleroma, which seems much lighter and more intended for smaller deployments.

                1. 3

                  Even for Pleroma you need PostgreSQL. I’m waiting for something written in Rust with SQLite as the database and an API that’s compatible with Pleroma.

                  1. 1

                    I have just given up on trying to make it run again. Pleroma is an endless source of pain. It’s bloated as hell.

                    An actual option would be something more like https://humungus.tedunangst.com/r/honk

                  2. 14

                    There’s also the more whimsical honk: https://honk.tedunangst.com/

                    It’s written in Go, rather minimalist and lightweight.

                    Frankly, one barrier holding me back from trying it is that they’re using Mercurial for version control, and I barely understand git, so I’m not eager to half-learn some other system. I’ll probably get around to it eventually.

                    EDIT: Given it has libsqlite3 as a dependency, I’m assuming it uses SQLite for its database.

                    1. 7

                      @tedu literally just publishes tarballs; you don’t need to care about the SCM whatsoever.

                      1. 3

                        Fair point, I was assuming I would want to hack on it at some point, but it should be perfectly usable without modifying the source yourself.

                      2. 6

                        Mercurial is 100x more user friendly than git, don’t be afraid! For a long time I dreamt of an alternative timeline, where Mercurial won the lottery of history and DVCS is just a thing silently humming in the background, so obviously intuitive that people don’t really need to think about it in their day-to-day. But eventually I accepted the reality and grew to respect and appreciate the fickle tool that’s now standard instead, having learnt the contorted gestures required to hold it such that I don’t gain any more deep scars, and to keep my attention level always high enough and movement speed slow enough so as to usually avoid even the passive-aggressive light bruises and smacks it considers “expressions of affection”.

                        1. 7

                          Off topic: when it comes to VCS dreams, I don’t want Mercurial to come back, I want Pijul to win.

                          1. 1

                            What are the perks that make you prefer it over other solutions?

                            1. 3

                              It makes collaboration and LTS branch maintenance considerably easier because any two patches that could be made independently can be applied independently in any order. Darcs had that many years ago, but never gained popularity due to its low performance. Pijul developers found a way to make that fast.
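
                              A rough sketch of what that means in practice (the command names below are from memory and purely illustrative, so check them against the Pijul docs before relying on them): two changes recorded against unrelated parts of the tree can be applied in either order and land you in the same state, which is what makes carrying a hotfix onto an LTS branch a non-event.

                              # hypothetical session; HASH_A and HASH_B identify two independent changes
                              pijul apply HASH_A
                              pijul apply HASH_B
                              # applying HASH_B first and HASH_A second would produce the identical result,
                              # because independent patches commute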

                              1. 1

                                Ohh, wow, it has the darcs thing?? I had no idea!

                                1. 1

                                  Is this like hg graft?

                            2. 1

                              I’d like to see the end-user usability study that arrived at the nice round figure of exactly “100× more user friendly than Git”.

                              1. 1

                                Ouch, sorry; I sincerely only meant this as a subjective opinion and a rhetorical device; obvious as it may sound to you, it honestly didn’t even occur to me that it could be taken as a solid number; only now that you wrote it do I see this possibility, so I now realize it would have been better to at least prefix it with an “IMO” or an “I find it…”; sorry again; ehh, I find again and again that this whole language thing is such a fickle beast and a lossy protocol for expressing the things in my head, never ceasing to surprise me.

                            3. 1

                              The nice thing about Mercurial is that there’s nothing to learn when you’re first getting started because it has a reasonable CLI UI. Don’t hesitate to try it.

                            4. 6

                              I wish there were lighter alternatives to the full-blown Mastodon server when you want to self-host a (small?) federated news channel

                              Same. I currently run Pleroma, but compared to most other things I run it’s huge. I’ve been keeping a close eye on the https://github.com/superseriousbusiness/gotosocial project. It’s still at an early stage, but they made an initial release last year and it looks promising.

                              1. 1

                                That’s a very interesting perspective, thanks.

                                Lately I find myself wishing somebody would combine the “how to get off Google” genre of blog post with a “why language X is awesome” genre to create a “how to self-host/federate a whole lot of software using language X and minimal libraries”. There’s significant operational and security value in minimizing package dependencies. If you happen to be using lots of Golang services on your server that would be a very interesting case study.

                            1. 8

                              I have not explored Vim9 script, so I don’t know how hopeful or how sad to be about the language itself. The documentation promises potentially significant performance improvements: “An increase in execution speed of 10 to 100 times can be expected.” (That said, like many people, I would much rather write scripts for Vim in Lua or Python. But maybe Vim9 script will improve the syntax as well as the performance?)

                              But I do worry about this causing a significant rift in plugin development, since Neovim lists Vim9 script support as a non-goal.

                              1. 6

                                The rift is already there. In the latest release Lua is pretty much first class, and many plugins have already jumped ship and become Neovim-only. I don’t expect Vim9 to open the gap much wider than it already is, and if it does (for example, if Vim9-only plugins start having hot stuff people don’t want to live without) it would not be surprising to see that non-goal removed. After all, they have kept up pretty well with vimscript support and porting Vim patches in general.

                                1. 6

                                  Agreed. After Neovim 0.5 I would need a really good set of arguments to move away from Neovim and the thriving plug-in ecosystem using Lua.

                                  1. 2

                                    I could see pressure growing for vim9script support, but on the other hand, many may just author stuff in the legacy scripting language for cross-compatibility, because neither vim9script nor Lua is necessary.

                                    I do hate to see this rift for code that needs the performance or flexibility, though. It’s been pretty annoying for years when the core of an addon is implemented in a pluggable scripting language and you have to make sure that language is feature-enabled and available, and every addon picks a different one. I’m disappointed that vim9script is becoming just another one of these, just without the external library dependency, and, for now, definitely not available on nvim. It sounds like enough of a pain that I’d stay with legacy script, or do an IPC model like LSP, for compatibility, or just decide compatibility isn’t my problem.

                                    I think if vim9script takes off it will be through the sheer weight of vim’s popularity compared to nvim, and through people not concerned about compatibility, or willing to maintain two or more copies of the same logic. But I’m also not sure it’ll take off, and I would’ve liked to see first-class Lua in vim too. Just statically linked and guaranteed in the build would’ve been enough for me!!

                                    Anyway, maybe it’s sad-but-okay if it’s just time to start saying vim and nvim are becoming too different. Clearly that’s happened with lua-only plugins.

                                1. 11

                                  If I’m reading this article correctly, glibc is compliant by default when compiling for 64-bit architectures; it is only when building for 32-bit platforms that it does not use the new flag. The article makes it sound like most GNU/Linux systems are going to explode, but given that many distros (including my stomping grounds of Arch) aren’t even building for 32-bit platforms any more, this might not turn out to be that big a deal.

                                  1. 16

                                    A better title would’ve been “glibc in 32 bit user space is still not Y2038 compliant by default”, as suggested by someone on HN, which would be less clickbaity.

                                    1. 4

                                      Based on previous posts submitted here, the author of the blog seems mainly interested in making posts that highlight specific deficiencies in glibc vs musl and using that to imply that you should never use glibc for any reason ever.

                                      1. 1

                                        I have made the same observation. With Alpine being the obviously superior distribution, why isn’t everyone using Alpine?

                                      2. 4

                                        A more accurate headline would be ‘being Y2038 compliant on 32-bit platforms is an ABI break and glibc requires you to opt into ABI-breaking changes’. Any distro that wants to do an ABI break is free to toggle the default (though doing so within a release series is probably a bad idea). Given that none of the LTS Linux distros has a support window that will last until 2038, it isn’t yet urgent for anyone.
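
                                        For anyone curious what the opt-in looks like, here is a minimal sketch (assuming glibc 2.34 or newer on a 32-bit target; as far as I know _TIME_BITS=64 also requires _FILE_OFFSET_BITS=64):

                                        /* check.c -- build with e.g.: gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c
                                           Without the two macros, sizeof(time_t) stays 4 on 32-bit glibc. */
                                        #include <stdio.h>
                                        #include <time.h>
                                        int main(void) {
                                            printf("sizeof(time_t) = %zu\n", sizeof(time_t)); /* 8 once the 64-bit time ABI is selected */
                                            return 0;
                                        }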

                                        1. 3

                                          “glibc in embedded/I(di)oT will bite you in 2038”

                                        2. 7

                                          Some embedded boards on x86 / 32-bit ARM deployed right now will still be working in 2038. This is about them.

                                          1. 12

                                            Sure. But that brings up two things the article should have addressed:

                                            1. State the scope of the problem rather than generalizing to all GNU/Linux except Alpine.

                                            2. Note that fixing the compiler may or may not fix boards that are already deployed anyway; the stuff that is likely to still be running in 17 years is also the stuff that never, ever gets firmware updates. The real issue here is “what has been deployed before compilers started fixing this” and/or “what is currently being deployed to a never-updated long-term deployment without the proper compiler options turned on”.

                                            1. 3

                                              Fortunately the share of embedded Linux systems still using glibc is tiny.

                                              1. 4

                                                Quite so! And of those that do, the number of them that are 32-bit and don’t ever get upgraded is also tiny. Maybe a single system in a non-critical role, like a greenhouse watering system watchdog, actually meets these criteria.

                                              2. 2

                                                Fortunately there’s more awareness about things like that in embedded development. It’s not perfect of course, but you tend to deal with custom glibc and similar issues more often, so a lot of people deploying things that depend on real dates today will know to compile the right way.

                                            1. 8

                                              Anybody know of a current breakdown of the status of the Audacium and Tenacity forks? Both seem to be making active progress, but all the “news” out there on Reddit and elsewhere is at least a couple of months old, and it would be useful to have a current progress report that stacks them up against the official 3.1.3.

                                              1. 5

                                                Also, what exactly are they trying to succeed at? The Audacity “tracking” is crash analytics, and they ditched Google for the data gathering as requested.

                                                1. 2

                                                  Telemetry was one issue, but the official project also has lots of other long-standing issues. One major one is the requirement for a forked wxWidgets build to run right. Other oddities of the build system and the ability to use more system libraries are things some of the forks were addressing besides ripping out telemetry. Some of the forks planned new features too. Obviously any other features or bug fixes need to be compared to what has been done in the official project. Have any of the forks been keeping up with backporting? Are any of them ahead of each other feature-wise?

                                                  1. 2

                                                    a forked wxWidgets build to run right

                                                    Ok, but I’d guess that’s less of a political issue, and the original project would also like to migrate away from that? Personally I’d like to see them migrating to Qt or GTK, as the current library doesn’t understand DPI scaling even remotely, and the issue on wxWidgets for that seems pretty stale or requires huge amounts of work. (Take this with a grain of salt, I just glanced over that ~3 weeks ago.)

                                                    them ahead of each other feature-wise

                                                    It does look like Tenacity has a UI refresh I like. But ultimately I think they’re better off merging all their hard work. From my experience it takes a lot of work to maintain even (in comparison) tiny open source software that has a stable user base and tries to get things right (not breaking the stable experience, etc.).

                                                    1. 2

                                                      Your guess would be wrong. The original project did not want that; they were offered community patches that accomplished major parts of it along the way, and those were consistently refused. Honestly, there is a long history of this project not playing very nice with the FOSS community. I suppose there is some hope the current project managers might take a different stand on that, and maybe we will see progress on other fronts, but if playing nice with FOSS folks is now on the table, their licensing and telemetry fiascoes straight out of the gate were not a good foot forward.

                                                      We’ll see, I guess, but there is a reason there are forks, and some (like myself) are watching from the sidelines hoping to see momentum build behind at least one of them. At this point it isn’t clear to me which one that might be. Hence the question.

                                              1. 12

                                                It seems like I’ve seen this before on here, and I didn’t think it was worth celebrating with a new news item. Then I saw the PR has 3973 commits! That has to be a record of some kind. I guess this has been in the works for a while, so it’s not surprising it has been news before. More power to you guys I guess, have fun.

                                                1. 13

                                                  There’s been a lot of preliminary work to upstream parts of it, make the runtime reentrant, etc. But as far as I understand, this is the actual big one, the PR that pushes OCaml into the 5.0 era 🙂

                                                  1. 13

                                                    The podcast Signals and Threads covered multicore OCaml quite recently: https://signalsandthreads.com/what-is-an-operating-system/

                                                    It’s a great podcast!

                                                1. 1

                                                  I don’t see any use case for this. Graphics software and browsers won’t support it, and if it has to be converted to SVG to be usable, then it’s basically just one more step. All old technologies could probably be done in a better way if started from scratch, but the question is whether it’s worth it.

                                                  1. 3

                                                    Have you noticed how many novel raster image formats have gotten browser support in the last few years? It actually isn’t that high a bar to pass. There is some bureaucracy involved, but the hardest part is usually getting enough developer buy-in and agreement on the details of a format spec. Once enough developers like the format and agree on how it should work, submitting an implementation to one browser vendor and getting buy-in isn’t an impossible task, and once you get one, the others have been following suit pretty quickly.

                                                    1. 8

                                                      Oh no. For image formats the bar is very high. In the last 25 years we’ve got:

                                                      • WebP, which came into existence only because of the enthusiasm for the VP8 codec (which in retrospect was overhyped and too early to get excited about). It took several years of Google’s devrel marketing and Chrome-only website breakages before other vendors relented.

                                                      • APNG, because it was a relatively minor backwards-compatible addition. Still, it was a decade between when it was first introduced and when it became widely supported.

                                                      • AVIF. It’s still not well supported. It got in only because of enthusiasm for the AV1 codec (the jury is still out on whether it’s a repeat of the WebP mistake). It’s an expensive format with legacy ISO HEIF baggage, and nobody would touch it if it wasn’t an “adopt 1 get 1 free” deal for browsers with AV1 video support, plus optimism that maybe it’d be easy for Apple to replace HEIC with it.

                                                      Video codecs got in quicker, but market pressures are quite different for video. Video bandwidth is an order of magnitude more painful problem. Previous codecs were patented by a commercial entity that was a PITA for browser vendors. OTOH existing image codecs are completely free and widely interoperable. While not perfect, they work acceptably well, so there isn’t as much appetite for replacing them.

                                                      The future of JPEG XL in browsers is uncertain, because AVIF may end up being a good-enough solution to WebP’s deficiencies. AV1 support is a sunk cost, and browser vendors don’t want more attack surface.

                                                      JPEG 2000 is dead. JPEG XR is dead. JPEG XT wasn’t adopted. Even arithmetic-coded old JPEG wasn’t enabled after the patents expired.

                                                      1. 1

                                                        JPEG XL has a really strong chance thanks to JPEG compatibility and best-in-class lossless compression.

                                                        https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg

                                                        1. 3

                                                          That’s what the authors of JPEG XL say, not what browser vendors say. And in this case browser vendors are the ones making the decision.

                                                          JPEG XL does have very good compression and a bunch of flashy features, but browser vendors aren’t evaluating it from this perspective. They are looking at newly exposed attack surface (which for JPEG XL is substantial: it’s a large C++ library). They are looking at risk of future problems (there’s only a single implementation of JPEG XL, and vendors have been burned by single implementations becoming impossible to replace/upgrade/spec-compliance-fix due to “bug-compatible” users). They are weighing benefits of new codec vs cost of maintaining it forever, and growth of code size and memory usage, and growth of the Accept header that is always sent everywhere. You could say the costs are small, but with AVIF already in, the benefits are also small.

                                                          Here are my bets:

                                                          1. If Safari adds AVIF, then AVIF wins, and JPEG XL is dead. This is because AVIF will become usable without content negotiation, which will mean it will be a permanent requirement of the web stack, and browsers won’t be able to get rid of it. Supporting JPEG 2000 when everyone else supported WebP didn’t work out well for Safari, so I don’t expect Safari to add JPEG XL first.

                                                          2. OTOH if AV1 flops, or gets obsoleted by AV2 before AVIF becomes established, then we could see browser vendors drop AVIF and add JPEG XL instead (unless they keep AV1 anyway, and maybe go for a lazy option of AVIF2).

                                                          1. 1

                                                            Chrome & Firefox have JPEG XL implemented behind flags in shipped builds. (I can look at JPEG XL images in Firefox Nightly on Android right now.) WebKit is currently implementing Bug 208235 - Support JPEG XL [NEW]:

                                                            • Bug 233113 - Implement JPEG XL image decoder using libjxl [RESOLVED FIXED]
                                                            • Bug 233325 - [WPE][GTK] Allow enabling JPEG-XL support at build time [RESOLVED FIXED]
                                                            • Bug 233364 - JPEG XL decoder should support color profiles [RESOLVED FIXED]
                                                            • Bug 233545 - Support Animated JPEG-XL images [RESOLVED FIXED]
                                                    2. 2

                                                      Maybe it’s not a perfect match for the browser, but, for example, Qt applications can benefit largely from this by reducing the complexity of the icon rendering implementation.

                                                      The same goes for embedded or, in general, memory-constrained applications. TinyVG graphics can be rendered with as little as 32k of RAM, so there is also a speed benefit in that (less memory usage => faster).

                                                    1. 3

                                                        As a long-time happy user of two Kinesis Advantage keyboards (one for home, one for work, because once you have one you can’t go back), this keyboard just gave me a tinge of buyer’s remorse. I pre-ordered two Keyboardio Model 100 keyboards a while back. The regret is that if I had known Kinesis had this in the works, I might have held back, pre-ordered only one, and given this Advantage 360 a side-by-side run with the Keyboardio.

                                                      Looking over the press release this seems to have addressed many of my minor gripes with the Advantage.

                                                      1. 7

                                                        Yeah it looks like they are finally offering switches that have tactile feedback. I was a fan of the shape of the old Advantage but I didn’t have the patience to desolder and replace every single one of the MX Brown switches so it never felt like a serious contender to me.

                                                        This seems like they’re playing catch-up to Keyboardio, which is … well, a step in the right direction, though IMO the wood still looks a lot cooler.

                                                        (disclaimer: I have a business relationship with Keyboardio, but it doesn’t involve the Model 01 or 100; just a big fan of those)

                                                      1. 5

                                                        I’m glad the project maintainers verified the status quo, but I can’t help but think they are a couple of years late, and it would have been better if they had allowed some of the energy to go into co-maintainers instead of forcing a fork. Obviously in this case there was enough momentum to keep it going strong, but I also remember the couple of years where it was unclear what was going to happen, and the fragmentation was a turn-off for everybody.

                                                        1. 8

                                                          Nonsense.

                                                          This is an opportunistic article that rides the current wave of attention but misdiagnoses the root problem. Lots of commercial software has been affected by this specific bug, by similar library bugs in the past even when the company has sponsored the source projects, and by similar design stupidity in their own code. This sort of bone-headedness isn’t the fault of an open-source model; the problem is more fundamental to the nature of programming (and programmers).

                                                          1. 2

                                                            You have misunderstood the article; whether accidentally or on purpose, I don’t know. It’s not trying to diagnose the root problem, it’s trying to say that many crucial foundational open source libraries are essentially unpaid, unsupported hobby projects, and that this is morally wrong.

                                                          1. 10

                                                            I just use https://github.com/tpope/vim-eunuch which includes a :SudoWrite in its list of goodies.

                                                              Everything in the plugin is pretty easy to live without, but in my mind having it all on hand in one simple plugin is worth it.

                                                            1. 3

                                                              Same here since 2014, with the addition of this shortcut in my rc file:

                                                              cmap w!! SudoWrite
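                                                                " typing w!! on the : command line expands to SudoWrite (vim-eunuch’s sudo-backed write)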
                                                              
                                                              1. 1

                                                                Nice! I’ve recently added https://github.com/lambdalisue/suda.vim for Neovim, but I may look at this instead, especially as it adds some other things

                                                              1. 7

                                                                As an Arch Linux packager I can sympathize with the Gentoo folks here. It’s been quite frustrating to have working solutions deprecated before working replacements are in place.

                                                                1. 9

                                                                  Honestly, this xkcd is the perfect summary of the entire Python ecosystem. Since that comic was authored, the situation has only got more complicated.

                                                                  1. 5

                                                                    Every time I’ve actually talked to someone who claimed to be fighting with that, their story inevitably led back to “well, first I looked at the single standard default tooling, and decided against using it”.

                                                                    (yes, there is a simple default stack of packaging tools: setuptools as the builder, pip as the installer, venv as the managed/isolated environment. They work well and do their tasks. No, I don’t know why people seem to go to nearly any lengths and unimaginable levels of pain and frustration to try to avoid them)
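
                                                                    For reference, the boring default workflow being described looks roughly like this (a sketch, not a prescription):

                                                                    python3 -m venv .venv              # standard-library venv: create an isolated environment
                                                                    . .venv/bin/activate               # activate it (POSIX shells)
                                                                    python -m pip install requests     # pip installs into the venv
                                                                    python -m pip install -e .         # editable install of your own setuptools-built project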

                                                                    1. 2

                                                                      The “standard” tooling changes extremely often in the Python ecosystem. Distutils, setuptools, PEP-517 (and even this is ridiculously fragmented with flit, poetry and a whole bunch of other options). None of it is “standard” by any means. There is also no clear migration path for each of these methods, and most Python projects really don’t care.

                                                                      1. 1

                                                                        The “standard” tooling changes extremely often in the Python ecosystem. Distutils, setuptools, PEP-517 (and even this is ridiculously fragmented with flit, poetry and a whole bunch of other options). None of it is “standard” by any means.

                                                                        I don’t really know where this idea comes from.

                                                                        Well, actually, I do, and I said so above: pip is 13 years old, setuptools is 17 years old, virtualenv is 14 years old (and the venv module containing its core functionality has been in the Python standard library for 9 years). The problem isn’t “Python” or lack of a “standard”. Those are the standard tools. They’ve been around for ages, they’re battle-tested, they do their jobs extremely well, and their end-user interfaces evolve extremely slowly (when they evolve at all). I’ve been writing pip install for literally over a decade at this point!

                                                                        And that’s it. That’s the whole thing. People seem to get in this vicious loop where for whatever reason they utterly refuse to even look at the core standard tooling and then go cobble together their own monstrosity out of baling twine and duct tape, and then blame Python or “Python packaging” for the resulting problems.

                                                                    2. 3

                                                                      I honestly assumed you were linking to XKCD 927 but figured I’d click through for the laugh anyway. I had no idea there was a Python-specific variant.

                                                                  1. 1

                                                                    Rewriting Redis in a language other than C might be a worthwhile endeavor, but why Ruby? I looked for an answer to that question on the landing page, or hints at it in the chapter titles. Finding none, I bounced out…

                                                                    1. 3

                                                                      “Who is this for?” is in big bold letters above the fold, even on my phone.

                                                                      taken from the author’s opening statements:

                                                                      Anyone who worked with a web application, regardless of the language, should have enough experience to read this book.

                                                                      Considering Ruby is one of the most widely comprehended web languages, with a very strong stdlib for TCP, threads, and a lot of the other things he would need to do this, it seems ideal for illustrating the concepts.

                                                                      I appreciate the author taking the time to step through technology that powers a lot of the underlying systems web developers use, in a language they are probably more comfortable grokking; surely if this were more common the web would be a better place.

                                                                      1. 1

                                                                        For educational purposes?

                                                                      1. 11

                                                                        “The future of the internet is here,” and it is 78.2% JavaScript and runs in Electron? Count me out for now.

                                                                        1. 1

                                                                          Having to write a file to disk to cd seems like a step backwards. At the very least using source or eval directly instead of temp files would be an improvement.
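
                                                                          Something along these lines is the pattern I mean (the tool name is a placeholder; the point is that the helper prints the chosen directory and a tiny shell function does the cd, no temp file involved):

                                                                          # sketch for ~/.bashrc or ~/.zshrc; "fuzzy-cd-tool" stands in for the actual binary
                                                                          j() {
                                                                            local dir
                                                                            dir="$(fuzzy-cd-tool "$@")" || return   # the tool prints the target directory on stdout
                                                                            cd "$dir"
                                                                          }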

                                                                          On the other hand, my shell already does this kind of fuzzy completion and navigation (built into zsh), as well as fuzzy filename completion (using fzf), so I think I’m good without…

                                                                          1. 1

                                                                            I use bash :)

                                                                          1. 25

                                                                            This is one of the least informative postmortems ever. I knew more details about the outage from 3rd party observations before it was even over.

                                                                            1. 4

                                                                              This is not a postmortem. That will likely come later once there has been a thorough investigation.

                                                                              1. 7

                                                                                It’s Facebook, so I doubt there will ever be a real engineering postmortem.

                                                                                1. 4

                                                                                    My point exactly; I think this is what we get. I suspect there will be another post in a few days or weeks with about 10× as many words and some investor-calming action plan “so this doesn’t happen again”, but no more technical details than what we have.

                                                                            1. 8

                                                                                I’m never one to advocate for a web stack given an alternative, but I wonder how much of this is down to using the GPU, rather than anything to do with the C++ vs JavaScript part. If you use WebGL / WebGPU to do the transitions / animation, then I’d expect that this should be completely fine in the browser. If you write CPU-side C++ to do the transitions, then the RPi 4’s CPU might struggle at 4K.

                                                                              1. 2

                                                                                  I suspect you’re right. My gut feeling is that you could achieve these effects with CSS transitions, and the performance would be good. It probably wouldn’t amount to much code, or take very long, either.
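
                                                                                  Something like this minimal cross-fade is what I have in mind (untested on a Pi, and whether it actually runs on the GPU depends on the browser and driver):

                                                                                  /* two stacked slides; toggling .active cross-fades between them */
                                                                                  .slide        { position: absolute; inset: 0; opacity: 0;
                                                                                                  transition: opacity 1s ease; will-change: opacity; }
                                                                                  .slide.active { opacity: 1; }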

                                                                                1. 3

                                                                                  I tried to do this in the browser with CSS transitions and animations first. Nothing seems to be GPU accelerated on the pi. Performance was really bad and CPU usage very high.

                                                                                    I didn’t try WebGL. At the time I thought it would be easy to do with native OpenGL. It’s just 2 triangles and a texture, right?

                                                                                  It turned out to be a little bit more involved than that but I ended up sticking with it.

                                                                                  1. 2

                                                                                      You probably didn’t do anything to ensure the browser is accelerated. Firefox wouldn’t enable it automatically because the GPU is not on the qualified list. For good reason – WebRender is still glitchy on v3d. But you can use the old GL layers backend with layers.acceleration.force-enable; that would run CSS transforms on the GPU just fine. (That backend will be gone eventually, but for now it still exists.)

                                                                                    1. 1

                                                                                      I tried to do this in the browser with CSS transitions and animations first. Nothing seems to be GPU accelerated on the pi. Performance was really bad and CPU usage very high.

                                                                                      Oh, cool, I didn’t see CSS mentioned in your post. I don’t think that CSS effects really turned out to be the big deal they were introduced as, but they have been around for probably ten years. They’re very standard, so I’m surprised they’re not GPU accelerated on the Pi.

                                                                                  2. 1

                                                                                      WebGL / WebGPU does tend to be impressively performant and quite viable for many projects — as long as the target is a platform that doesn’t have any trouble running a browser in the first place. In this case you’d still have the overhead of a browser just to get to your app, and even if the platform could run your code just fine, you’ve lost access to all the resources the browser itself is hogging. For the Pi and other embedded platforms that’s often a deal-breaker.

                                                                                  1. 5

                                                                                    This is pretty careless, and the fact that they know and don’t care makes it egregious. As you well point out, there isn’t even a usability tradeoff in play here; just composing the command differently would help.

                                                                                    Perhaps it’s worth reminding them that it isn’t just other trusted users of a system getting access to the secrets; it is any process on the system, authorized or not. They have some (not lots, but some) measures to protect users against keyloggers, clipboard scrapers, shoulder surfers, and screen recorders; yet they just blindly trust every process on the system with CLI secrets?