Threads for glacambre

    1. 6

      git config --global fetch.prune true

      git config --global fetch.pruneTags true

      This one can be rather dangerous. At $WORK, a colleague put this in his global gitconfig, forgot about it, and some time later our internal package manager stopped working for him. It turns out that part of our package manager’s configuration is stored in git, and in order to switch between different modes, the package manager created different local references that were wiped by every other invocation. This was quite mystifying as I was completely unable to reproduce it. After two hours of debugging over Slack I had to just ask him to send me a strace trace and then asked him to gradually remove the files that were read by the package manager and its children until we saw the same behavior. I really wasn’t expecting removing his .gitconfig to solve the problem.

      I solved it in the package manager by forcing fetch.prune to false, but I’m traumatized enough that I’ll never use that option in any of my gitconfig files.
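
      If you’re curious what “forcing” looks like, here’s a minimal sketch of one way to do it, assuming the tool shells out to git (the -c flag overrides the user’s gitconfig for a single invocation; the function name and arguments are illustrative, not our actual code):

      import subprocess

      def safe_fetch(repo_dir):
          # -c beats the user's global/system gitconfig, but only for this
          # one git invocation, so a stray fetch.prune setting can't fire.
          subprocess.run(
              ["git", "-c", "fetch.prune=false",
               "-c", "fetch.pruneTags=false", "fetch"],
              cwd=repo_dir, check=True,
          )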

      1. 5

        fetch.prune is safe, that only removes remote-tracking branches, not local ones.

        fetch.pruneTags does remove all local tags that are not on the remote though. The Git documentation does have a warning about it: https://git-scm.com/docs/git-fetch#Documentation/git-fetch.txt---prune-tags

        But sounds more to me like that package manager is just horribly written and needs to be fixed.

        1. 3

          fetch.prune is safe, that only removes remote-tracking branches, not local ones.

          According to https://git-scm.com/docs/git-fetch#Documentation/git-fetch.txt---prune , --prune will delete tags too, and so would not be safe.

          But sounds more to me like that package manager is just horribly written and needs to be fixed.

          Despite sharing this opinion, I think it’s a bit presumptuous to come to this conclusion without any knowledge of said package manager besides my anecdote.

      2. 1

        I’ve worked on GCC’s Ada front-end at $WORK, the main internal testsuite is >20000 “snapshot” tests. I find it absolutely awful to work with and I wish we had unit tests instead (or better, something that would ensure the semantics of the higher-level language are preserved through the various IRs, although I don’t blame $COMPANY for not wanting to re-implement CompCert for Ada).

        1. 19

          Company: AdaCore

          Company site: https://www.adacore.com/

          Position(s):

          Location: Depends on the position. Most permanent positions can be remote, the internships will have to be in-office. We have offices in New York, Bristol, Paris, Vannes, Grenoble, Toulouse, Dresden and Tallinn.

          Description: At the beginning, AdaCore was the company behind GNAT, GCC’s Ada front-end. We’ve built a whole ecosystem (formal proof tooling, static analyzers, fuzzers, coverage tools, build tools, IDEs…) for the Ada language and are now aiming to become the “one-stop shop” of the embedded safety-critical world (hence the occasional Rust positions :) ).

          Tech stack: See the corresponding positions. There is of course a lot of Ada, but also a lot of non-Ada things: the GCC stuff is obviously C++, the static analysis stuff is OCaml, the Rust stuff is Rust. All of our testing/build infrastructure is in Python. Knowing Ada isn’t a requirement - we can offer training if the position requires it.

          Compensation: “Industry standard”, whatever that means. I’ve been told our US colleagues have the same benefits as the ones enforced by the law in Europe (extremely good health insurance, lots of vacation days etc.)

          Contact: Please go through the website.

          1. 1

            Do you think there will be full-time positions in formal verification/static analysis anytime soon?

            1. 2

              Short answer: I don’t know :). Slightly longer answer: these are two separate teams (the one working on Spark and the one working on Infer). I think it’s unlikely the Spark team will be offering full-time positions anytime soon as I believe they recently increased their headcount. For the Infer team, we’ve asked for new positions to be opened, but it hasn’t been okayed yet (I don’t know if that means the request has been denied or if it’s just a slow process, I am as far from the people making these decisions as one could be).

            2. 1

              Compensation: “Industry standard”, whatever that means. I’ve been told our US colleagues have the same benefits as the ones enforced by the law in Europe (extremely good health insurance, lots of vacation days etc.)

              Given a portion of the work is with the defense industry, I imagine compensation is more in line with a larger business versus a startup? I’ve had ambient awareness of AdaCore for a while and am quite interested in Rust PM/Advocate work (considering I do similar work where I’m at now, but it’s not what I’m getting paid for), but I want to know it’s worth my time to apply first.

              1. 1

                Sorry for the delay answering you, I did not know how much this position was paid and had to ask HR… who told me that the salary range was available on the offer posted on LinkedIn, and did not give me any more info than that. I don’t have a LinkedIn account and so can’t go check the amount (too bad, I’m curious now 🫠).

                I know I don’t have a FAANG salary. I know also I don’t have a terribly low startup salary (in fact, when I joined AdaCore, they offered me 50% more than what I was paid at the startup I was working for, without me even attempting to negotiate anything). I know I’m paid the average salary someone in my position and with my level of experience is paid at AdaCore (I was given this information by the person who hands out raises), but I also believe I’m paid slightly below what the median developer with my experience gets paid in my area.

                1. 2

                  I appreciate the pointer. I found the listing on LinkedIn and it’s in line with expectations. For other readers, they list the range at $160-200k.

                  What’s your opinion on the culture of work over there @glacambre? My issue with my current employment is poor communication all around in my team that I have no power to improve as my suggestions are consistently ignored, making me feel quite alienated.

                  1. 2

                    What’s your opinion on the culture of work over there @glacambre?

                    I’m not sure my opinion/experience will be very relevant to someone working as a Rust product manager :). In my experience, Product Management has long been pretty bad at AdaCore, especially when it came to the product I’m currently working on (the product manager had neither the time nor the interest to manage the product). However, during the past year, things have started to improve a lot, thanks to a new Product Manager who has been doing stellar work.

                    I don’t feel like it’s worked yet, but there are attempts at improving communication between teams/orgs within the company.

                    Regarding agency/ability to improve things, the company has historically had a “do it yourself” attitude: you were free to go work on things that were outside of your purview and make them better. However, as the company grew, natural silos started to form and it’s not as easy anymore (that doesn’t mean it’s impossible, just that it requires more energy). Initiatives to work around the issues these silos cause have been launched (e.g. orgs treating each other as customers and enforcing SLAs for requests, standardization of practices, better documentation).

                    Finally, about the Rust team: I interact with some of the people on that team frequently and they’re pretty nice. I believe the Product Manager <-> Team Lead communication in particular will be great; the Team Lead of the Rust team is great at documentation, technical design and project management.

            3. 3

              is there any terminal stack out there that doesn’t rely on in-band signaling like this? Wondering how, in a world where some of this stuff is out of band, you would handle coordination between that and the “main” output/input streams.

              1. 12

                Windows, prior to 2016.

                If you wanted to write to the console, you’d use WriteConsole. If you wanted to read the title, you’d use GetConsoleTitle. The real difference is that every message, including boring text output, is wrapped in some form of message descriptor.
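
                (A hedged illustration of that API shape, poked at from Python via ctypes - GetConsoleTitleW is the real Win32 call, the buffer size is arbitrary, and this is Windows-only, of course:)

                import ctypes

                kernel32 = ctypes.windll.kernel32  # Windows-only, naturally

                # Out-of-band query: ask the console host for the title directly,
                # rather than emitting an escape sequence and then parsing the
                # reply back out of the input stream.
                buf = ctypes.create_unicode_buffer(1024)
                kernel32.GetConsoleTitleW(buf, 1024)
                print(buf.value)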

                The “coordination” part wasn’t hard: the calls are synchronous, so a single program maintains its order. Things get messy when multiple processes talk to a single terminal, but that’s true for in-band sequences too. Synchronous execution wasn’t that expensive originally, because these calls were handled by the kernel; it became more expensive later when a context switch was needed for conhost to process each message before the program could start the next.

                AFAICT there’s some vestigial folklore about in-band being necessary due to serial cables and similar thinking. In the very early days, that was true; but fairly quickly we saw different terminals with different capabilities, and ioctls to distinguish them. Then virtual terminals came along, and those ioctls needed to be handled by the terminal emulator - hence, pty. The raw text stream is insufficient to implement a terminal, and programs end up with isatty and similar to know if full terminal support is available. Ordering between text and ioctls matters. Even an ssh session has to be more than just an encrypted byte stream - it needs to packetize ioctls, interleave that with boring text, and encrypt all of it. (Edit: if you’re still not convinced, my personal favorite is SIGWINCH. Try putting that on a raw, unpacketed byte stream.)
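
                (A minimal sketch of the POSIX side of that SIGWINCH point, in Python: the resize notification is a signal, entirely outside the byte stream, and the new size has to be fetched with a separate ioctl:)

                import fcntl, signal, struct, sys, termios

                def on_winch(signum, frame):
                    # The new size isn't in the data stream at all; it must be
                    # read out-of-band with the TIOCGWINSZ ioctl on the tty.
                    rows, cols, _, _ = struct.unpack(
                        "HHHH",
                        fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, b"\0" * 8))
                    print(f"resized to {cols}x{rows}")

                signal.signal(signal.SIGWINCH, on_winch)
                while True:
                    signal.pause()  # resize the terminal to trigger the handler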

                I’m fairly sure the NT folks looked at this in ~1990 and decided there was no good reason to keep both in-band and encapsulated messages, so they should just do one. That one happens to mean that cat/type can’t corrupt the terminal, but it can’t do anything else either - piping colored output to grep is indescribable.

                I’m really not sure how to feel about Windows having VT escapes now. It’s necessary for compatibility, but it also feels like substituting a 1970s design for a 1990s design. The original model seems to have been a genuine attempt to design something on the assumption that all terminals are graphical, support color, mice, resizing, and scrollback, and then to come up with an API to describe those things.

                1. 8

                  Yeah, this bugs me a lot. Terminals are a useful idea (a text stream as an interface), implemented on top of the biggest pile of legacy I know of. It’s not only in-band control signals, it’s also terminfo, pty master-slave pairs where the kernel has to be involved, isatty, etc.

                  It feels like it should be easy to do something 10x better by just removing bad parts, and 100x better by adding judicious extensions.

                  Of course, the big problem there would be adoption — the value of terminals is that it’s a common standard applications are developed against.

                  But we don’t even get to that problem somehow? There aren’t even three different semi-dead alternatives; hardly anyone even tries to do something here! ngs, arcan and terminal.click are sort of barking roughly in the direction of the forest here, and that’s it?

                  1. 4

                    ngs, arcan and terminal.click are sort of barking roughly in the direction of the forest here, and that’s it?

                    ???

                    https://lobste.rs/s/xhcdg3/majjit_lsp#c_4agnik

                    I linked a whole bunch of prior art here just a few days ago, in a reply to you

                    https://domterm.org/index.html

                    https://hyper.is/

                    https://github.com/unconed/TermKit

                    Dozens of others here - https://github.com/oils-for-unix/oils/wiki/Interactive-Shell

                    Is there some reason those don’t “count”?

                    1. 2

                      Thanks! I missed that comment originally. I did see the interactive shell page before, and I wanted to link it instead of inventing the list of prior art on the spot (which, as expected, I did a terrible job of), but I failed to find it after 10 minutes of googling :(

                      1. 3

                        Yeah, unfortunately GitHub wikis are not crawlable on purpose. I think they ran into SEO spam or something.

                        I somehow didn’t find this out until using GitHub wikis for 5+ years!

                        GitHub prevents crawling of repository’s Wiki pages - no Google

                        https://github.com/isaacs/github/issues/1683

                        There used to be a site that mirrored them, but it’s down. So in the back of my TODO list is to mirror our Wiki to the website, like that other site used to …

                  2. 4

                    I doubt it. A physical terminal is something you hook up to the computer via a serial port. It would have been expensive if terminals required two serial ports (one for control, one for data), and as you stated, there would have been issues synchronizing between the two channels.

                    1. 2

                      Granted, but OTOH 1995 was 30 years ago. Maybe something is out there with a system designed to work in a different way. I get that it wouldn’t be a universal system by any stretch of the imagination, but I’d be surprised if there hasn’t been somebody who provided some ideas that aren’t merely “make stdout follow a schema”

                      1. 3

                        There was a system, first documented in 1971, that had separate control and data channels. That system was (and technically, still is) FTP. If you had both a server and client that supported the entirety of FTP, you could initiate a transfer of files between servers from a client. It fell out of favor for two reasons (that I know of): 1) there was no standardized way of encrypting the command and data channels; 2) it ran afoul of NAT because of how it worked (you could run it through a NAT, but it required a stateful firewall to handle it properly). It’s not a terminal, and there’s not much to synchronize between the command and data channels, but it does exist as a counter to a single channel.
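
                        (You can still see the two channels from Python’s stdlib - hypothetical host, but each transfer below opens a fresh data connection while the control connection stays free for commands:)

                        from ftplib import FTP

                        ftp = FTP("ftp.example.org")  # control channel (port 21)
                        ftp.login()                   # anonymous login
                        # LIST's output travels over a separate, per-transfer data
                        # channel; the control channel stays free (e.g. for ABOR).
                        ftp.retrlines("LIST")
                        ftp.quit()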

                    2. 3

                      It’s not exactly a terminal/shell combination, but Neovim’s UI protocol comes to mind. It’s entirely msgpack-rpc based, and UIs (including the default terminal-based one) communicate with Neovim by calling the nvim_input function with keypresses as a parameter and receive different kinds of events (grid_line, grid_resize, cmdline_show…). Neovim then decides where the keypresses should go, and that could be a text buffer or a process.
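
                      A hedged sketch of how small that wire format is (assuming the msgpack package is installed; nvim --embed does speak msgpack-rpc over stdin/stdout):

                      import os, subprocess, msgpack

                      # Start an embedded Neovim that talks msgpack-rpc on stdio.
                      nvim = subprocess.Popen(["nvim", "--embed"],
                                              stdin=subprocess.PIPE,
                                              stdout=subprocess.PIPE)

                      # A msgpack-rpc request is just [type=0, msgid, method, params].
                      nvim.stdin.write(msgpack.packb([0, 1, "nvim_input", ["ihello<Esc>"]]))
                      nvim.stdin.flush()

                      # The reply [1, msgid, error, result] comes back as structured
                      # data, not as an in-band escape sequence to be parsed out.
                      unpacker = msgpack.Unpacker(raw=False)
                      unpacker.feed(os.read(nvim.stdout.fileno(), 4096))
                      for msg in unpacker:
                          print(msg)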

                      I don’t know how it works, but given what I’ve seen of it, I would expect Arcan’s pipeworld shell ( https://arcan-fe.com/2021/04/12/introducing-pipeworld/ ) to work with out-of-band signaling too (AFAIU, if you want the regular in-band signaling that lets you communicate with existing software, you have to start your commands with the magic # character).

                      1. 4

                        Pipeworld’s (main) purpose is asynchronous interactive/declarative media pipeline process composition, I personally use it for bespoke surveillance systems.

                        The long list of steps (8+ years and counting) getting away from all the facets of terminal emulation would be (linkdump content warning):

                        https://arcan-fe.com/2016/12/29/chasing-the-dream-of-a-terminal-free-cli/

                        https://arcan-fe.com/2017/07/12/the-dawn-of-a-new-command-line-interface/

                        https://arcan-fe.com/2022/04/02/the-day-of-a-new-command-line-interface-shell/

                        https://arcan-fe.com/2023/11/18/a12-visions-of-the-fully-networked-desktop/ (because SSH is also infested)

                        (and the final missing post of ’yyyy/mm/dd/the-day-of-a-new-command-line-interface-curses/).

                        This also needs a tangent into interactive shells specifically, as having fancy asynchronous processing and out of band signalling is rather useless unless you do interesting things with it:

                        https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lashcat9/

                        https://arcan-fe.com/2024/05/17/cat9-microdosing-stash-and-list/

                        https://arcan-fe.com/2024/08/05/cat9-microdosing-each-and-contain/

                        https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger-walks-into-a-shell/

                    3. 5

                      Flagged as off-topic because the answer to all three topicality questions is a resounding ‘no’, I think.

                      1. 29

                        I’m less concerned about topicality in this case (we have fun little trivia posts sometimes, and as long as they don’t overwhelm the queue I’m fine with it; we all need a little whimsy in our lives), but I am concerned that the poster seems to almost entirely post their own content.

                        Looking at their posting history it’s pretty lopsided.

                        1. 16

                          By those metrics, half of the front page is off-topic. For some examples, all of these would be considered off-topic by your argument: “Why did Windows 95 setup use three operating systems?”, “Gleam v1.6.0 released”, “ChatGPT is Slipping”, “I was banned from the hCaptcha accessibility account for not being blind”. Not to mention that it says those are “rules of thumb for great stories to submit”, not the end all be all of what and what not to post.

                          1. 4

                            I think all of those examples can answer ‘yes’ to at least one of the questions for at least some Lobsters readers. (If you’re dealing with upgrade compatibility issues for a platform you develop; if you’re interested in programming languages or systems in the niche of Gleam; if you’re building LLM-based systems; if you maintain any kind of system that has a CAPTCHA-type access control).

                            1. 4

                              It’s funny, I wouldn’t have flagged this story as off-topic, but I did flag the Half Life documentary you posted as off topic. I feel like this story is on topic because it explains a weird error message, while the Half Life documentary did not give me technical insights about building or running software. What makes you feel the Half Life documentary is more on-topic?

                              (To be fair, while I think this story is on topic, I also feel it’s not very good.)

                              1. 3

                                (To be fair, while I think this story is on topic, I also feel it’s not very good.)

                                I felt the same way about it. I think some exposition about __builtin_unreachable and why we might want that in release builds but something different in debug builds could have made it a good article.

                                But my take was “on-topic, but not very good,” which led to neither a vote nor a flag from me. It’s certainly about computing, even if it doesn’t meet those rules of thumb for a “good” submission.

                                1. 2

                                  As a ‘making-of’ documentary I think it’s interesting and on-topic to anyone who does game development, even as a twenty-years-later retrospective – in particular because it’s about a game that has been so massively influential. Sure, there’s a lot in it that isn’t on-topic in that sense (I don’t think an entire documentary about the Valve/Vivendi lawsuit would be on-topic), but also a lot that is. It interested me even though I’ve never done any game development (but I’ve been curious about it for a long time).

                              2. 3

                                Good point, all other off-topic stories should be flagged as well.

                            2. 4

                              Antithesis is super cool, but it’s a bit annoying to read “debuggers are stagnant” when RR and Pernosco are literal revolutions in the space.

                              1. 5

                                Interesting, I didn’t know about mutation testing. To me, it looks like it might be better than simple line-based coverage, but not better than MC/DC coverage: while mutation testing can prove that the testsuite is missing a testcase, MC/DC can prove that the testsuite is not missing any testcases. I think this difference must have been the reason for the question about path coverage at the end of the talk (and perhaps also the one about the severity of the bugs found).

                                1. 8

                                  Say that one method of testing is strictly stronger than another if the first method always fails when the second method fails, but not vice-versa.

                                  Lemma: Mutation testing is strictly stronger than line-based coverage.

                                  Proof. We need to show that (i) if line-based coverage fails then mutation testing also fails, and (ii) if mutation testing fails then line-based coverage might nevertheless succeed.

                                  For (i), if line-based coverage fails it’s because some particular line (say line 287) is never executed in any test case. If so, mutation testing would also fail, because a modification of this line could not cause a test case to fail, since the line is never even run. (Unless you have a flaky test case! We’re assuming deterministic test results.)

                                  For (ii), consider the code if (x > 0) { f(x) } else { f(x) }, and suppose that there are test cases that test both the case “x > 0” and “x <= 0”. This code passes line coverage because every line is executed by at least one test case, but it fails mutation testing because it behaves identically if the check x > 0 is modified (say to x >= 0).

                                  Lemma: MC/DC testing is strictly stronger than line-based coverage.

                                  Proof. We need to show that (i) if line-based coverage fails then MC/DC testing also fails, and (ii) if MC/DC testing fails then line-based coverage might nevertheless succeed.

                                  For (i), show the contra-positive. If MC/DC testing succeeds, then every entry point is invoked and every decision takes every possible outcome, which in languages with normal control flow implies that every line of code is executed.

                                  For (ii), consider the code if (x > 0 || x <= 0) { ... }. This passes line coverage because every line is executed (both the if and the body {...}). But it fails MC/DC’s second rule “each decision takes every possible outcome” because the decision (x > 0 || x <= 0) never takes the outcome false.

                                  Lemma: Neither mutation testing nor MC/DC is strictly stronger than the other.

                                  Proof. We need to show a case where mutation testing passes but MC/DC fails, and one where MC/DC passes but mutation testing fails.

                                  For a case where mutation testing passes but MC/DC fails, consider the code let x = 0; return (x == 0) and a test case that checks that this returns true. Any mutation to this code will cause the test case to fail, so mutation testing passes. (Any mutation? The mutation return (0 == 0) doesn’t cause the test case to fail, but this mutated code is equivalent to the original in a way that’s easy to verify if you executed the original. But this does bring up some real tricky questions about how you perform mutation testing automatically if you’re not sure whether the mutated code is equivalent to the original. Questions which I’m pointedly ignoring.) So mutation testing passes. But MC/DC fails, because the decision x == 0 doesn’t take on all possible outcomes.

                                  For a case where MC/DC passes but mutation testing fails, consider the code let array = [1, 2, 3]; return array[1];, and a test case that checks that the return value is positive. This passes MC/DC because all entry and exit points are invoked and there are no conditions or decisions. It fails mutation testing, because changing array[1] to array[0] or array[2] does not cause the test case to fail. (This brings up the question of how many mutations we’re checking. We can’t actually try every value for every integer in the entire code base. But changing array[1] to array[0] (or to 1) seems like a reasonable ask.)
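
                                  To make the mutation half concrete, here’s a toy sketch (my own, not from the talk): a “mutant” flips one operator, and a surviving mutant means the suite can’t tell the two programs apart:

                                  def program(x, mutant=False):
                                      # the mutant flips > to >=
                                      if (x >= 0) if mutant else (x > 0):
                                          return "pos"
                                      return "neg"

                                  def suite(prog):
                                      return prog(1) == "pos" and prog(-1) == "neg"

                                  assert suite(program)  # the original passes
                                  if suite(lambda x: program(x, mutant=True)):
                                      # x = 0 is the only input distinguishing > from >=, and
                                      # no test exercises it: mutation testing flags the gap.
                                      print("surviving mutant: the suite is missing a test case")
                                  else:
                                      print("mutant killed")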

                                  1. 1

                                    For (ii), consider the code if (x > 0) { f(x) } else { f(x) }

                                    I assume you mean something like else { g(x) } (the code is supposed to have different behaviour in the two branches?).

                                    1. 3

                                      Nope, I meant what I said there. It’s meant to be a counter-example, not realistic code, though you could probably construct a more realistic example.

                                      The reason that mutation testing fails is because it tries mutating the line x > 0 (say to x <= 0), and finds that the behavior hasn’t changed. So it says “that condition x > 0 isn’t being tested properly!”.

                                      1. 3

                                        Ahh, yes, now I understand the argument. Thanks!

                                  2. 1

                                    MC/DC can prove that the testsuite is not missing any testcases.

                                    I don’t think that’s how it works? Proving you ran every branch of every condition and decision tells you absolutely nothing about the quality of your assertions.

                                    1. 2

                                    I guess I wasn’t clear: there’s indeed no way to formally verify whether a piece of code matches a specification, unless said specification is expressed in a computer language itself. So there’s no way to verify whether a testsuite tests for everything specified by a specification.

                                    What I meant was that MC/DC is enough to prove that every condition of your program is covered by a test, which, given the strategies for mutation testing (very focused on if/match constructs) mentioned in the talk, made me think that MC/DC was strictly better than mutation testing. But @justinpombrio’s example made me realize that I was wrong, because mutation testing can operate on more constructs than if/match.

                                  3. 19

                                    I am truly baffled why Valve uses Arch Linux, but any investment into open source I support!

                                    1. 51

                                      My theory is that Ubuntu/Debian move too slowly for Valve. Previous releases of SteamOS were based on Ubuntu, but since Valve is pushing a lot of changes to the Linux graphics stack and probably doesn’t want to diverge from upstream too much, Valve needs to have their changes trickle back to them quickly, which is something Arch is really good at.

                                        1. 2

                                          Debian Sid would’ve also fit those requirements. But cool nonetheless!

                                          1. 12

                                            Having used Sid for a few years before switching to Arch, I find Arch infinitely more stable. With Sid, there’s no expectation that after upgrading a random package dpkg won’t be irremediably broken, or even that your computer will still be able to boot (it happened to me more than a few times); it’s more akin to Arch’s testing repos.

                                            1. 11

                                              I would theorize that Sid is not as widely used as default Arch. They’re essentially getting more test users for free as a result.

                                              1. 9

                                                Not during the lead-up to a Debian release, and not during a binary transition.

                                            2. 2

                                              Don’t forget the package formats :)

                                            3. 11

                                              Arch is a distro that leaves a lot of work to the user. But this can make it a good choice for somebody building their own distro, like SteamOS. Same idea with Gentoo and Chrome OS.

                                              1. 11

                                                Having been an Arch Linux user for about 15 years, I have to say Arch Linux is the best base for a distro targeting the consumer market. Everything in Arch is simple, straightforward and consistent enough.

                                                1. 5

                                                  I switched over my PC from Ubuntu a few years ago when I realized I was out of touch with changes in core system tools and I was always headed to the Arch wiki to debug anything anyways. I’ve been really happy with it. If I have to rebuild the Lobsters VPSs it’s tempting to move to Arch. I’ve had hassles with backups because versions of Ruby and mariadb-dump in Ubuntu LTS are well behind what’s convenient. We’re simply not operating at a scale where that pace is valuable to us.

                                                  1. 11

                                                    I ran Arch in an Internet-facing production environment for a while years ago (circa 2013), and I strongly advise against it. I totally get it - in fact, I came to the idea the same way you did (I ran Arch on my laptop and thought the simplicity and the fact that I knew exactly how everything was set up would be valuable in production - and indeed they were).

                                                    The problem is security updates. Arch updates sometimes require manual intervention (whether that be because of Pacman shenanigans or because a package upgraded a major version), and there isn’t a good way to tell beforehand what’s a security update and what’s not. Because of that, if your Arch host is internet-facing, you’re signing up to SSH in every few days and upgrade packages and babysit the machine, in perpetuity. Even if you’re on vacation, or tired.

                                                    Major package upgrades are also an issue. You have to take them, because partial upgrades are unsupported, but they can be really disruptive. I got a hard lesson in this when I went to apply security updates and all of a sudden unexpectedly had to sit there for 2-3 hours learning/rewriting configs because Arch had upgraded from Apache httpd 2.2 to 2.4.

                                                    1. 2

                                                      Thanks for these experience reports (ping sibling replies @sunng and @Exagone313). This sounds like moving to Arch would be a significant maintenance burden in the form of surprise breakages. It’d probably be fine if we ran enough servers to green/blue or have a staging env, but not with our current setup. I guess a better plan would be to take a smaller step from Ubuntu LTS to Ubuntu Interim, which would mean less churn in ansible for mostly-current versions of packages.

                                                      1. 1

                                                        Yeah non-LTS Ubuntu was going to be my suggestion. Fedora Server is potentially another option if you want really new stuff, but I haven’t done it myself so I’m not sure what other issues that approach has. (I’m interested in it for reasons that Lobsters wouldn’t be - FreeIPA looks less annoying to set up on it, etc.)

                                                    2. 3

                                                      When I say Arch is good for the consumer market, I mean it’s not a good idea to run Arch on your server.

                                                      With Arch, you must upgrade frequently, but that’s not how servers are usually run. My VPS runs Arch and I only update it every few months, and it runs into a lot of issues because of that.

                                                      1. 3

                                                        In addition to what strugee has written, I had some issues with running Arch Linux servers (though I still do for some specific use cases). Note that I don’t do this in professional production, so I don’t run tests before running upgrades on those servers.

                                                        • A few years ago, a NodeJS upgrade broke all features relying on OpenSSL. I had to downgrade the package. Package upgrades are not always tested.
                                                        • When running software written by third parties, I often end up with incompatible (too recent) versions of software, such as for PostgreSQL.
                                                          • It’s possible to run postgresql-old-upgrade (after editing the systemd unit) but it can break, as it’s only packaged for running pg_upgrade.
                                                          • I used rbenv (for Ruby) and nvm (for NodeJS) when I needed specific versions of those tools, but that requires building them yourself and tracking upgrades yourself. (You have packages for NodeJS LTS versions now, but sometimes it’s not enough.)
                                                          • Running containers (with Docker or Podman) can fix the issue, but making sure that images are maintained properly (or making your own) and upgrading the containers can be complex.
                                                        • I often skip kernel upgrades (pacman -Syu --ignore linux-lts) because it would require a reboot about every week, even with linux-lts (upgrading removes the running kernel’s module files, which could still be needed).
                                                        • I don’t expose my Arch Linux servers directly on the internet anymore. My internet-facing servers are running Debian or Ubuntu with unattended-upgrades. I still have to reboot for kernel upgrades, but it happens way less often than for Arch Linux.
                                                        1. 5

                                                          I often skip kernel upgrades (pacman -Syu --ignore linux-lts) because it would require a reboot about every week, even with linux-lts (upgrading removes the running kernel’s module files, which could still be needed).

                                                          You can use the linux-keep-modules package to save the running kernel’s modules and remove them after you reboot.

                                                        2. 1

                                                          If I have to rebuild the Lobsters VPSs it’s tempting to move to Arch.

                                                          I’ll provide 24/7 support if you do :)

                                                          EDIT: And despite comments, I’d like to point out that all Arch infra runs on Arch. All transparently managed with ansible. https://gitlab.archlinux.org/archlinux/infrastructure

                                                        3. 3

                                                          I migrated from Arch to NixOS relatively recently.

                                                          Arch is still better for some use cases. For example, the Arch Build System is really simple compared to patching packages using Nix tooling, which is not so well documented and forces you to fight upstream as it deliberately diverges from FHS.

                                                          But NixOS is surprisingly easy to install and maintain. I think it’s the easiest and most robust distro ever in that regard. In particular, you can make dramatic changes with no fear. NixOS also shines if you want to keep lots of services running, as you have some centralized options, compared to the need to maintain conf files.

                                                        4. 6

                                                          The Steam Deck is Arch-based AFAIK. I guess the devs were familiar with it.

                                                          I was equally surprised when I learned that ChromeOS is based on Gentoo.

                                                          1. 8

                                                            ChromeOS isn’t based on Gentoo. That’s a common misconception

                                                            ChromeOS uses Gentoo’s package manager Portage as part of its (extremely convoluted) build system

                                                            1. 2

                                                              Would it be fair to say that Chrome OS started as a Gentoo distro? And then with time it became its own full-fledged distribution?

                                                              1. 2

                                                                I’d say it started as a Chromium build and got enough added on to be an OS

                                                                1. 1

                                                                  ChromeOS was never based on Gentoo. Early builds I believe were actually Ubuntu based

                                                                  1. 1

                                                                    We have many packages in the Chromium OS tree. While many come from upstream Gentoo

                                                                    https://www.chromium.org/chromium-os/packages/

                                                                    They are calling Gentoo the upstream. That tells me that it is the upstream

                                                            2. 1

                                                              Many batteries-included distros have two disadvantages that come to mind:

                                                              • They are heavy, coming with lots and lots of stuff pre-installed by default that isn’t really needed in specialized distros like SteamOS. I cannot imagine Steam requiring LibreOffice, for instance, so they could either go with a lightweight base like Arch or Alpine, or take Ubuntu and strip it down to the bones - building on Arch is likely much easier
                                                              • They often have a lengthy release cadence - maybe with the exception of Fedora. Arch Linux follows a “move fast and break things” philosophy. I left Ubuntu / Linux Mint exactly because I couldn’t stand having to work around Bluetooth driver problems for half a year before they would finally support kernel versions that got rid of those problems.
                                                            3. 3

                                                              I’m flagging this as off-topic because this isn’t about programming. I find this part of the guidelines particularly fitting:

                                                              Lobsters is focused pretty narrowly on computing; tags like art don’t imply every piece of art is on-topic.

                                                              Replace “art” with “games” and “design” and you’ve got an exact match for this article.

                                                              1. 11

                                                                I think this is a fair criticism. I saw it right as it was posted and I seriously considered removing it for this same reason.

                                                                To explain why I didn’t: I saw that the story explicitly made reasonably broad points about UI/UX design, that coming from video games it might offer a nicely novel perspective, and that it seemed unlikely to spawn a flamewar. So I left it up, and I think the resulting discussion is a worthwhile one with only minor off-topic comments (though I’m surprised nobody connected this to skeuomorphic design).

                                                                More broadly, I think it’s healthy to occasionally let a story push the boundaries of topicality to make sure they’re not overly restrictive. We have to occasionally see actual positives and negatives to make sure we’re able to discriminate both false positives (off-topic submissions that stay live) and false negatives (removed stories that would have had rewarding topical discussions). I’ll admit that I’m acting intuitively here - what I noted above covers some of the reasons, but I can’t articulate all of what went into this decision or universal criteria for when it’s worthwhile.

                                                                1. 8

                                                                  I agree, but also I feel like everyone is having fun in this thread. Might be one of those rare harmless exceptions. Plus, it’s technically about game design, which is infamously as much about writing and world building as it is about computer science. Creating a game is actually my go-to example of a good starter project for somebody new to programming, as a game can be as simple or complex as you want it to be. A game can span virtually every sector of computer science if you really want it to, or it could be as simple as a console application asking you to guess a number.

                                                                  1. 7

                                                                    I would counter with this line from the About section:

                                                                    Will this improve the reader’s next program? Will it deepen their understanding of their last program?

                                                                    Discussions about UX/UI are on topic. This is a discussion of that.

                                                                  2. 6

                                                                    When asking for a new tag it’s good practice to list stories that would have fallen under that tag and why they’re not really suitable for other tags.

                                                                    Here are some examples:

                                                                    1. 2

                                                                      oh, thank you - I’ll add some example stories into the post

                                                                    2. 4

                                                                      I am happy that thunderbird keeps developing and improves.

                                                                      I feel like email is woefully underrated in its potential. It is a platform that has integrations with Google, Jira, Miro, GitHub, Git and many more. A good client could easily take advantage of that and extend the interface.

                                                                      I could, for example, see Thunderbird with a VCS plugin that allows you to preview and comment on email patches, similar to GitHub etc.

                                                                      1. 3

                                                                        I would love it if Thunderbird had deep enough understanding of GitHub to be able to do things like archive a thread of PR messages after the PR has merged. Or have some sort of “stale” metric for automated emails that automatically removes them from the inbox if I haven’t read them for two weeks.

                                                                        1. 4

                                                                          Or have some sort of “stale” metric for automated emails that automatically removes them from the inbox if I haven’t read them for two weeks.

                                                                          I believe Thunderbird’s message filters (accessed through the “Tools” menu) are exactly what you want.

                                                                        2. 2

                                                                          A good client could easily take advantage of that and extend the interface.

                                                                          100% this.

                                                                          An underrated feature in T’bird $RECENT is the Matrix support. Matrix is a poor messaging system IMHO but it shows that part is getting some love. If T’bird could embrace libpurple and add a bunch of other modern messaging protocols, I would be delighted to dump a couple of messaging apps I have to keep around.

                                                                          I use Ferdi for Slack/Whatsapp/Telegram/SMS/Skype/Discord/FB. I need to keep Signal around for one awkward person and one old channel.

                                                                          In the past I had FB, Telegram, Skype, and Slack working well in Pidgin (alongside IRC and Rocket.chat, which I no longer really need).

                                                                          Those are doable, realistic targets: there are working connectors for Pidgin. But I use macOS on the desktop and there’s no Pidgin for macOS. Adium hasn’t been updated in a decade and no longer works. Trillian, remarkably, does work but has almost no connectors in its Mac version.

                                                                          1. 1

                                                                            For what it’s worth… Pidgin is in Homebrew and works without an X server

                                                                            1. 1

                                                                              I tried brew install pidgin a couple of months ago, the last time I was told about this. It did launch, but it needed WINE. It’s a sort of port, but it’s basically the Windows binary under translation, and I couldn’t work out where in the macOS filesystem I needed to place DLLs to get most protocols working.

                                                                              1. 2

                                                                                I decided to give it a try right now, as something about this didn’t sound right after looking at the formula in Homebrew and not seeing anything WINE-ish. I’m on an M-series so it’s all Arm.

                                                                                After installing many dependencies, it all seemed to install OK. The default support included Gadu-Gadu, GTalk, GroupWise, IRC, SIMPLE, XMPP, Zephyr, so probably missing some important ones. I assume they go in /opt/homebrew/Cellar/pidgin/2.14.13/lib/purple-2/ since there’s a lot of things like libxmpp.so and related in there. Maybe it’d be OK now?

                                                                                1. 1

                                                                                  Really? How very odd. I will try again. After the unsuccessful attempt, I removed it.

                                                                        3. 2

                                                                          Extremely cool research! I feel like a lot of these issues are due to bad API design. In particular, if there were a better way for extensions to reach into the page’s JavaScript context (the way Firefox has with XPCNativeWrapper/wrappedJSObject) or to append elements invisible to the page (like Shadow DOM, but one that would actually work), developers might not need to rely on messaging APIs that inevitably will end up being used without any authentication.

                                                                            1. 2

                                                                              FWIW the post on hacks.m.o is the abridged version of what we are posting in full technical details at https://blog.mozilla.org/attack-and-defense/2024/06/24/ipc-fuzzing-with-snapshots/, which was submitted as https://lobste.rs/s/ishj9g/ipc_fuzzing_firefox_with_vm_snapshots

                                                                              1. 2

                                                                                Ah, right, my link should probably be deleted or merged under yours. Hopefully @pushx can take care of that.

                                                                              2. 1

                                                                                Personally, I’m extremely excited about VR headsets finally getting usable passthrough, 4K per eye, and lighter weight. After all, why buy another monitor when you could have as many screens as you want in 360 degrees around you, without any cabling and usable anywhere you go, even in places where there’s no room for a physical display?

                                                                                I’m just hoping we’ll eventually have a headset with an open platform, I don’t want Apple or Facebook as gatekeepers of my operating system…

                                                                                1. 19

                                                                                    This post is really interesting for non-artist users, as it explains the biggest flaws of Wayland for professional use.

                                                                                  1. 16

                                                                                      It’s very easy to explain the problem with Wayland in non-technical terms. [Thing] worked before, and now it doesn’t anymore. Times a thousand. As long as technical purity is prioritized higher than the user’s needs, this won’t change. You can’t just rewrite everything, drop and break half the functionality, and pretend that it’s a drop-in replacement. This will be more painful than the Python 2 to Python 3 version jump. But I guess things need to get worse before they can get better if you are in a local optimum.

                                                                                    1. 10

                                                                                      [Thing] worked before, and now it doesn’t anymore.

                                                                                        except when it didn’t, like keyboard shortcuts in non-Latin layouts, and now it does

                                                                                      1. 11

                                                                                          The presence of actual improvements does not preclude the presence of a large number of regressions.

                                                                                      2. 6

                                                                                        But also, on the “technical purity” side: how does it help to make input handling for every exotic device the responsibility of the desktop environment? To my uneducated eyes, it just seems to create duplicate work.

                                                                                        1. 4

                                                                                          how does it help to make input handling for every exotic device the responsibility of the desktop environment?

                                                                                          Genuine question: what does “handling” mean here? AFAIK there’s a libwacom the same way there’s a libxkbcommon that all compositors can leverage to know what the hardware does.

                                                                                          1. 5

                                                                                            On X, adding client side device support requires one to program, at minimum:

                                                                                            • Hardware support libraries
                                                                                            • An X server extension (if necessary)
                                                                                            • A userspace configuration mechanism

                                                                                            Under Wayland, adding support requires one to program, at minimum:

                                                                                            • Hardware support libraries
                                                                                            • A library to integrate with Wayland compositors
                                                                                            • Compositor-specific (!) integrations of that library in every compositor you wish to support
                                                                                            • A userspace configuration mechanism

                                                                                            For instance, in order for all compositors to support libwacom, they all have to write compositor-specific integration code that calls libwacom; whereas xsetwacom works with any X desktop.

                                                                                            The X server is a narrow waist that Wayland doesn’t have.

                                                                                            1. 3

                                                                                              Is there anything preventing libwayland/wlroots (the narrow waists of the wayland world) from using libwacom to provide wacom hardware integration?

                                                                                              1. 2

                                                                                                I’m not an expert here, which is why I’m asking possibly-dumb questions. But I’m going off this quoted part from the article:

                                                                                                The universal command line interface xsetwacom, which is desktop environment agnostic, has been deprecated. Now the only way to setup many professional aspects of Graphics Tablets (and setup many non-Wacom tablets) is only through the GUI of the Desktop Environment. It’s now up to each desktop environment project (e.g. GNOME or KDE Plasma) to develop their own full featured GUI for tablet configuration.

                                                                                            2. 2

                                                                                              worked before, and now it doesn’t anymore

                                                                                              That’s a bit harsh, X11 and Wayland each have advantages for different use cases.

                                                                                              I use Wayland because I notice it has less latency, no tearing and the window managers I like support it. It doesn’t have any downsides that I notice but I’m sure there are many. I notice X11’s downsides more but it can also do things that Wayland can’t.

                                                                                               The author of the post has very valid reasons to prefer X11 and it seems like you do too. I have a colleague whose WM only works with X11, so he also prefers it.

                                                                                              Most of the development resources are going into Wayland and X11 isn’t worked on much anymore so I get that Wayland can be threatening but it’s not all bad.

                                                                                              1. 5

                                                                                                 The problem is people pushing “you should use Wayland” rather than “you should try Wayland” - they want X users to switch permanently. Anyone whose use cases are better covered by Wayland switches on their own and thus leaves the X camp. X defenders aren’t complaining about happy Wayland users, so pointing out that Wayland is better in some cases misses the point.

                                                                                            1. 12

                                                                                              It’s sad to watch greed destroy the internet.

                                                                                              1. 19

                                                                                                Just the internet?

                                                                                                It’s sad to see greed becoming normalized, seeping into everything, destroying all that used to make us human.

                                                                                                1. 3

                                                                                                  People don’t want to pay for a browser, especially one that is Open-Source, because people don’t want to pay unless there’s scarcity.

                                                                                                  Mozilla is built on Google’s Ads, and Google can currently kill them at any time by just dropping their deal. Which means Firefox can’t actually compete with Google’s Chrome, unless they diversify. When Mozilla tries stuff out, like integrating paid services (e.g., Pocket or VPN) in the browser, people get mad. Also, Ads, for better or worse, have been fuelling OSS work and the open Internet.

                                                                                                   So, I’m curious, how exactly do you think Mozilla should keep the lights on? And what’s the thought process behind establishing that “greed” is the cause, or why that’s a problem?

                                                                                                  1. 5

                                                                                                     There’s no option to pay for Firefox. None. You can donate to Mozilla, but they will use the money for any and all projects, not just Firefox.

                                                                                                    1. 5

                                                                                                      They won’t use that money for Firefox at all, since you’re donating to the foundation, while FF is developed by the corporation.

                                                                                                      1. 1

                                                                                                        I understand this frustration, but it’s irrelevant.

                                                                                                        Enumerate all consumer projects that are as complex as a browser, that are developed out of donations and that have to compete with FOSS products from Big Tech. Donations rarely work. Paying for FOSS doesn’t work, unless you’re paying for complements, like support, certification, or extra proprietary features.

                                                                                                        It’s a fantasy to think that if only we had a way to pay, Firefox would get the needed funding.

                                                                                                        1. 1

                                                                                                           Indeed, like every other company and organization under the sun that doesn’t want to depend on a single successful product. Where else would they get the resources for developing new ones?

                                                                                                    2. 10

                                                                                                      And don’t forget: Collecting User Data! I’m getting a nervous twitch every time I read “Firefox” and “privacy” in the same sentence. Being ever so slightly less bad than the competition doesn’t make you “privacy first”.

                                                                                                      1. 8

                                                                                                         tbh that does seem like one of the better attempts at squaring the circle of “telemetry is genuinely useful” and “we really don’t want to know details about individuals”?

                                                                                                        1. 2

                                                                                                           I’m not so convinced. You basically have to trust that their anonymization technique is doing the right thing, since you can’t really verify what’s happening on the server side. And if it actually does the right thing, then it should be easy to poison the results, which, given the subject matter at hand, certain players would have a massive incentive to do.

                                                                                                          1. 3

                                                                                                            You basically have to trust that their anonymization technique is doing the right thing, since you can’t really verify what’s happening on the server side.

                                                                                                            This is an oversimplification of the situation. Yes, you need to trust that the right thing is happening server-side, but you don’t need to trust Mozilla. You need to trust ISRG or Fastly, which are independent actors with their own reputation to uphold. Absolutely not perfect, but significantly better than the picture you’re painting here IMO.
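
                                                                                                             For what it’s worth, the trust split being described here can be illustrated with a toy model (mine, not Mozilla’s actual protocol): an independent relay learns who sent a report but not what it says, while the aggregator learns the contents but never the sender. Real deployments use proper public-key encryption; the XOR “cipher” below is only a stand-in to keep the sketch self-contained:

                                                                                                               #include <stdio.h>
                                                                                                               #include <string.h>

                                                                                                               /* Placeholder for real encryption: only the aggregator knows the key. */
                                                                                                               static void xor_cipher(char *buf, size_t n, char key) {
                                                                                                                   for (size_t i = 0; i < n; i++)
                                                                                                                       buf[i] ^= key;
                                                                                                               }

                                                                                                               /* Aggregator: can decrypt, but is never told who sent the report. */
                                                                                                               static void aggregator(const char *blob, size_t n) {
                                                                                                                   char buf[64];
                                                                                                                   memcpy(buf, blob, n);
                                                                                                                   buf[n] = '\0';
                                                                                                                   xor_cipher(buf, n, 42);
                                                                                                                   printf("aggregator decrypted \"%s\" from an unknown sender\n", buf);
                                                                                                               }

                                                                                                               /* Relay: knows the client address, forwards a blob it cannot read. */
                                                                                                               static void relay(const char *client_addr, const char *blob, size_t n) {
                                                                                                                   printf("relay saw sender %s; the payload is opaque to it\n", client_addr);
                                                                                                                   aggregator(blob, n); /* the client address is deliberately dropped */
                                                                                                               }

                                                                                                               int main(void) {
                                                                                                                   char report[] = "page_load_ms=137";
                                                                                                                   size_t n = strlen(report);
                                                                                                                   xor_cipher(report, n, 42); /* the client encrypts to the aggregator */
                                                                                                                   relay("203.0.113.7", report, n);
                                                                                                                   return 0;
                                                                                                               }

                                                                                                             The caveat from the parent comment still applies: the scheme only means something if the relay and the aggregator do not collude, which is why they are run by independent parties.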

                                                                                                            1. 5

                                                                                                              Given that the telemetry is opt-out instead of opt-in, there can be no trust. Trust, for me, involves at a bare minimum consent.

                                                                                                               I don’t mind them collecting data, but I don’t want my browser to be adversarial — the reason I stay off Chrome is that I have to check its settings on every release, and think hard about “how is this new setting going to screw me in the future?”

                                                                                                              Of all organisations, I hoped Mozilla would have understood this, especially as it caters to the privacy crowd.

                                                                                                              1. 1

                                                                                                                I don’t think it’s a problem with organizational understanding. I think Mozilla understands this perfectly well, but they also understand that you’ll suck it up and keep using Firefox because there’s no better option.

                                                                                                          2. 1

                                                                                                             Yeah, agreed, this really doesn’t seem so bad to me. Given the level of trust a browser requires, it still makes me nervous though.

                                                                                                          3. 6

                                                                                                             I wrote the initial version of Firefox telemetry. My goal was to be different from projects like Chrome in that we could make the telemetry available on the public web. E.g., I could not make further progress on Firefox perf without data like https://docs.telemetry.mozilla.org/cookbooks/main_ping_exponential_histograms . The hardest part of this was convincing the privacy team that Firefox was going to die without perf data collection. As soon as we shipped that feature we found a few dozen critical performance problems that we had not been able to see in-house.

                                                                                                             The other hard part was figuring out the balance between genuinely useful data and not collecting anything too personal. In practice it turns out that, for perf work, it’s not useful to collect anything that resembles tracking. However, it’s a slippery slope; since my days there, they have gotten greedy with data.
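
                                                                                                             For readers unfamiliar with the exponential histograms linked above: bucket boundaries grow geometrically, so a wide range of values (say, 1 ms to 10 s of latency) is covered by a handful of buckets while small values keep fine resolution, and a reported sample only bumps a bucket counter rather than recording anything per-user. A rough sketch of how such boundaries can be computed (my illustration; Firefox’s actual bucketing code may differ in its details):

                                                                                                               #include <math.h>
                                                                                                               #include <stdio.h>

                                                                                                               int main(void) {
                                                                                                                   const double lo = 1.0, hi = 10000.0; /* e.g. a 1 ms to 10 s range */
                                                                                                                   const int nbuckets = 10;

                                                                                                                   /* Boundary i is lo * (hi/lo)^(i/n): equal steps in log space, so
                                                                                                                      each bucket is a fixed multiple wider than the previous one. */
                                                                                                                   for (int i = 0; i <= nbuckets; i++) {
                                                                                                                       double edge = lo * pow(hi / lo, (double)i / nbuckets);
                                                                                                                       printf("bucket edge %2d: %8.1f\n", i, edge);
                                                                                                                   }
                                                                                                                   return 0;
                                                                                                               }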

                                                                                                          4. 2

                                                                                                            they are an appendage of an advertising company after all

                                                                                                          5. 3

                                                                                                            This week I learned that WinDbg has a Time Travelling Debugger. I had no idea this existed, and I suspect I’m not alone!

                                                                                                             Indeed, you’re not alone! I haven’t done a lot of Windows debugging, but the few times I tried, WinDbg’s UI was so awful (worse than GDB, for me) that I wasn’t able to do anything at all, not even get it to load my sources. So every time I ended up using x64dbg instead, which I found much easier to use. But time travel is such an amazing debugging tool that I’ll go back to WinDbg and bang my head against it until I’m finally able to use it (or until a Pernosco-like tool exists for Windows).

                                                                                                            1. 3

                                                                                                               It’s not the symbol Bazin chose, but a point d’ironie already exists in Unicode: « ⸮ ». https://en.wikipedia.org/wiki/Irony_punctuation
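
                                                                                                               For reference, the codepoint is U+2E2E REVERSED QUESTION MARK; a quick way to print it (assuming a UTF-8 terminal) is to emit its UTF-8 byte sequence directly:

                                                                                                                 #include <stdio.h>

                                                                                                                 int main(void) {
                                                                                                                     /* U+2E2E REVERSED QUESTION MARK, written out as its UTF-8 bytes. */
                                                                                                                     fputs("\xE2\xB8\xAE\n", stdout);
                                                                                                                     return 0;
                                                                                                                 }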

                                                                                                              1. 6

                                                                                                                I find this piece of news quite dismaying. Wolvic was started by Igalia, and here’s what we can find in their announcement:

                                                                                                                Interesting name. Why did you choose it? […] It’s also well known that wolves are very important to maintaining the health of ecosystems in which they exist. The browser ecosystem is very important to us at Igalia, whether in the traditionally 2D space of the Web or the 3D space of WebXR. We believe we can play an important role in helping keep the web ecosystem healthy and balanced.

                                                                                                                So Gecko was chosen with the explicit goal of maintaining browser diversity, and now the entity that probably has the most expertise in Gecko after Mozilla chose to move away from it because Gecko just doesn’t perform well enough.

                                                                                                                 Since Firefox is the browser I dislike the least, Wolvic’s announcement was a ray of hope for me. Perhaps Firefox wouldn’t eventually disappear. Perhaps I was wrong to worry about Mozilla replacing Firefox’s usage of Gecko with Chromium in a couple of decades. With Igalia moving away from Gecko and Mozilla getting worse year after year, I don’t see a reason to keep hoping.

                                                                                                                I don’t blame Igalia for their choice though, they tried to do the right thing and it just didn’t work out…