1. 1

    I made a similar switch a while ago, from awesome to sway, because one game (Cyberpunk 2077 with Wine) was randomly crashing my whole computer under awesome.

    I managed to make a usable configuration but I still miss awesome a lot because of:

    • the lack of workspace-aware alt-tab: this is the most frustrating thing; I just can’t understand how it is considered acceptable to rotate through all windows regardless of workspace. I never want that;
    • my feeling that my configuration is made of a dozen loosely coupled pieces.

    I really just want awesome-on-wayland.

    1. 3

      An alternative could be Qtile. I have never tried it myself, but it looks like Awesome with Python and it works on both X11 and Wayland. I didn’t know about it before starting on i3, but I think I would have given it a try.

      1. 1

        Interesting, I’ll take a look. That’d be the occasion to learn Python.

        1. 1

          Very interesting indeed, Qtile has a strong feeling of awesome-but-with-python. Sadly the lack of systray when in wayland and more importantly of XWayland support make it a non-starter for me. I’ll keep it on my radar nonetheless.

      2. 1

        Sometimes it’s worthwhile to run a dedicated xserver for a game only.

      1. 18

        I actually wound up switching off i3 (well, sway, but they’re basically the same) because I kept getting things into weird situations where I didn’t understand how the tiling works. Containers with only one child, that sort of thing.

        river, my current wm, has an interesting model: the layout management is done in an entirely separate process that communicates over an IPC mechanism. river sends it a list of windows, and the layout daemon responds with where to put them.
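
        As a toy illustration of that split – this is just a sketch of the concept, not river’s actual interface (current river uses a dedicated Wayland protocol, river-layout-v3, for the exchange) – a layout generator can be as small as: read a view count and output size, answer with one rectangle per view:

        #!/bin/sh
        # Toy layout generator: stack $1 views vertically on a $2x$3 output,
        # answering with one "x y width height" line per view.
        count=$1 width=$2 height=$3
        i=0
        while [ "$i" -lt "$count" ]; do
            printf '%s %s %s %s\n' 0 $((i * height / count)) "$width" $((height / count))
            i=$((i + 1))
        done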

        Also, since you brought it up: sway is almost entirely compatible with i3. The biggest missing feature is layout save/restore. But it can do one thing i3 can’t do, and that’s rearranging windows by dragging them.

        1. 26

          That’s pretty much why I wrote river. I was using sway beforehand as well but grew increasingly frustrated with how much implicit state i3-style window management required me to keep in my head and how unpredictable that state makes managing windows if your mental model/memory of the tree isn’t accurate.

          1. 19

            link to the project: https://github.com/ifreund/river

            Looks interesting!

          2. 6

            I’m in the same boat (pre-switch). I use sway but, after many years, still don’t really understand how I sometimes end up with single-child (sometimes multi-generational) containers.

            My personal ideal was spectrwm, which simply had a single primary window and then, to the right, an infinitely subdividing tower of smaller windows which could be swapped in. I briefly toyed with the idea of writing a wayland spectrwm clone.

            1. 7

              That sounds exactly like the default layout of dwm, awesomewm, xmonad, and river. If you’re looking for that kind of dynamic tiling on wayland feel free to give river a try!

              1. 4

                I will! I had some trouble compiling it last time I tried. But I will return to it.

                1. 4

                  Feel free to stop by #river on irc.libera.chat if you run into issues compiling again!

              2. 1

                Your reasons for spectrwm’s (and xmonad’s, etc.) model are exactly the reason I use tiling window managers like i3, exwm, and StumpWM: I don’t like that dynamic at all ;-)

                No accounting for different tastes.

                Is there a name for those two different tiling models?

                1. 1

                  automatic vs manual?

                  1. 1

                    I’ve seen the terms static (for when the containers have to be created by the user) vs dynamic used.

                    ArchLinux seems to call them dynamic vs manual. See the management style column https://wiki.archlinux.org/title/Comparison_of_tiling_window_managers

                2. 1

                  I was also quite lost with the way tiling works at the beginning. There are not many resources on this subject. It seems people just get used to it and avoid creating these useless containers. I am lucky; that was my case.

                1. 8

                  Alternatively, one can use static named directories. They have the advantage of being expanded anywhere, so they work with any command, not just with cd.
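
                  A minimal sketch (the name and path are made up):

                  # In ~/.zshrc: declare a static named directory
                  hash -d blog="$HOME/src/myblog"
                  # ~blog now expands anywhere an argument is expanded, not just for cd:
                  cd ~blog
                  ls ~blog/drafts
                  grep -r TODO ~blog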

                  1. 1

                    Is this approach zsh only?

                    1. 2

                      Yes. AFAIK, bash does not have this.

                    2. 1

                      Wow! I had no idea such a thing existed. It definitely looks like a more complete solution than mine, although also more complex. Still, it’s good to learn something new :)

                      1. 1

                        I also like the words of wisdom that you can just use shell variables.

                        https://vincent.bernat.ch/en/blog/2015-zsh-directory-bookmarks#fn-variables

                        You only lose the prompt expansion.
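
                        i.e. something like this (the variable name is made up), trading the ~name form for $ and quoting:

                        blog=~/src/myblog
                        cd "$blog"
                        ls "$blog/drafts"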

                      1. 3

                        Is it possible to buy it without paying the Microsoft tax?

                        1. 5

                          Yes. All Lenovo laptops can be bought without Windows now.

                          1. 1

                            You might end up paying more without it though, since the bundled crapware has been known to subsidize the cost.

                          1. -1

                            3:2 aspect ratio display? Great, I look forward to every game I own being distorted or letterboxed on it. I didn’t know they even made displays with that aspect ratio.

                            1. 19

                              People often complain about the lack of vertical space. Why would games be distorted? Only video will be problematic at this ratio.

                              1. 16

                                The screen was specifically what put me over the top to buy one. I’ve been dreaming about a taller aspect screen since they made everything “wide” ten years ago.

                                1. 2

                                  I would prefer a “normal” 4:3. The pixel density of the 13.5” 2256x1504 display is too low for 2x scaling. Something like 2400x1600 should be the lowest option.

                                  1. 2

                                    Obviously it’s all subjective, but I’d say 4:3 feels “dated” while 3:2 feels “super cool”.

                                2. 8

                                  It is a lot more pleasant for coding, reading, and writing in my experience. 3:2 is great for that and I prefer it, but I don’t play many videogames anymore.

                                  Work forced me to use a 16:9 display for a while. I use an ultra-widescreen now because it breaks into 3 reasonable panes, but 2 vertical 16:9s were OK. For me, xrandr is Linux’s killer app.
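
                                  Something like this, for instance, stands two panels up in portrait next to each other (output names are assumptions; check xrandr’s output for yours):

                                  # two 16:9 panels rotated to portrait, side by side
                                  xrandr --output DP-1 --rotate left --auto \
                                         --output DP-2 --rotate left --auto --right-of DP-1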

                                  1. 5

                                    3:2 is great. My old Surface Book has 3:2; I’d much rather get a few inches of vertical space for coding and reading than avoid letterboxing for movies.

                                    In my experience PC games work just fine with 3:2 as well typically.

                                    1. 4

                                      I used one of the original Chromebooks, the Chromebook Pixel, for several months in 2015 and adored the 3:2 aspect ratio for everything except media consumption. It was a little awkward for fullscreen 16:9 videos but fine for 4:3. I recognize that not much content is 4:3 anymore, though.

                                      I’m a little concerned about the pixel density of the Framework screen being too low for HiDPI but I’m unlikely to buy one anytime soon having just bought a Lenovo Flex 5 CB earlier this year for my main mobile computing device.

                                      1. 2

                                        Yeah, I have one of those Chromebooks too. The aspect ratio is definitely the best thing about the whole machine by a long shot. If it weren’t for the glossy display I would have been tempted to use it as my daily driver (after wiping the OS of course).

                                    1. 4

                                      the LG UltraFine exposes a USB HID device to control its brightness

                                      huh, that’s interesting. Usually brightness is controlled over DDC/CI on DisplayPort monitors.

                                      As for my mouse, I’m not a gamer so I don’t need something with 35 buttons on it

                                      35-button mice are not the most common gaming mice, btw. They are specifically made for MMO players (though I’ve heard of a GTA speedrunner leveraging those too, heh). For FPS players, the mouse looks pretty normal – the important things are having a great sensor, minimum latency, and good ergonomics. Non-gaming mice are often just bad in terms of all that. Speaking of:

                                      I recently switched to a Logitech M355

                                      Ouch, this shape looks painfully unergonomic. About as bad as the Apple Magic Mouse. Maybe somewhat better because the front is slightly raised up.

                                      1. 1

                                        Yes. For example, the Logitech G305 is quite nice. Good battery life. There is a corded version if you don’t mind having a bit of RGB.

                                        1. 1

                                          G603/G703 is my favorite shape, but wow, that G305 has awesome color options!

                                      1. 1

                                        This particular version of the LG UltraFine is no longer being produced

                                        Which is a shame, since I’m unaware of any other monitor that has such a high DPI and works on non-Apple products. I’ve basically given up on one ever existing, and am now hoping to pick up a 27in 4K 120Hz monitor to run at 1.5x scaling since it’s probably the best I’ll be able to manage for the foreseeable future. Here’s hoping Microsoft finally introduces a Surface monitor!

                                        1. 1

                                          I was also looking recently and it is terribly difficult to find anything. 16:9, 16:10, or ultrawide, the best you can hope for seems to be 1.5x. The LG UltraFine 24MD4KL that is quoted in the article is quite expensive. It does not come with a thin bezel and only provides USB-C. AFAIK, this is the only current option to get 200 ppi.

                                          1. 1

                                            I have a 28” 16:9 4K, started out at 1.5x but switched to 2x. Honestly it’s fine, it’s not as comically big as you’d think at first.

                                        1. 3

                                          Does anyone still parse access logs? Seems like a good option for a small site with limited aims. It adds no page weight, can’t be blocked, and doesn’t even require much back-end infrastructure.

                                          1. 2

                                            Yes, but there are a couple of niggles that I may or may not work on trying to ‘solve’:

                                            (a) all the log analysers pretty much assume you have just one web server.

                                            (b) they’re either quite featured but look like they were designed in the 90s, or they look nice but have some glaring gaps in functionality.

                                            I’ve toyed with the idea (only to the point of some PoC stuff to test it out so far) of a “simpler” analyser that would work for the use-cases I’ve seen: a really-simplistic (i.e. probably just shell, for the initial version) ‘parsing’ of the log entries, and then relying on Redis’ increment functionality to bump up counters for the various metrics, using a few date-related keys.
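
                                            Roughly this level of simplistic, to give an idea (the key names and field positions are my assumptions; purely a sketch):

                                            # follow the local access log, bump per-day counters in a shared Redis
                                            tail -F /var/log/nginx/access.log | while read -r line; do
                                              set -- $line              # crude whitespace split of the combined log format
                                              path=$7 status=$9
                                              day=$(date +%Y-%m-%d)
                                              redis-cli INCR "hits:$day:status:$status" >/dev/null
                                              redis-cli INCR "hits:$day:path:$path" >/dev/null
                                            done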

                                            1. 1

                                              Let me know if you ever get around to building such a thing. I would be happy to test it. All I really want is a graph of visitors over time broken down by page. I had been using Google Analytics, which was overkill and I was feeling guilty about supplying traffic data to Google. Now I just run less on the access file occasionally, which is nearly enough for the traffic volume (can I call less an MVP for web traffic analysis?)

                                              1. 3

                                                Thanks for the offer. I’ll be sure to post something here if I get something working.

                                                You may want to also look at Goaccess (https://goaccess.io) - it does static log analysis and might well be enough for what you need.

                                                The issue for us has been (a) it’s a PITA to make it work across multiple servers and (b) it has no built-in ability to filter to a given date range. On the CLI it’s possible (although not necessarily simple) to just filter the input for an ad-hoc run, but from the ‘HTML’ frontend (i.e. what business people want to use) it’s just not possible.

                                                1. 2

                                                  To gather logs from multiple web servers, I am using:

                                                  for h in web01 web02 web03 web04; do
                                                    # fetch each server's logs over SSH, dropping feed requests
                                                    ssh "$h" zcat -f /var/log/nginx/vincent.bernat.ch.log\* | grep -Fv atom.xml
                                                  done | goaccess --output=goaccess.html ...
                                                  
                                                  1. 1

                                                    Thanks for the suggestion. I’ll be sure to check it out.

                                                    1. 1

                                                      I love goaccess and use it all the time. I try to keep things on one server, but I have used it with multi-server setups.

                                                      Could you be specific about what is a PITA when handling multi-server setups? How is it any more complicated (or simpler) than with any other tool? You always need to aggregate the data, whatever solution you use. What’s specific about goaccess?

                                                      1. 1

                                                        So the problem is that we want the analytics frontend to be served from multiple servers too, and want it to work in real-time HTML mode.

                                                        As much as analytics isn’t really business critical, the goal here is that nothing we control in prod is a SPOF.

                                                        So the kind-of-working setup now relies on rsyslog taking varnishncsa access logs, sending/receiving them to/from peer syslog servers, and also writing them to disk locally, where goaccess consumes them. This isn’t what I’d call a robust setup.

                                                        The plan in my head/on some notes/kind of in a PoC is to have the storage layer (Redis is my idea for now; it might end up being something else, or adaptable to a couple of options) be the point of aggregation. Each log-producing service (in our case Varnish, but in another case it might be Nginx or Apache or whatever can produce access logs) has something running locally which does really basic handling of each log entry and then just increments a series of counters in the shared storage layer, based on the metrics from the entry.

                                                        1. 1

                                                          Rsyslog, fluentd, or just watch the logs with tail or what have you and append to a remote server via socket.

                                                          I don’t really see the use case for serving the UI from several servers. They are behind a proxy anyway.

                                                          Personally I would just reach the files via SSH like @vbernat suggests.

                                                          1. 1

                                                            They are not behind a single proxy, that’s the point.

                                                            Copying files via ssh means you lose any capability for real-time logs too.

                                                          2. 1

                                                            GoatCounter supports log parsing as well; the way it works is a bit different from e.g. goaccess: you still have your main goatcounter instance running as usual, and you run goatcounter import [..], which parses the logfiles and uses the API to send them to goatcounter. The upshot of this is that it should solve problems like this, and generally be a bit more flexible.

                                                            (disclaimer: I am the GoatCounter author, not trying to advertise it or anything, just seems like a useful thing to mention here)

                                                            1. 1

                                                              That’s interesting, thanks.

                                                              1. 1

                                                                Hey, I don’t want to turn this into a GoatCounter FAQ, but there’s no way to have the computed metrics be shared somehow, is there (i.e. so the analytics are not reliant on a single machine being up to record/view)?

                                                                1. 1

                                                                  I would solve that by running two instances with the same PostgreSQL database.

                                                                  Other than that, you can send the data to two instances, and you can export/import as CSV. But in general, there isn’t really a failover solution built in. I think using the same (possibly redundant) PostgreSQL database should work fairly well, but it’s not a setup I’ve tried, so there may be some issues I’m not thinking of at the moment (if there are, I expect them to be solvable without too many problems).

                                                                  1. 1

                                                                    The shared DB solution sounds most like what I had in mind, thanks - I wasn’t even aware it supports Postgres. I guess it’s a deliberate decision to leave the self-hosting info on GH and have the main site be more about the hosted version?

                                                                    1. 1

                                                                      I guess it’s a deliberate decision to leave the self-hosting info on GH and have the main site be more about the hosted version?

                                                                      Yeah, that’s pretty much the general idea; a lot of people looking for the self-hosted option aren’t really interested in details of the SaaS stuff, and vice versa. Maybe I should make that a bit clearer actually 🤔

                                                  1. 2

                                                    When will all their mirrors support HTTPS? Downloading something over HTTP or even FTP does not feel like 2021.

                                                    1. 12

                                                      If they do this right (signed packages and so on), then https will only help with privacy. Which is important for sure, but leaking which packages you download is less horrible than leaking the contents of your conversations, or even just who you’ve been in contact with lately.

                                                      1. -1

                                                        HTTPS is more than just privacy. It also prevents JavaScript injection via ISPs, or any other MITM.

                                                        1. 21

                                                          It does that for web pages, not for packages. Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package. The only real advantage to HTTPS for package distribution is that it helps conceal which packages you download (though even then, I’d guess an attacker could get a pretty good idea just by seeing which server you’re downloading from and how many bytes you’re downloading).
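
                                                          You can check that chain by hand if you’re curious: apt verifies the distro signature on the InRelease file, which pins the hashes of everything else (the exact list filename below depends on your mirror and release, so treat it as an example):

                                                          # re-run the signature check apt already did:
                                                          gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
                                                               /var/lib/apt/lists/deb.debian.org_debian_dists_bullseye_InRelease
                                                          # InRelease lists the SHA256 of the Packages indexes, which list the
                                                          # SHA256 of every .deb, so a tampered download fails the chain.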

                                                          1. 1

                                                            It does that for web pages, not for packages

                                                            Indeed, however ISOs, USB installers, etc. can still be downloaded from the web site.

                                                            Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package.

                                                            Yes, I’m familiar with cryptographic signatures.

                                                            1. 9

                                                              Indeed, however ISOs, USB installers, etc. can still be downloaded from the web site.

                                                              Yes. The Debian website uses HTTPS, and it looks like the images are distributed using HTTPS too. I thought we were talking about distributing packages using HTTP vs HTTPS. If your only point is that the ISOs should be distributed over HTTPS then of course I agree, and the Debian project seems to as well.

                                                              1. 0

                                                                No, the point is that there is no need for HTTP when HTTPS is available. Regardless of traffic, all HTTP should redirect to HTTPS IMNSHO.

                                                                1. 16

                                                                  But… why? Your argument for why HTTPS is better is that it prevents JavaScript injection and other forms of MITM. But MITM clearly isn’t a problem for package distribution. Is your argument that “HTTPS protects websites against MITM so packages should use HTTPS (even though HTTPS doesn’t do anything to protect packages from MITM)”?

                                                                  I truly don’t understand what your reasoning is. Would you be happier if apt used a custom TCP-based transport protocol instead of HTTP?

                                                                  1. 6

                                                                    I suspect that a big reason is cost.

                                                                    Debian mirrors will be serving an absurd amount of traffic, and will probably want to serve data as close to wire speed as possible (likely 10G). Adding a layer of TLS on top means you need to spend money on a powerful CPU or accelerator kit, instead of (mostly) shipping bytes from the disk to the network card.

                                                                    Debian won’t be made of money, and sponsors won’t want to spend more than they absolutely have to.

                                                                    1. 4

                                                                      But MITM clearly isn’t a problem for package distribution.

                                                                      It is though! Package managers still accept untrusted input data and usually do some parsing on it. apt has had vulnerabilities and pacman as well.

                                                                      https://justi.cz/security/2019/01/22/apt-rce.html

                                                                      https://xn--1xa.duncano.de/pacman-CVE-2019-18182-CVE-2019-18183.html

                                                                      TLS would not prevent malicious mirrors in either of these cases, but it would prevent MITM attacks exploiting these issues.

                                                                      1. 7

                                                                        Adding TLS implementations also brings bugs, including RCEs. And Debian is using GnuTLS for apt.

                                                                        1. 1

                                                                          Indeed. It was one of the reasons for OpenBSD to create signify, so I’m delighted to see Debian’s new approach might be based on it.

                                                                          From https://www.openbsd.org/papers/bsdcan-signify.html:

                                                                          … And if not CAs, then why use TLS? It takes more code for a TLS client just to negotiate hello than in all of signify.

                                                                          The first most likely option we might consider is PGP or GPG. I hear other operating systems do so. The concerns I had using an existing tool were complexity, quality, and complexity.

                                                                2. 7

                                                                  @sandro originally said: “When will all their mirrors support HTTPS?” Emphasis on “mirrors”. To the best of my knowledge, “mirror” in this context does not refer to a web site, or a copy thereof, but to a package repository.

                                                                  I responded specifically in this context. I was not talking about web sites, which rely on the transport mechanism for all security. In the context I was responding to, each package is signed. Your talk of JavaScript injection and other MITM attacks is simply off topic.

                                                          2. 9

                                                          ftp.XX.debian.org are CNAMEs to servers that agreed to host a mirror. These servers are handled by unrelated organisations, so it is not possible to provide a proper cert for them. This matches the level of trust: mirrors are trusted with neither the content nor your privacy. This is not the case for deb.debian.org, which is available over HTTPS if you want (ftp.debian.org is an alias for it).
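
                                                          So if you want HTTPS today, pointing at deb.debian.org is a one-line change in sources.list (the release name here is just an example):

                                                          # /etc/apt/sources.list
                                                          deb https://deb.debian.org/debian bullseye main
                                                          deb https://deb.debian.org/debian-security bullseye-security main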

                                                            1. 2

                                                            Offline mirrors, people without direct internet access, decades-later offline archives, people in the future, local DVD sets.

                                                              Why “trust” silent media?

                                                            1. 3

                                                              From my understanding, these kinds of speed runs are done without loading/saving states. So they have to play in one go from the start, without making an error, while doing some pixel-perfect moves. This seems impossible to me!

                                                              1. 5

                                                                It helps that the overall time is so short. You wouldn’t go for risky pixel-perfect tricks near the end of a 1h speedrun. But in SMB1 you can just reset and get back in literally a couple of minutes.

                                                                1. 4

                                                                  Right. If they use savestates it would be considered a segmented run or, depending on what else they did, a tool-assisted speedrun.

                                                                  This run is full of pixel- and frame-perfect moves. It’s incredible. As the video suggests, this might be the best a human can possibly do on SMB1. Even the best TAS is only like 0.6 seconds better and it does stuff that requires things like extreme subpixel manipulation and single-pixel accuracy in 4-2.

                                                                  Unless there’s some major breakthrough that we just can’t imagine right now, 4:54 is the last second barrier.

                                                                  In other words, it’s quite the accomplishment.

                                                                  (If you’re interested in this sort of thing, do check out the other videos on that channel. Bismuth does good stuff.)

                                                                  1. 4

                                                                    They do it over and over until some combination of practice and sheer luck means that they hit every critical move correctly.

                                                                  1. 78

                                                                    It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.

                                                                    Sometimes I think that Mozilla has been infiltrated by Google people to sabotage it. I have no evidence for this, but observed events don’t contradict it either.

                                                                    1. 24

                                                                      It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.

                                                                      I agree, but the deck is stacked against Mozilla. They are a relatively small nonprofit largely funded by Google. Structurally, there is no way they can make a product that competes. The problem is simply that there is no institutional counterweight to big tech right now, and the only real solutions are political: antitrust, regulation, maybe creating a publicly-funded institution with a charter to steward the internet in the way Mozilla was supposed to. There’s no solution to the problem merely through better organizational decisions or product design.

                                                                      1. 49

                                                                        I don’t really agree; there’s a lot of stuff they could be doing better, like not pushing out updates that change the colour scheme in such a way that it becomes nigh-impossible to see which tab is active. I don’t really care about “how it looks”, but this is just objectively bad. Maybe if you have some 16k super-HD IPS screen with perfect colour reproduction at full brightness in good office conditions it’s fine, but I just have a shitty ThinkPad screen and the sun in my home half the time (you know, like a normal person). It’s darn near invisible for me, and I have near-perfect eyesight (which not everyone has). I spent some time downgrading Firefox to 88 yesterday just for this – which it also doesn’t easily allow, not if you want to keep your profile anyway – because I couldn’t be arsed to muck about with userChrome.css hacks. Why can’t I just change themes? Or why isn’t there just a setting to change the colour?

                                                                        There’s loads of other things; one small thing I like to do is not have a “x” on tabs to close it. I keep clicking it by accident because I have the motor skills of a 6 year old and it’s rather annoying to keep accidentally closing tabs. It used to be a setting, then it was about:config, then it was a userChrome.css hack, now it’s a userChrome.css hack that you need to explicitly enable in about:config for it to take effect, and in the future I probably need to sacrifice a goat to our Mozilla overlords if I want to change it.
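
                                                                        For reference, this is the kind of incantation I mean – a sketch of the current hoops (the profile path is a placeholder; see about:support for yours):

                                                                        # hide the close button on tabs via userChrome.css
                                                                        PROFILE=~/.mozilla/firefox/XXXXXXXX.default-release
                                                                        mkdir -p "$PROFILE/chrome"
                                                                        printf '%s\n' '.tabbrowser-tab .tab-close-button { display: none !important; }' \
                                                                            >> "$PROFILE/chrome/userChrome.css"
                                                                        # then set toolkit.legacyUserProfileCustomizations.stylesheets to true
                                                                        # in about:config and restart Firefox.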

                                                                        I also keep accidentally bookmarking stuff. I press ^D to close terminal windows and sometimes Firefox is focused and oops, new bookmark for you! Want to configure keybinds for Firefox? Firefox say no; you’re not allowed, mere mortal end user; our keybinds are perfect and work for everyone, there must be something wrong with you if you don’t like it! It’s pretty darn hard to hack around this too – more time than I was willing to spend on it anyway – so I just accepted this annoyance as part of my life 🤷

                                                                        “But metrics show only 1% of people use this!” Yeah, maybe; but 1% here and 5% there and 2% somewhere else and before you know it you’ve annoyed half (if not more) of your userbase with a bunch of stuff like that. It’s the difference between software that’s tolerable and software that’s a joy to use. Firefox is tolerable, but not a joy. I’m also fairly sure metrics are biased, as power users especially tend to disable them, so while useful, blindly trusting them is probably not a good idea (I keep them enabled for this reason, to give some “power user” feedback too).

                                                                        Hell, I’m not even a “power user” really; I have maybe 10 tabs open at the most, usually much less (3 right now) and most settings are just the defaults because I don’t really want to spend time mucking about with stuff. I just happen to be a programmer with an interest in UX who cares about a healthy web and knows none of this is hard, just a choice they made.

                                                                        These are all really simple things; not rocket science. As I mentioned a few days ago, Firefox seems have fallen victim to a mistaken and fallacious mindset in their design.

                                                                        Currently Firefox sits in a weird limbo that satisfies no one: “power users” (which are not necessarily programmers and the like, loads of people with other jobs interested in computers and/or use computers many hours every day) are annoyed with Firefox because they keep taking away capabilities, and “simple” users are annoyed because quite frankly, Chrome gives a better experience in many ways (this, I do agree, is not an easy problem to solve, but it does work “good enough” for most). And hey, even “simple” users occasionally want to do “difficult” things like change something that doesn’t work well for them.

                                                                        So sure, while there are some difficult challenges Firefox faces in competing against Google, a lot of it is just simple every-day stuff where they just choose to make what I consider to be a very mediocre product with no real distinguishing features at best. Firefox has an opportunity to differentiate themselves from Chrome by saying “yeah, maybe it’s a bit slower – it’s hard and we’re working on that – but in the meanwhile here’s all this cool stuff you can do with Firefox that you can’t with Chrome!” I don’t think Firefox will ever truly “catch up” to Chrome, and that’s fine, but I do think they can capture and retain a healthy 15%-20% (if not more) with a vision that consists of more than “Chrome is popular, therefore, we need to copy Chrome” and “use us because we’re not Chrome!”

                                                                        1. 21

                                                                          Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”. Someone filed a bug requesting that this be changeable (not even that the default be changed); that bug is now 20 years old.

                                                                          It strikes me that this would be a great first issue for a new contributor, except the reason it’s been unfixed for so long is presumably that they don’t want it fixed.

                                                                          1. 9

                                                                            A shortcut to quit isn’t a problem, losing user data when you quit is a problem. Safari has this behaviour too, and I quite often hit command-Q and accidentally quit Safari instead of the thing I thought I was quitting (since someone on the OS X 10.8 team decided that the big visual clues differentiating the active window from the others were too ugly and removed them). It doesn’t bother me, because when I restart Safari I get back the same windows, in the same positions, with the same tabs, scrolled to the same position, with the same unsaved form data.

                                                                            I haven’t used Firefox for a while, so I don’t know what happens with Firefox, but if it isn’t in the same position then that’s probably the big thing to fix, since it also impacts experience across any other kind of browser restart (OS reboots, crashes, security updates). If accidentally quitting the browser loses you 5-10 seconds of time, it’s not a problem. If it loses you a load of data then it’s really annoying.

                                                                            1. 4

                                                                              Firefox does this when closing tabs (restoring closed tabs usually restores form content etc.) but not when closing the window.

                                                                              The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab 🤷

                                                                              1. 1

                                                                                The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab

                                                                                Does changing browser.tabs.closeWindowWithLastTab in about:config fix that?

                                                                                1. 1

                                                                                  I have it set to false already. I tested it to make sure and it doesn’t make a difference (^W on the last tab won’t close the window, as expected, but ^Q with one tab will still just quit).

                                                                              2. 2

                                                                                I quite often hit command-Q and accidentally quit Safari

                                                                                One of the first things I do when setting up a new macOS user for myself is adding alt-command-Q in Preferences → Keyboard → Shortcuts → App Shortcuts for “Quit Safari” in Safari. Saves my sanity every day.

                                                                                1. 1

                                                                                  Does this somehow remove the default ⌘Q binding?

                                                                                  1. 1

                                                                                    Yes, it changes the binding on the OS level, so the shortcut hint in the menu bar is updated to show the change

                                                                                    1. 1

                                                                                      It overrides it - Safari’s menu shows ⌥⌘Q against “Quit Safari”.

                                                                                    2. 1

                                                                                      You can do this in Windows for Firefox (or any browser) too with an AutoHotkey script. You can set it up to catch and handle a keypress combination before it reaches any other application. This will be global of course and will disable the Ctrl-Q hotkey in all your applications, but if you want to get into detail and write a more complex script, you can actually check which application has focus and only block the combination for the browser.

                                                                                    3. 2

                                                                                      This sounds like something Chrome gets right - if I hit CMD + Q I get a prompt saying “Hold CMD+Q to Quit” which has prevented me from accidentally quitting lots of times. I assumed this was MacOS behaviour, but I just tested Safari and it quit immediately.

                                                                                    4. 6

                                                                                      Disabling this shortcut with browser.quitShortcut.disabled works for me, but I agree that bug should be fixed.

                                                                                      1. 1

                                                                                        Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”.

                                                                                        That was fixed a long time ago, at least on Linux. When I press it, a modal says “You are about to close 5 windows with 24 tabs. Tabs in non-private windows will be restored when you restart.” ESC cancels.

                                                                                        1. 1

                                                                                  That’s strange. I’m using the latest Firefox on Linux, straight from Mozilla, and I don’t ever get a prompt. Another reply suggested a config tweak to try.

                                                                                          1. 1

                                                                                            I had that problem for a while but it went away. I have browser.quitShortcut.disabled as false in about:config. I’m not sure if it’s a default setting or not.

                                                                                            1. 1

                                                                                              quitShortcut

                                                                                        It seems that this defaults to false. The fact that you have it set to false but don’t experience the problem is counter-intuitive to me. Anyway, the other poster’s suggestion was to flip this, so I’ll try that. Thanks!

                                                                                              1. 1

                                                                                                That does seem backwards. Something else must be overriding it. I’m using Ubuntu 20.04, if that matters. I just found an online answer that mentions the setting.

                                                                                      2. 7

                                                                            On one level, I disagree – I have zero problems with Firefox. My only complaint is that websites built to be Chrome-only sometimes don’t work, which isn’t really Firefox’s problem, but the ecosystem’s problem (see my comment above about antitrust, etc). But I will grant you that Firefox’s UX could be better, that there are ways the browser could be improved in general. However, I disagree here:

                                                                                        retain a healthy 15%-20% (if not more)

                                                                                        I don’t think this is possible given the amount of resources Firefox has. No matter how much they improve Firefox, there are two things that are beyond their control:

                                                                                        1. Most users use Google products (gmail, calendar, etc), and without an antitrust case, these features will be seamlessly integrated into Chrome, and not Firefox.
                                                                            2. Increasingly, websites are simply not targeting Firefox for support, so normal users who want to, say, access online banking, are SOL on Firefox. (This happens to me, I still have to use Chrome for some websites.)

                                                                            Even the best product managers and engineers could not reverse Firefox’s decline. We need a political solution, unless we want the web to become Google Web (tm).

                                                                                        1. 3

                                                                                          Why can’t I just change themes?

                                                                                          You can. The switcher is at the bottom of the Customize Toolbar… view.

                                                                                          1. 2

                                                                                            Hm, last time I tried this it didn’t do much of anything other than change the colour of the toolbar to something else or a background picture; but maybe it’s improved now. I’ll have a look next time I try mucking about with 89 again; thanks!

                                                                                            1. 3

                                                                                              You might try the Firefox Colors extension, too. It’s a pretty simple custom theme builder.

                                                                                              1. 2

                                                                                                https://color.firefox.com/ to save the trouble of searching.

                                                                                          2. 4

                                                                              I agree with Firefox’s approach of choosing mainstream users over power users - that’s the only way they’ll ever have 10% or more of users. Firefox is doing things with theming that I wish other systems would do - they have full “fresco” themes (images?) in their chrome! It looks awesome! I dream about entire DEs and app suites built from the ground up with the same theme of frescoes (but with a different specific fresco for each specific app, perhaps tailored to that app). Super cool!

                                                                                            I don’t like the lack of contrast on the current tab, but “give users the choice to fix this very specific issue or not” tends to be extremely shortsighted - the way to fix it is to fix it. Making it optional means yet another maintenance point on an already underfunded system, and doesn’t necessarily even fix the problem for most users!

                                                                              More importantly, making ultra-specific options like that is usually pushing decisions onto the user as a method of avoiding internal politicking/arguments, and not because pushing to the user is the optimal solution for that specific design aspect.

                                                                                            1. 2

                                                                                              As for the close button, I am like you. You can set browser.tabs.tabClipWidth to 1000. Dunno if it is scheduled to be removed.

                                                                                As for most of the other gripes, adding options and features to cater to the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then a new feature needs to work with the option enabled and disabled. Removing options is just a way to keep the code lean.

                                                                                My favorite example in the distribution world is Debian. Debian tries to be the universal OS. We are drowning in having to support everything. For example, supporting many init systems is more work. People will come after you if there is a bug in an init system you don’t use. You spend time on this. At the end, people not liking systemd are still unhappy and switch to Devuan, which supports fewer init systems. I respect Mozilla for keeping a tight ship and maintaining only the features they can support.

                                                                                              1. 7

                                                                                                Nobody would say anything if their strategy worked. The core issue is that their strategy obviously doesn’t work.

                                                                                  adding options and features to cater to the needs of a small portion of users

                                                                                  It’s not even about that.

                                                                                                It’s removing things that worked and users liked by pretending that their preferences are invalid. (And every user belongs to some minority that likes a feature others may be unaware of.)

                                                                                                See the recent debacle of gradually blowing up UI sizes, while removing options to keep them as they were previously.

                                                                                                Somehow the saved cost to support some feature doesn’t seem to free up enough resources to build other things that entice users to stay.

                                                                                                All they do with their condescending arrogance on what their perfectly spherical idea of a standard Firefox user needs … is making people’s lives miserable.

                                                                                                They fired most of the people that worked on things I was excited about, and it seems all that’s left are some PR managers and completely out-of-touch UX “experts”.

                                                                                                1. 4

                                                                                    As for most of the other gripes, adding options and features to cater to the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then a new feature needs to work with the option enabled and disabled. Removing options is just a way to keep the code lean.

                                                                                                  It seems to me that having useful features is more important than having “lean code”, especially if this “lean code” is frustrating your users and making them leave.

                                                                                    I know it’s easy to shout stuff from the sidelines, and I’m also aware that there may be complexities I may not be aware of and that I’m mostly ignorant of the exact reasoning behind many decisions (most of us here are, really, although I’ve seen a few Mozilla people around), but what I do know is that 1) Firefox as a product has been moving in a certain direction for years, 2) that Firefox has been losing users for years, 3) that I know few people who truly find Firefox an amazing browser that is a joy to use, and that in light of that, 4) keeping on doing the same thing you’ve been doing for years is probably not a good idea, and 5) that doing the same thing but doing it harder is probably an even worse idea.

                                                                                                  I also don’t think that much of this stuff is all that much effort. I am not intimately familiar with the Firefox codebase, but how can a bunch of settings add an insurmountable maintenance burden? These are not “deep” things that reach in to the Gecko engine, just comparatively basic UI stuff. There are tons of projects with a much more complex UI and many more settings.

                                                                                    Hell, I’d argue that even removing RSS support was a mistake – they should have improved it instead; especially after Google Reader’s demise there was a huge missed opportunity there. Although as a maintenance-burden trade-off I can understand it better, it also demonstrates a lack of vision to just say “oh, it’s old crufty code, not used by many (not a surprise, it sucked), so let’s just remove it; people can just install an add-on if they really want it”. This also contradicts Firefox’s mantra of “most people use the defaults, and if it’s not used a lot we can just remove it”. Well, if that’s true then you can ship a browser with hardly any features at all, and since most people will use the defaults they will use a browser without any features.

                                                                                                  Browsers like Brave and Vivaldi manage to do much of this; Vivaldi has an entire full-blown email client. I’d wager that a significant portion of the people leaving Firefox are actually switching to those browsers, not Chrome as such (but they don’t show up well in stats as they identify as “Chrome”). Mozilla nets $430 million/year; it’s not a true “giant” like Google or Apple, but it’s not small either. Vivaldi has just 55 employees (2021, 35 in 2017); granted, they do less than Mozilla, but it doesn’t require a huge team to do all of this.

                                                                                    And every company has limited resources; it’s not like the Chrome team is a bottomless pit of resources either. A number of people in this thread express the “big Google vs. small non-profit Mozilla” sentiment here, but it doesn’t seem that clear-cut. I can’t readily find a size for the Chrome team on the ‘net, but I checked out the Chromium source code and let some scripts loose on that: there are ~460 Google people with non-trivial commits in 2020, although quite a bit seems to be for ChromeOS and not the browser part strictly speaking, so my guesstimate is more like 300 people. A large team? Absolutely. But Mozilla’s $430 million a year can match this with ~$1.5m/year per developer. My last company had ~70 devs on much less revenue (~€10m/year). Basically they have the money to spare to match the Chrome dev team person-for-person. Mozilla does more than just Firefox, but they can still afford to let a lot of devs loose on Gecko/Firefox (I didn’t count the number of devs for it, as I’ve got some other stuff I want to do this evening as well).
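
                                                                                    (Roughly this kind of counting, if anyone wants to reproduce it – not my exact scripts, which also tried to filter out trivial commits:)

                                                                                    # distinct @google.com authors with commits in 2020, from a chromium checkout
                                                                                    git log --since=2020-01-01 --until=2021-01-01 --format='%ae' \
                                                                                        | grep '@google.com$' | sort -u | wc -l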

                                                                                                  It’s all a matter of strategy; history is littered with large or even huge companies that went belly up just because they made products that didn’t fit people’s demands. I fear Firefox will be in the same category. Not today or tomorrow, but in five years? I’m not so sure Firefox will still be around to be honest. I hope I’m wrong.

As for your Debian comparison: an init system is a fundamental part of the system; it would be analogous to Firefox supporting different rendering or JS engines. It’s not even close to the same as “a UI to configure key mappings”, or “a bunch of settings for stuff you can already kind-of do with hacks that you need to explicitly search for and most users don’t know exist”, or even a “built-in RSS reader that’s really good and a great replacement for Google Reader”.

                                                                                                  1. 2

                                                                                                    I agree with most of what you said. Notably the removal of RSS support. I don’t work for Mozilla and I am not a contributor, so I really can’t answer any of your questions.

Another example of maintaining a feature would be ALSA support. It has been removed; this upsets some users, but to me it is understandable, as they don’t want to handle bug reports around it or have the code get in the way of other features or refactors. Of course, I use Pulseaudio, so I am quite biased.

                                                                                                    1. 4

I think ALSA is a bad example; just use Pulseaudio. It has long since been the standard, everyone uses it, and this really is an example of “147 people on Reddit who insist on having an überminimal Linux being angry”. It’s the kind of technical detail with no real user-visible changes that almost no one cares about: lots of effort with basically zero or extremely minimal tangible benefit.

And ALSA is not even a good or easy API to start with. I’m pretty sure the “ALSA purists” never actually tried to write any ALSA code, otherwise they wouldn’t be ALSA purists but ALSA haters; I’m confident there is not a single person who has programmed against ALSA who is not an ALSA hater to some degree.

Pulseaudio was pretty buggy for a while, and its developer’s attitude surrounding some of this didn’t really help: clearly, if tons of people are having issues then all those people are just “doing it wrong”, and that’s certainly not a reason to fix anything, right? There was a time when I had a keybind for pkill pulseaudio && pulseaudio --start because the damn thing just stopped working so often. The Grand Pulseaudio Rollout was messy, buggy, broke a lot of stuff, and absolutely could have been handled better. But all of that was over a decade ago, and it does actually provide value. Most bugs were fixed years ago, Poettering hasn’t been significantly involved since 2012, yet… people still hold an irrational hatred towards it 🤷

                                                                                                      1. 1

ALSA sucks, but PulseAudio is so much worse. It still doesn’t actually work outside the bare basics. Firefox forced me to put PA on, and since then my mic randomly spews noise, and sound between programs running as different user IDs is just awful. (I temporarily had that working better through some config changes; then a PA update – hoping to fix the mic bug – broke this… and didn’t fix the mic bug…)

I don’t understand why any program would use the PA API instead of the ALSA ones. All my ALSA programs (including several I’ve made myself, by the way – I love it whenever some internet commentator insists I don’t exist) work just as well as Pulse programs on a PA system, but also work fine on systems where audio actually works well (i.e. ALSA systems). Using the Pulse API seems to be nothing but negatives.

                                                                                                2. 1

                                                                                                  Not sure if this will help you but I absolutely cannot STAND the default Firefox theme so I use this: https://github.com/ideaweb/firefox-safari-style

I stick with Firefox over Safari purely because its devtools are 100x better.

                                                                                                3. 10

                                                                                                  There’s also the fact that web browsers are simply too big to reimplement at this point. The best Mozilla can do (barely) is try to keep up with the Google-controlled Web Platform specs, and try to collude with Apple to keep the worst of the worst from being formally standardized (though Chrome will implement them anyway). Their ability to do even that was severely impacted by their layoffs last year. At some point, Apple is going to fold and rebase Safari on Chromium, because maintaining their own browser engine is too unprofitable.

                                                                                                  At this point, we need to admit that the web belongs to Google, and use it only to render unto Google what is Google’s. Our own traffic should be on other protocols.

                                                                                                  1. 8

                                                                                                    For a scrappy nonprofit they don’t seem to have any issues paying their executives millions of dollars.

                                                                                                    1. 1

                                                                                                      I mean, I don’t disagree, but we’re still talking several orders of magnitude less compensation than Google’s execs.

                                                                                                      1. 5

                                                                                                        A shit sandwich is a shit sandwich, no matter how low the shit content is.

                                                                                                        (And no, no one is holding a gun to Mozilla’s head forcing them to hire in high-CoL/low-productivity places.)

                                                                                                    2. 1

                                                                                                      Product design can’t fix any of these problems because nobody is paying for the product. The more successful it is, the more it costs Mozilla. The only way to pay the rent with free-product-volume is adtech, which means spam and spying.

                                                                                                      1. 4

                                                                                                        Exactly why I think the problem requires a political solution.

                                                                                                    3. 8

I don’t agree this is a vague ethical reason. The genuinely vague ones are concerns like deforestation (and the destruction of habitats for smaller animals) to ship almond milk across the globe, or sewing as an alternative to poverty and prostitution, etc.

The browser privacy question is very quantifiable and concrete – the source is in the code – making it a concrete ethical-or-such choice.

ISTR there even being a study or two where people were asked about their willingness to be spied upon – people who had no idea their phones were already doing what was being asked about, and who were disconcerted after the fact. That’s also a concrete way to raise awareness.

                                                                                                      At the end of the day none of this may matter if people sign away their rights willingly in favor of a “better” search-result filter bubble.

                                                                                                      1. 11

I don’t think they’re vague (not the word I used) but rather abstract; maybe that’s not the best word either, but what I mean by it is that it’s a “far from my bed show”, as we would say in Dutch. Doing $something_better on these topics has zero or very few immediate tangible benefits, just more abstract long-term ones. And in addition it’s really hard to feel that you’re making a difference as a single individual. I agree with you that these are important topics; it’s just that this type of argument is simply not all that effective at making a meaningful impact. Perhaps it should be, but it’s not, and exactly because it’s important we need to be pragmatic about the best strategy.

And if you’re given the choice between “cheaper (or better) option X” and “more expensive (or inferior) option Y with abstract benefits but no immediate ones”, then I can’t really blame anyone for choosing X either. Life is short, there’s lots of important stuff in it, and you can’t expect everyone to always go out of their way to “do the right thing” – if you can even figure out what the “right thing” is (which is not always easy or black/white).

                                                                                                        1. 1

                                                                                                          My brain somehow auto-conflated the two, sorry!

I think we agree that the reasoning in these is suboptimal either way.

Personally I wish these articles weren’t so academic, and weren’t published in somewhat niche media; instead, mainstream publications should run “Studies show people do not like to be spied upon, yet they are – see the shocking results” clickbaity stuff.

                                                                                                          At least it wouldn’t hurt for a change.

                                                                                                          1. 1

                                                                                                            It probably wasn’t super-clear what exactly was intended with that in the first place so easy enough of a mistake to make 😅

As for articles, I’ve seen a bunch of them in mainstream Dutch newspapers in the last two years or so, so there is some amount of attention being given to this. But as I expanded on in my other, lengthier comment, I think the first step really ought to be making a better product. Not only is this by far the easiest thing to do and within our (the community’s) power to do, I strongly suspect it may actually be enough, or at least go a long way.

                                                                                                            It’s like investing in public transport is better than shaming people for having a car, or affordable meat alternatives is a better alternative than shaming people for eating meat, etc.

                                                                                                      2. 7

                                                                                                        I agree to an extent. Firefox would do well to focus on the user experience front.

                                                                                                        I switched to Firefox way back in the day, not because of vague concerns about the Microsoft hegemony, or even concerns about web standards and how well each browser implemented them. I switched because they introduced the absolutely groundbreaking feature that is tabbed browsing, which gave a strictly better user experience.

                                                                                                        I later switched to Chrome when it became obvious that it was beating Firefox in terms of performance, which is also a factor in user experience.

                                                                                                        What about these days? Firefox has mostly caught up to Chrome on the performance point. But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.

                                                                                                        These are the kinds of improvements that I’d like to see Firefox creating, in order to lure people back. You can’t guilt me into trying a new browser, you have to tempt me.

                                                                                                        1. 10

                                                                                                          But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.

Opera had this over ten years ago (“tab stacking”, added in Opera 11 in 2010). Pretty useful indeed, even with just a limited number of tabs. It even worked better than Chrome’s groups, IMO. Firefox almost-kind-of has this with container tabs, which are a nice feature actually (even though I don’t use them myself), and with a few UX enhancements on top of that you’ve got tab groups/stacking.

Opera also introduced tabbed browsing, by the way (in 2000 with Opera 4, about two years before Mozilla added it in Phoenix, which later became Firefox). Opera was consistently way ahead of the curve on a lot of things. A big reason it never took off was that for a long time you had to pay for it (until 2005), and after that it suffered from an “oh, I don’t want to pay for it” reputation for years. It also suffered from sites not working; this often (not always) wasn’t even Opera’s fault, as frequently it was just a stupid, pointless “check” on the website’s part – those were popular in those days for telling people not to use IE6, and many of them were poorly written and would either outright block Opera or display a scary message. And being a closed-source proprietary product also meant it never got the love from the FS/OSS crowd and the inertia that gives (not necessarily a huge inertia, but still).

                                                                                                          So Firefox took the world by storm in the IE6 days because it was free and clearly much better than IE6, and when Opera finally made it free years later it was too late to catch up. I suppose the lesson here is that “a good product” isn’t everything or a guarantee for success, otherwise we’d all be using Opera (Presto) now, but it certainly makes it a hell of a lot easier to achieve success.

                                                                                                          Opera had a lot of great stuff. I miss Opera 😢 Vivaldi is close (and built by former Opera devs) but for some reason it’s always pretty slow on my system.

                                                                                                          1. 1

                                                                                                            This is fair and I did remember Opera being ahead of the curve on some things. I don’t remember why I didn’t use it, but it being paid is probably why.

                                                                                                            1. 1

                                                                                                              I agree, I loved the Presto-era Opera and I still use the Blink version as my main browser (and Opera Mobile on Android). It’s still much better than Chrome UX-wise.

                                                                                                            2. 4

                                                                                                              I haven’t used tab groups, but it looks pretty similar to Firefox Containers which was introduced ~4 years ahead of that blog post. I’ll grant that the Chrome version is built-in and looks much more polished and general purpose than the container extension, so the example is still valid.

                                                                                                              I just wanted to bring this up because I see many accusations of Firefox copying Chrome, but I never see the reverse being called out. I think that’s partly because Chrome has the resources to take Mozilla’s ideas and beat them to market on it.

                                                                                                              Disclaimer: I’m a Mozilla employee

                                                                                                            3. 4

                                                                                                              One challenge for people making this kind of argument is that predictions of online-privacy doom and danger often don’t match people’s lived experiences. I’ve been using Google’s sites and products for over 20 years and have yet to observe any real harm coming to me as a result of Google tracking me. I think my experience is typical: it is an occasional minor annoyance to see repetitive ads for something I just bought, and… that’s about the extent of it.

                                                                                                              A lot of privacy advocacy seems to assume that readers/listeners believe it’s an inherently harmful thing for a company to have information about them in a database somewhere. I believe privacy advocates generally believe that, but if they want people to listen to arguments that use that assumption as a starting point, they need to do a much better job offering non-circular arguments about why it’s bad.

                                                                                                              1. 4

                                                                                                                I think it has been a mistake to focus on loss of privacy as the primary data collection harm. To me the bigger issue is that it gives data collectors power over the creators of the data and society as a whole, and drives destabilizing trends like political polarization and economic inequality. In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.

                                                                                                                1. 4

                                                                                                                  There is no brainwashing needed for people to act like people.

                                                                                                                  1. 1

                                                                                                                    do you disagree with something in my comment?

                                                                                                                    1. 3

                                                                                                                      In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.

                                                                                                                      I’m not @halfmanhalfdonut but I don’t think that brainwashing is needed to get humans to behave like this. This is just how humans behave.

                                                                                                                      1. 2

                                                                                                                        Yep, this is what I was saying.

                                                                                                                        1. 1

                                                                                                                          things like individualism, solidarity, and collaboration exist on a spectrum, and everybody exhibits each to some degree. so saying humans just are individualistic is tautological, meaningless. everyone has some individualism in them regardless of their upbringing, and that doesn’t contradict anything in my original comment. that’s why I asked if there was some disagreement.

                                                                                                                          to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.

                                                                                                                          @GrayGnome

                                                                                                                          1. 1

                                                                                                                            I think you’re only seeing the negative side (to you) of modern mass media and culture. Our media and culture also promote unity, tolerance, respect, acceptance, etc. You’re ignoring that so that you can complain about Google influencing media, but the reality is that the way you are comes from those same systems of conditioning.

The fact that you even know anything about income inequality and political polarization comes entirely FROM the media. People on the whole are not as politically divided as the media would have you believe.

                                                                                                                            1. 1

                                                                                                                              sure, I only mentioned this particular negative aspect because it was relevant to the point I was making in my original comment

                                                                                                                            2. 1

                                                                                                                              to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.

                                                                                                                              I think we’re going to have to agree to disagree. I can make a complicated rebuttal here, but it’s off-topic for the site, so cheers!

                                                                                                                              1. 1

                                                                                                                                cheers

                                                                                                                2. 3

I agree with everything you’ve written in this thread, especially when it comes to the abstractness of pro-Firefox arguments as of late. Judging from the votes, it seems I am not alone. It is sad to see Mozilla lose the favor of what used to be its biggest proponents, the “power” users. I truly believe they are digging their own grave – faster and faster, it seems. It’s unbelievable how incapable they seem to be of just backing down and admitting they were wrong about an idea, even a single time.

                                                                                                                  1. 2

                                                                                                                    Firefox does have many features that Chrome doesn’t have: container tabs, tree style tabs, better privacy and ad-blocking capabilities, some useful dev tools that I don’t think Chrome has (multi-line JS and CSS editors, fonts), isolated profiles, better control over the home screen, reader mode, userChrome.css, etc.

                                                                                                                  1. 13

                                                                                                                    Nowadays, there is the simpler ProxyJump. Also, you can use ssh-agent ssh -o AddKeysToAgent=confirm -o ForwardAgent=yes login@somehost to confirm each use of the key by the agent. See https://vincent.bernat.ch/en/blog/2020-safer-ssh-agent-forwarding for details on this last one.
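For illustration, a minimal sketch of both approaches (the host names are hypothetical, and ssh-add -c achieves the same per-use confirmation as AddKeysToAgent=confirm):

    # ~/.ssh/config – reach an internal host through a bastion without
    # ever forwarding the agent to the bastion:
    Host internal.example.com
        ProxyJump bastion.example.com

    # if you really do need agent forwarding, add keys so the agent asks
    # for confirmation on each use:
    ssh-add -c ~/.ssh/id_ed25519
    ssh -o ForwardAgent=yes login@somehost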

                                                                                                                    1. 8

This is such a weird take. Having a 3-year-old version continue to work is the point of something like Debian stable. Nothing prevents upstream from releasing their own binaries if they like (using something like AppImage if their dependencies are really gnarly, but even proprietary games seem to get by fine on “Linux” without that) – I don’t rely on my distro to get me the latest versions but to get me working versions that will keep working past the whims of upstream.

Now, the game being also on Steam seems like a good idea to reach those Steam users out there, but people like me who play OpenTTD from Debian stable will continue to happily do so.

                                                                                                                      1. 9

                                                                                                                        I think the point of the article is that Debian stable’s guarantees aren’t nearly as important for games. For e.g. Apache httpd, I really don’t want it to change except for security updates. But for OpenTTD, I probably would rather have the latest version, especially if I’m playing multiplayer.

                                                                                                                        What this article didn’t address was the usual solution to this problem (backports). I wish it had. I do find that sometimes I wish something was backported in Debian but it isn’t, whether because no one had time or was interested or whatever else.
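For what it’s worth, a sketch of what the backports route usually looks like on a bullseye system, assuming a backport of the package actually exists (openttd is just an illustrative package name):

    # enable the backports repository and pull the newer package from it
    echo 'deb http://deb.debian.org/debian bullseye-backports main' | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    sudo apt install -t bullseye-backports openttd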

                                                                                                                        1. 4

For Debian, browsers get an exception because they are too hard to maintain otherwise. The same could be done for games, but it is difficult to draw the line on where this should stop. So it’s easier to have very few exceptions and have everything else follow the rule.

                                                                                                                          1. 1

Newer versions can still have regressions or new instabilities – especially very new versions (the article even considers December’s release too old!).

It is true some games don’t keep their multiplayer stable, but that’s all the more reason to have an easy way for everyone to get an older version without changes!

                                                                                                                        1. 28

                                                                                                                          There’s an implied assumption in distros that old versions are stable and good. Unfortunately all packages are forced to conform to that, and it causes pain for packages that don’t fit this assumption.

                                                                                                                          Not every project can afford to maintain multiple old release branches and make it possible to backport fixes.

                                                                                                                          It’s super annoying for authors when users complain about bugs that have already been fixed years ago, and there’s no solution other than to tell users to stop using their distro.

                                                                                                                          1. 16

I wonder how much of this is the distro model being designed around… actually physical distributions. Debian in 1998 was, for many, an entire set of CDs, and all the little packages in it were assumed to be part of the operating system you were just slicing like a ham. It was both a freezing of the world at that point in time, and a pretense that it was all one big mass.

                                                                                                                            Likewise, how much did internet distribution change the assumptions that made the distros in the first place? Are they still valid ones? I’m thinking a lot about this and what my answer would be.

                                                                                                                            1. 4

I don’t think stable release cycles are tied to the physical distributions. It is assumed that users of a stable distro with a release cycle of 2 years just expect things to be stable for 2 years. If they need 5 years, they find a distribution with a 5-year release cycle. Distributions are often seen as responsible for distributing old software, but for most software, users just expect stability. People not interested in stability can use Arch or Debian Unstable.

                                                                                                                              The main problem is that for some piece of software, some users may want a more recent one. Debian answers this with backports (by Debian), Ubuntu with PPA (by random people). For desktop-type applications, there are distribution-agnostic methods, like Flatpak.
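As a sketch of the distribution-agnostic route, assuming the application is published on Flathub (OpenTTD is there as org.openttd.OpenTTD, last I checked):

    # add the Flathub remote once, then install straight from upstream
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.openttd.OpenTTD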

                                                                                                                              Releasing more often would be a pain as Debian (for example) would need to maintain several distributions in parallel. Currently, the maximum is two (during one year). With a release cycle of 6 months, this would mean 4 distributions simultaneously (old stable, stable, n-1 and n), like Ubuntu. We just don’t have the manpower for that (and no real demands either).

                                                                                                                              1. 2

                                                                                                                                Related to this, the package versions in a stable distro are known to work together. In the OpenTTD case this probably isn’t as big of a deal, but software packages in general are known to have problems when future versions of libraries are released. When you use, e.g. Debian stable, you’re assured that everything that worked yesterday will continue to work today.

                                                                                                                              2. 1

I don’t think so. I expect things on my LTS release to stay stable for some years, and I know that they won’t be the latest and greatest. Games with multiplayer and co. may just be unsuited for this, but they aren’t as relevant for stability as my file manager, login, or display manager.

                                                                                                                              3. 5

This might be slightly off-topic and might sound a bit like fanboyism, but I don’t mean it that way – I hope that others will pick this up, so it isn’t a somewhat unique feature anymore.

                                                                                                                                The BSDs for historical reasons split base and ports/packages. But it kind of developed into a feature and great care is taken in all of them on what goes in and out of the base system. The base is of course supposed to be stable.

But then there are the ports, which are not just “everything in there is rolling release”, but more fine-grained. For projects where it makes sense there are different versions, for example different PostgreSQL versions. So one can freely choose.

                                                                                                                                But it goes further. OpenBSD and FreeBSD also have flavors so you also get to pick and choose for (just because it’s famous) Python.

                                                                                                                                And if you are self compiling you get to choose different variations, let’s say you wanna build an old Postgres, with LibreSSL, but with a new (supported) PostGIS, you can do so.

And on top of that, for FreeBSD and NetBSD you get to choose whether you want the stable quarterly branches of the ports tree or the latest one with the latest versions, which (I think largely because they are usually not modified) are very stable and fit for server usage. All because you still have that stable base. (See the sketch below.)
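To make that concrete, a sketch of what this looks like on FreeBSD; the package names are examples and exact versions vary:

    # several PostgreSQL major versions coexist as separate packages:
    pkg search postgresql        # lists postgresql13-server, postgresql15-server, ...
    pkg install postgresql15-server
    # "flavors": the same port built against different Python versions
    # shows up as e.g. py39-foo vs py311-foo packages.
    # switching binary packages from the quarterly branch to latest is a
    # one-line override in /usr/local/etc/pkg/repos/FreeBSD.conf:
    #   FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }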

I think if I hadn’t used it for so long in very different production environments it would sound kind of messy to me, but it’s not like one constantly has to make a decision. It works out very nicely, and one doesn’t usually stumble across issues (certainly less frequently than in Debian, where I think the main issue stems from packages being modified (heavily patched), split, etc.).

                                                                                                                                It would be great to see something similar being undertaken in the Linux world. There have been quite a lot of situations where I only went with FreeBSD because of the above.

There is no technical reason for this not existing, so it very much surprises me that it doesn’t exist. There have been people using pkgsrc on Linux, but the point is not so much to bring pkgsrc itself to Linux as to bring those concepts to Linux. I think bringing in pkgsrc would be hard anyway, because a lot of it is naturally optimized for its main platforms, and the pkgsrc-based distros were by and large tiny one-person shows that never reached enough mass.

So I am wondering if I’m the only one who’d sometimes really like something like this to exist. I think something like Gentoo (or pretty much anything else, really) could still be used as a base for such an approach. Does such a project exist?

                                                                                                                                1. 1

I think it’s a bit more complicated WRT the BSDs, because FreeBSD is unifying ports/packages UX-wise while keeping the same release policy/separation. They also have the luxury of developing a stable base, whereas Linux components are disparate and separately developed. I think it’s a good thing (and a proven strategy) in the case of, say, FreeBSD, where they keep binary compatibility going, because it provides a stable base to build off of. Windows and macOS go further and put more components you’d need to rely on, like the GUI or audio, into a stable, ABI-compatible base.

                                                                                                                                  1. 1

                                                                                                                                    As a long time Debian user and developer, I’d love us to move to a base/ports-like model, and have a leaner Base than what “main” is today.

                                                                                                                                  2. 3

This is a fairly serious educational problem, I agree. Issues in a distro’s version should never be reported upstream, but to the distro.

                                                                                                                                    1. 3

                                                                                                                                      As an end user, what do I get out of reporting it to the distro and not the upstream if something breaks that doesn’t seem like a downstream issue? The triage can be useful, but not enough I’d think to report there first.

                                                                                                                                      Commercially, I do know what it’s like - I support a PHP distribution, but I think it has more merit than for say, the typical Linux distribution, because the proprietary platform we support it on isn’t well known by most PHP developers, there are necessary distribution differences, additional patches to make it work, etc. that means they get support from us - they usually pay for it though.

                                                                                                                                      1. 3

You get the benefit of reporting against the version you actually run, and maybe getting the version you actually run fixed. Reporting to upstream in the best case causes the fix to go into a version you are not using for possibly a long time, or ever; in the worst (and common) case it just annoys upstream.

                                                                                                                                        1. 1

                                                                                                                                          True, but how likely would I be able to get a fix in that case? If a bug is fixed in 0.9.3 and Debian ships 0.9.1, they don’t usually backport fixes like that unless it’s security, because it would break the entire point of stable.

                                                                                                                                          1. 1

I suppose it depends on whether the maintainer agrees it is a bug. The point of stable is to work and not break, so if something is already broken, a fix shouldn’t “break the point” – but of course this will vary by maintainer.

                                                                                                                                      2. 1

It’s not an educational problem, IMO; that’s just shunting the problem onto the user. It’s a UI problem: if there were some sort of standard bug-reporting platform that auto-included relevant info like the distro, I don’t see why upstream devs couldn’t set an automatic rule like “bugs from Debian stable are automatically forwarded to the Debian packagers, and the user is automatically sent a reply saying ‘hey, your distro is old as eff and we recommend using something newer’”.

                                                                                                                                        1. 2

I mean, there is a standard bug-reporting UI for Debian (reportbug); it can be run from either the shell or as a GUI. But I agree it needs to be more prominently featured in default desktop installs.
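E.g., a minimal invocation (the package name is just an example); reportbug gathers the distro, package version, and dependency information automatically:

    # file a bug against the packaged version, with version/distro info
    # collected for you:
    reportbug openttd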

                                                                                                                                      3. 1

                                                                                                                                        and there’s no solution other than to tell users to stop using their distro.

                                                                                                                                        Or distribute the game/software as a static binary and tell users to update manually/bundle an auto updater.

                                                                                                                                        1. 4

Static binaries help until you need NSS (for auth/name lookups) or, more realistically for a game, to get to libGL.

                                                                                                                                      1. 7

It is a fun read! It is also a good analogy for not fixing the root cause of a problem, a bit like a crontab that restarts the webserver whenever the load is too high, masking the problem (see the sketch below).
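For illustration, a hypothetical root crontab entry in that spirit; the schedule, threshold, and service name are all made up:

    # every 5 minutes: if the 1-minute load average is 8 or more, bounce
    # the webserver instead of finding out *why* the load is high (don't do this)
    */5 * * * * [ "$(cut -d' ' -f1 /proc/loadavg | cut -d. -f1)" -ge 8 ] && systemctl restart nginx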

                                                                                                                                        1. 2

I appreciate most of the arguments, but the counter-point around security misses the mark. For distributions, it is far easier to apply a patch to a single package; rebuilding or not is not really the difficulty. Now, if many applications bundle or pin specific versions, distributions need to patch each version. Some of these versions may be very old, and the patch may be harder to apply. This is a lot of work. Distributions cannot just bump the dependency, as that goes against the stability promise and may introduce bugs and changes. Distributions have to support what they ship for around 5 years (because many users use distributions for this exact purpose), while developers usually like to support things for a few months.

Unfortunately, neither side wants to move an inch. When packaging for Debian, I would appreciate being able to bundle dependencies instead of packaging each single dependency, but there must be some way to guarantee we are not just multiplying the amount of work we will have to provide in the future. However, this is not new: even with C, many devs do not like distributions freezing their software for 5 years.

                                                                                                                                          1. 11

                                                                                                                                            The real “issue” from the distro perspective is that they’re now trying to package ecosystems that work completely differently than the stuff they’re used to packaging, and specifically ecosystems where the build process is tied tightly to the language’s own tooling, rather than the distro’s tooling.

                                                                                                                                            This is why people keep talking about distros being stuck on twenty-years-ago’s way of building software. Or, really, stuck on C’s way of building software. C doesn’t come with a compiler, or a build configuration tool, or a standard way to specify dependencies and make sure they’re present and available either during build or at runtime. C is more or less just a spec for what the code ought to do when it’s run. So distros, and everybody else doing development in C, have come up with their own implementations for all of that, and grown used to that way of doing things.

                                                                                                                                            More recently-developed languages, though, treat a compiler and build tool and dependencies/packaging as a basic requirement, and tightly integrate to their standard tooling. Which then means that the distro’s existing and allegedly language-agnostic tooling doesn’t work, or at least doesn’t work as well, and may not have been as language-agnostic as they hoped.

                                                                                                                                            Which is why so many of the arguments in these threads have been red herrings. It’s not that “what dependencies does this have” is some mysterious unanswerable question in Rust, it’s that the answer to the question is available in a toolchain that isn’t the one the distro wants to use. It’s not that “rebuild the stuff that had the vulnerable dependency” is some nightmare of tracking down impossible-to-know information and hoping you caught and patched everything, it’s that it’s meant to be done using a toolchain and a build approach that isn’t the one the distro wants to use.
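For example, in any Cargo project the dependency question is answered by the toolchain itself (cargo tree ships with Cargo since 1.44; cargo-audit is a separate install):

    # the full resolved dependency graph, straight from the language toolchain:
    cargo tree
    # the same information in machine-readable form, for distro tooling to consume:
    cargo metadata --format-version 1
    # check the lockfile against the RustSec advisory database:
    cargo install cargo-audit
    cargo audit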

                                                                                                                                            And there’s not really a distro-friendly thing the upstream developers can do, because each distro has its own separate preferred way of doing this stuff, so that’s basically pushing the combinatorial nightmare upstream and saying “take the information you already provide in your language’s standard toolchain, and also provide and maintain one additional copy of it for each distro, in that distro’s preferred format”. The only solution is for the distros to evolve their tooling to be able to handle these languages, because the build approach used in Rust, Go, etc. isn’t going away anytime soon, and in fact is likely to become more popular over time.

                                                                                                                                            1. 5

                                                                                                                                              The only solution is for the distros to evolve their tooling to be able to handle these languages

                                                                                                                                              The nixpkgs community has been doing this a lot. Their response to the existence of other build tools has been to write things like bundix, cabal2nix and cargo2nix. IIRC people (used to) use cabal2nix to make the whole of hackage usable in nixpkgs?

                                                                                                                                              From the outside it looks like the nix community’s culture emphasizes a strategy of enforcing policy by making automations whose outputs follow it.
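As I understand the cabal2nix workflow, it is essentially a one-liner that turns a Cabal project into a Nix expression (a sketch; I haven’t verified this against recent versions):

    cabal2nix . > default.nix
    nix-build -E 'with import <nixpkgs> {}; haskellPackages.callPackage ./default.nix {}'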

                                                                                                                                              1. 4

                                                                                                                                                Or, really, stuck on C’s way of building software.

                                                                                                                                                I think it’s at least slightly more nuanced than that. Most Linux distributions, in particular, have been handling Perl modules since their earliest days. Debian/Ubuntu use them fairly extensively even in base system software. Perl has its own language ecosystem for building modules, distributing them in CPAN, etc., yet distros have generally been able to bundle Perl modules and their dependencies into their own package system. End users are of course free to use Perl’s own CPAN tooling, but if you apt-get install something on Debian that uses Perl, it doesn’t go that route, and instead pulls in various libxxx-perl packages. I don’t know enough of the details to know why Rust is proving more intractable than Perl though.
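The mapping is mechanical enough that it’s easy to see how distros automated it; for instance, the CPAN distribution JSON::XS corresponds to Debian’s libjson-xs-perl package:

    cpan JSON::XS                     # upstream ecosystem tooling
    apt-get install libjson-xs-perl   # the same module, distro-packaged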

                                                                                                                                                1. 6

                                                                                                                                                  I don’t know enough of the details to know why Rust is proving more intractable than Perl though

                                                                                                                                                  There is a big difference between C, Perl, Python on the one side and Rust on the other.

The former have a concept of “search path”: there’s a global namespace where all libraries live. That’s the include path for C, PYTHONPATH for Python, and @INC for Perl. To install a library, you put it into some blessed directory on the file system, and it becomes globally available. The corollary is that everyone uses the same version of a library; if you try to install two different versions, you get a name conflict.

Rust doesn’t have a global search path / global namespace. “Installing a Rust library” is not a thing. Instead, when you build a piece of Rust software, you need to explicitly specify the path to every dependency. Naturally, doing this by hand is hard, so the build system (Cargo) has a lot of machinery for wiring a set of interdependent crates together.
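Concretely, this shows up in what Cargo ultimately invokes: rustc takes an explicit --extern name=path flag per direct dependency rather than consulting any search path (the hash in the file name below is made up):

    # roughly what `cargo build` runs under the hood for a crate using serde:
    rustc --edition 2021 src/main.rs \
        --extern serde=target/debug/deps/libserde-0123abcd.rlib
    # contrast with C, where -lfoo is resolved against a global search path:
    cc main.c -lfoo    # searches /usr/lib, /usr/local/lib, ...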

                                                                                                                                                  1. 2

                                                                                                                                                    there’s a global namespace where all libraries live

                                                                                                                                                    Yes, this is one of the biggest differences. Python, Perl, etc. come out of the Unix-y C-based tradition of not having a concept of an “application” you run or a “project” you work on, but instead only of a library search path that’s assumed to be shared by all programs in that language, or at best per-user unique so that one user’s set of libraries doesn’t pollute everyone else’s.

                                                                                                                                                    Python has trended away from this and toward isolating each application/project – that’s the point of virtual environments – but does so by just creating a per-virtualenv search path.
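A quick way to see that a virtualenv is “just” a private search path (the exact paths depend on the Python version):

    python3 -m venv .venv
    # the venv's interpreter carries its own site-packages on sys.path:
    ./.venv/bin/python -c 'import sys; print(sys.path)'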

                                                                                                                                                    More recently-developed languages like Rust have avoided ever using the shared-search-path approach in the first place, and instead isolate everything by default, with its own project-local copies of all dependencies.

                                                                                                                                                    (the amount of code generation/specialization that happens at compile time in Rust for things like generics is a separate issue, but one that distros – with their ability to already handle C++ – should in theory not have trouble with)

                                                                                                                                                2. 4

                                                                                                                                                  This is why people keep talking about distros being stuck on twenty-years-ago’s way of building software. Or, really, stuck on C’s way of building software. C doesn’t come with a compiler, or a build configuration tool, or a standard way to specify dependencies and make sure they’re present and available either during build or at runtime. C is more or less just a spec for what the code ought to do when it’s run. So distros, and everybody else doing development in C, have come up with their own implementations for all of that, and grown used to that way of doing things.

More than that, I’d say distros are language-specific package managers built around autotools+C.

                                                                                                                                              1. 7

                                                                                                                                                this is remarkable!

For the sake of my understanding, what are the other popular options for installing a drop-in C/C++ cross compiler? A long time ago I used Sourcery CodeBench, but I think that was a paid product.

                                                                                                                                                1. 7

                                                                                                                                                  Clang is a cross-compiler out of the box, you just need headers and libraries for the target. Assembling a sysroot for a Linux or BSD system is pretty trivial, just copy /usr/{local}/include and /usr/{local}/lib and point clang at it. Just pass a --sysroot={path-to-the-sysroot} and -target {target triple of the target} and you’ve got cross compilation. Of course, if you want any other libraries then you’ll also need to install them. Fortunately, most *NIX packaging systems are just tar or cpio archives, so you can just extract the ones you want in your sysroot.
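
For illustration, a minimal sketch of that invocation wrapped in Python; the sysroot path and target triple below are placeholders, not taken from any real setup:

```python
import subprocess

# Placeholders: point these at a sysroot assembled as described above and
# at whatever target triple you are building for.
sysroot = "/path/to/sysroot"
triple = "aarch64-unknown-linux-gnu"

# Equivalent to: clang --target=<triple> --sysroot=<sysroot> -o hello hello.c
subprocess.run(
    ["clang", f"--target={triple}", f"--sysroot={sysroot}", "-o", "hello", "hello.c"],
    check=True,
)
```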

It’s much harder for the Mac. The licenses for the Apple headers, linker files, and everything else that you need explicitly prohibit this kind of use. I couldn’t see anything in the Zig documentation that explains how they get around this. Hopefully they’re not just violating Apple’s license agreement…

                                                                                                                                                  1. 3

                                                                                                                                                    Zig bundles Darwin’s libc, which is licensed under APSL 2.0 (see: https://opensource.apple.com/source/Libc/Libc-1044.1.2/APPLE_LICENSE.auto.html, for example).

                                                                                                                                                    APSL 2.0 is both FSF and OSI approved (see https://en.wikipedia.org/wiki/Apple_Public_Source_License), which makes me doubt that this statement is correct:

The licenses for the Apple headers, linker files, and everything else that you need explicitly prohibit this kind of use.

                                                                                                                                                    That said, if you have more insight, I’m definitely interested in learning more.

                                                                                                                                                    1. 1

I remember some discussion about these topics on the Guix mailing lists, arguing convincingly why Guix on Darwin isn’t feasible due to licensing issues. Might have been this: https://lists.nongnu.org/archive/html/guix-devel/2017-10/msg00216.html

                                                                                                                                                    2. 1

The licenses for the Apple headers, linker files, and everything else that you need explicitly prohibit this kind of use.

Can’t we doubt the legal validity of such a prohibition? Copyright often doesn’t apply where it would otherwise prevent interoperability. That’s why we have third-party printer cartridges, for instance.

                                                                                                                                                      1. 2

No, interoperability is an affirmative defence against copyright infringement, but it’s up to a court to decide whether it applies.

                                                                                                                                                    3. 4

                                                                                                                                                      When writing the blog post I googled a bit about cgo specifically and the only seemingly general solution for Go I found was xgo (https://github.com/karalabe/xgo).

                                                                                                                                                      1. 2

This version of xgo does not seem to be maintained anymore; I think most xgo users now use https://github.com/techknowlogick/xgo

I use it myself, and although the tool is very heavy, it works pretty reliably and does what is advertised.

                                                                                                                                                        1. 2

Thanks for mentioning this @m90. I’ve been maintaining my fork for a while, and just last night I automated creating PRs when new versions of golang are detected, to reduce time to creation even more.

                                                                                                                                                      2. 3

https://github.com/pololu/nixcrpkgs will let you write nix expressions that will be reproducibly cross-compiled, but you also need to learn nix to use it. The initial setup and the learning curve are a lot more demanding than zig cc and zig c++.

                                                                                                                                                        1. 3

                                                                                                                                                          Clang IIRC comes with all triplets (that specify the target, like powerpc-gnu-linux or whatever) enabled OOTB. You can then just specify what triplet you want to build for.

                                                                                                                                                          1. 2

                                                                                                                                                            But it does not include the typical build environment of the target platform. You still need to provide that. Zig seems to bundle a libc for each target.

                                                                                                                                                            1. 2

                                                                                                                                                              I have to wonder how viable this will be when your targets become more broad than Windows/Linux/Mac…

                                                                                                                                                              1. 6

                                                                                                                                                                I think the tier system provides some answers.

                                                                                                                                                                1. 3

                                                                                                                                                                  One of the points there is that libc is available when cross-compiling.

On *NIX platforms, there are a bunch of objects that are statically linked into every executable to provide what you need to get to main. These used to be problematic for anything other than GCC to use, because the GCC exemption to GPLv2 only allowed you to ignore the GPL if the thing that inserted them into your program was GCC. In GCC 4.3 and later, the GPLv3 exemption extended this to any ‘eligible compilation process’, which allows them to be used by other compilers / linkers. I believe most *BSD systems now use code from NetBSD (which rewrote a lot of the CSU stuff) and LLVM’s compiler-rt. All of these are permissively licensed.

                                                                                                                                                                  If you’re dynamically linking, you don’t actually need the libc binary, you just need something that has the same symbols. Apple’s ld64 supports a text file format here so that Apple doesn’t have to ship all of the .dylib files for every version of macOS and iOS in their SDKs. On ELF platforms, you can do a trick where you strip everything except the dynamic symbol tables from the .so files: the linker will still consume them and produce a binary that works if you put it on a filesystem with the original .so.

As far as I am aware, macOS does not support static linking for libc. They don’t ship a libc.a, and their libc.dylib links against libSystem.dylib, which is the public system call interface (and does change between minor revisions, which broke every single Go program, because Go ignored the rules). If I understand correctly, a bunch of the files that you need to link a macOS or iOS program have a license that says that you may only use them on a Mac. This is why the Visual Studio Mac target needs a Mac connected to the network to remotely access and compile on, rather than cross-compiling on a Windows host.

I understand technically how to build a C/C++ cross-compilation toolchain: I’ve done it many times before. The thing I struggle with on Zig is how they do so without violating a particularly litigious company’s license terms.

                                                                                                                                                                  1. 2

                                                                                                                                                                    This elucidates a lot of my concerns better than I could have. I have a lot of reservations about the static linking mindset people get themselves into with newer languages.

To be specific on the issue you bring up: most systems that aren’t Linux either heavily discourage static libc or ban it outright, and their libcs are consistent, unlike Linux’s, so there’s not much point in a static libc. libc as an import library that links to the real one makes a lot of sense there.

                                                                                                                                                        1. 8

Tomorrow seems to be a very bad day for all those poor souls who didn’t have the time/resources to switch to py3 yet. Fortunately it can easily be fixed by pinning pip<21, but it will definitely add additional grey hairs to some heads.

                                                                                                                                                          1. 7

                                                                                                                                                            As one of those poor souls, thanks. We have eight years of legacy code that Just Works and so seldom gets touched, and a major 3rd party framework dependency that hasn’t updated to Python 3 either. We just got permission and funding to form a new engineering sub-group to try to deal with this sort of thing, and upper management is already implicitly co-opting it to chase new shinies.

                                                                                                                                                            1. 9

                                                                                                                                                              Python 3.0 was released in 2008. I personally find it hard to feel sympathy for anyone who couldn’t find time in the last twelve years to update their code, especially if it’s code they are still using today. Even more so for anyone who intentionally started a Python 2 project after the 3.0 ecosystem had matured.

                                                                                                                                                              1. 9

Python 2.7 was released in 2010. Python 3.3 in 2012. Python 2.6’s last release was in 2013. Only from that date could people easily release stuff compatible with both Python 2 and Python 3. You may also want to take into consideration the end-of-support dates of some of the distributions shipping Python 2.6 and not Python 2.7 (like Debian Squeeze, 2016).

I am not saying that 8 years is too fast, but Python 3.0’s release date is mostly irrelevant, as the ecosystem didn’t use it.

                                                                                                                                                                1. 7

                                                                                                                                                                  Python 3.0 was not something you wanted to use; it took several releases before Python 3 was really ready for people to write programs on. Then it took longer for good versions of Python 3 to propagate into distributions (especially long term distributions), and then it took longer for people to port packages and libraries to Python 3, and so on and so forth. It has definitely not been twelve years since the ecosystem matured.

                                                                                                                                                                  Some people do enough with Python that it’s sensible for them to build and maintain their own Python infrastructure, so always had the latest Python 3. Many people do not and so used supplied Python versions, and may well have stable Python code that just works and they haven’t touched in years (perhaps because they are script-level infrastructure that just sits there working, instead of production frontend things that are under constant evolution because business needs keep changing).

                                                                                                                                                                  1. 4

Some of our toolchain broke in the last few weeks. We ported to python3 ages ago, but chunks of infrastructure still support both, and some even still default to 2. The virtualenv binary in Ubuntu 18.04 does that; and that’s a still-supported Ubuntu version, and the default runner for GitHub CI.

                                                                                                                                                                    I think python2-related pain will continue for years to come even for people who have done the due diligence on their own code.

                                                                                                                                                                    1. 4

Small tip regarding virtualenv: since Python 3.3, virtualenv’s core functionality comes bundled as the venv module, so you can just use python -m venv instead of virtualenv; then you are certain it matches the Python version you are using.
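
For what it’s worth, the venv module can also be driven programmatically; a minimal sketch (the directory name is arbitrary):

```python
import venv

# Programmatic equivalent of `python -m venv .venv`: the environment is
# created from whichever interpreter runs this script, so the versions
# always match.
venv.create(".venv", with_pip=True)
```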

                                                                                                                                                                      1. 1

virtualenv has some nice features which do not exist for venv. One example is the activate_this.py script, which can be used for configuring a remote environment, similar to what pytest_cloud does.

                                                                                                                                                                        1. 1

                                                                                                                                                                          virtualenv has some nice features which do not exist for venv

                                                                                                                                                                          Huh, thanks for pointing that out. I haven’t been writing so much Python in the last few years, and I totally thought venv and virtualenv were the same thing.

                                                                                                                                                                    2. 4

                                                                                                                                                                      Consider, at a minimum, the existence of PyPy; PyPy’s own position is that PyPy will support Python 2.7 forever because PyPy is written in RPython, a strict subset of Python 2.7.

                                                                                                                                                                      Sympathy is not required; what you’re missing out on is an understanding that Python is not wholly under control of the Python Software Foundation. By repeatedly neglecting PyPy, the PSF has effectively forced them to create their own parallel Python 2 infrastructure; when PyPI finally makes changes which prevent Python 2 code from deploying, then we may see PyPy grow even more tooling and possibly even services to compensate.

                                                                                                                                                                      It is easy for me to recognize in your words an inkling of contempt for Python 2 users.

                                                                                                                                                                      1. 21

                                                                                                                                                                        Every time you hop into one of these threads, you frame it in a way which implies you think various entities are obligated to maintain a Python 2 interpreter, infrastructure for supporting Python 2 interpreters, and versions of third-party packages which stay compatible with Python 2, for all of eternity.

                                                                                                                                                                        Judging from that last thread, you seem to think I am one of the people who has that obligation. Could you please, clearly, state to me the nature of this obligation – is its basis legal? moral? something else? – along with its origin and the means by which you assume the right to impose it on me.

                                                                                                                                                                        I ask because I cannot begin to fathom where such an obligation would come from, nor do I understand why you insist on labeling it “contempt” when other people choose not to maintain software for you, in the exact form you personally prefer, for free, forever, anymore.

                                                                                                                                                                        1. 2

                                                                                                                                                                          Your sympathy, including any effort or obligation that you might imagine, is not required. I don’t know how to put it any more clearly to you: You have ended up on the winning side of a political contest within the PSF, and you are antagonizing members of the community who lost for no other reason than that you want the political divide to deepen.

                                                                                                                                                                          Maybe, to get some perspective, try replacing “Python 2” with “Perl 5” and “Python 3” with “Raku”; that particular community resolved their political divide recently and stopped trying to replace each other. Another option for perspective: You talk about “these threads”; what are these threads for, exactly? I didn’t leave a top-level comment on this comment thread; I didn’t summon you for the explicit purpose of flamewar.

                                                                                                                                                                          Finally, why not reread the linked thread? I not only was clearly the loser in that discussion, but I also explained that I personally am not permanently tied to Python 2, and that I’m trying to leave the ecosystem altogether in order to avoid these political problems. Your proposed idea of obligation towards me is completely imagined and meant to make you seem like a victim.

                                                                                                                                                                          Here are some quotes which I think display contempt towards Python 2 and its users, from the previous thread (including your original post) and also the thread before that one:

                                                                                                                                                                          If PyPy wants to internally maintain the interpreter they use to bootstrap, I don’t care one way or another. But if PyPy wants that to also turn into broad advertisement of a supported Python 2 interpreter for general use, I hope they’d consider the effect it will have on other people.

                                                                                                                                                                          Want to keep python 2 alive? Step up and do it.

                                                                                                                                                                          What do you propose they do then? Extend Python 2 support forever and let Python 2 slow down Python 3 development for all time?

                                                                                                                                                                          That’s them choosing and forever staying on a specific dependency. … Is it really that difficult for Python programmers to rewrite one Python program in the newer version of Python? … Seems more fair for the project that wants the dependency to be the one reworking it.

                                                                                                                                                                          The PyPy project, for example, is currently dependent on a Python 2 interpreter to bootstrap and so will be maintaining their own either for as long as PyPy exists, or for as long as it takes to migrate to bootstrapping on Python 3 (which they seem to think is either not feasible, or not something they want to do).

                                                                                                                                                                          He’s having a tantrum. … If you’re not on 3, it’s either a big ball of mud that should’ve been incrementally rewritten/rearchitected (thus exposing bad design) or you expected an ecosystem to stay in stasis forever.

                                                                                                                                                                          I’m not going to even bother with your “mother loved you best” vis a vis PyPy.

                                                                                                                                                                          You’re so wrapped up in inventing enemies that heap contempt on you, but it’s just fellow engineers raising their eyebrows at someone being overly dramatic. Lol contempt. 😂😂😂

                                                                                                                                                                          If I didn’t already have a long history of knowing other PyPy people, for example, I’d be coming away with a pretty negative view of the project from my interactions with you.

                                                                                                                                                                          What emotional word would you use to describe the timbre of these attitudes? None of this has to do with maintainership; I don’t think that you maintain any packages which I directly require. I’m not asking for any programming effort from you. Indeed, if you’re not a CPython core developer either, then you don’t have the ability to work on this; you are also a bystander. I don’t want sympathy; I want empathy.

                                                                                                                                                                          1. 6

                                                                                                                                                                            You have ended up on the winning side of a political contest within the PSF, and you are antagonizing members of the community who lost for no other reason than that you want the political divide to deepen.

                                                                                                                                                                            And this is where the problem lies. Your behavior in the previous thread, and here, makes clear that your approach is to insult, attack, or otherwise insinuate evil motives to anyone who disagrees with you.

                                                                                                                                                                            Here are some quotes which I think display contempt towards Python 2 and its users

                                                                                                                                                                            First of all, it’s not exactly courteous to mix and match quotes from multiple users without sourcing them to who said each one. If anyone wants to click through to the actual thread, they’ll find a rather different picture of, say, my engagement with you. But let’s be clear about this “contempt”.

                                                                                                                                                                            In the original post, I said:

                                                                                                                                                                            The PyPy project, for example, is currently dependent on a Python 2 interpreter to bootstrap and so will be maintaining their own either for as long as PyPy exists, or for as long as it takes to migrate to bootstrapping on Python 3 (which they seem to think is either not feasible, or not something they want to do).

                                                                                                                                                                            You quoted this and replied:

                                                                                                                                                                            This quote is emblematic of the contempt that you display towards Python users.

                                                                                                                                                                            I remain confused as to what was contemptuous about that. You yourself have confirmed that PyPy is in fact dependent on a Python 2 interpreter, and your own comments seem to indicate there is no plan to migrate away from that dependency. It’s simply a statement of fact. And the context of the quote you pulled was a section exploring the difference between “Python 2” the interpreter, and “Python 2” the ecosystem of third-party packages. Here’s the full context:

                                                                                                                                                                            Unfortunately for that argument, Python 2 was much more than just the interpreter. It was also a large ecosystem of packages people used with the interpreter, and a community of people who maintained and contributed to those packages. I don’t doubt the PyPy team are willing to maintain a Python 2 interpreter, and that people who don’t want to port to Python 3 could switch to the PyPy project’s interpreter in order to have a supported Python 2 interpreter. But a lot of those people would continue to use other packages, too, and as far as I’m aware the PyPy team hasn’t also volunteered to maintain Python 2 versions of all those packages.

                                                                                                                                                                            So there’s a sense in which I want to push back against that messaging from PyPy folks and other groups who say they’ll maintain “Python 2” for years to come, but really just mean they’ll maintain an interpreter. If they keep loudly announcing “don’t listen to the Python core team, Python 2 is still supported”, they’ll be creating additional burdens for a lot of other people: end users are going to go file bug reports and other support requests to third-party projects that no longer support Python 2, because they heard “Python 2 is still supported”, and thus will feel entitled to have their favorite packages still work.

                                                                                                                                                                            Even if all those requests get immediately closed with “this project doesn’t support Python 2 anymore”, it’s still going to take up the time of maintainers, and it’s going to make the people who file the requests angry because now they’ll feel someone must be lying to them — either Python 2 is dead or it isn’t! — and they’ll probably take that anger out on whatever target happens to be handy. Which is not going to be good.

                                                                                                                                                                            This is why I made comments asking you to consider the effect of your preferred stance on other people (i.e., on package maintainers). This is why I repeated my point in the comments of the previous thread, that an interpreter is a necessary but not sufficient condition for saying “Python 2 is still supported”. I don’t think these are controversial statements, but apparently you do. I don’t understand why.

                                                                                                                                                                            I also still don’t understand comments of yours like this one:

                                                                                                                                                                            Frankly, I think that you show your hand when you say “really important packages like NumPy/SciPy.” That’s the direction that you want Python to go in.

                                                                                                                                                                            Again, this is just a statement of fact. There are a lot of people using Python for a lot of use cases, and many of those use cases are dependent on certain domain-specific libraries. As I said in full:

                                                                                                                                                                            So regardless of whether I use them or not, NumPy and SciPy are important packages. Just as Jupyter (née IPython) notebooks are important, even though I don’t personally use them. Just as the ML/AI packages are important even though I don’t use them. Just as Flask and SQLAlchemy are important packages, even though I don’t use them. Python’s continued success as a language comes from the large community of people using it for different things. The fact that there are large numbers of people using Python for not-my-use-case with not-the-libraries-I-use is a really good thing!

                                                                                                                                                                            Your words certainly imply you think it’s a bad thing that there are, for example, people using NumPy and SciPy, or at least that you think that’s a bad direction for Python to go in. I do not understand why, and you’ve offered no explanation other than to hand-wave it as “contempt” and “denigration”.

                                                                                                                                                                            But really the thing I do not understand is this:

                                                                                                                                                                            You have ended up on the winning side of a political contest within the PSF

                                                                                                                                                                            You seem to think that “the PSF” and/or some other group of people or entities in the Python world are your enemy, because they chose to move to Python 3 and to stop dedicating their own time and resources to maintaining compatibility with and support for Python 2. The only way that this would make any sense is if those entities had some sort of obligation, to you or to others, to continue maintaining compatibility with and support for Python 2. Hence I have asked you for an explanation of the nature and origin of that obligation so that I can try to understand the real root of why you seem to be so angry about this.

                                                                                                                                                                            Admittedly I don’t have high hopes for getting such an explanation, given what happened last time around, but maybe this time?

                                                                                                                                                                            1. 4

                                                                                                                                                                              Your behavior in the previous thread, and here, makes clear that your approach is to insult, attack, or otherwise insinuate evil motives to anyone who disagrees with you.

                                                                                                                                                                              As Corbin has said themselves multiple times, they are not a nice person. So unfortunately you can’t really expect anything better than this.

                                                                                                                                                                    3. 2

Why will tomorrow be a bad day? pip will continue to work. They’re just going to stop releasing updates.

                                                                                                                                                                      1. 1

From my OpenStack experience: many automated gates could go south because they do something like pip install pip --upgrade, hence dropping support for py2. I know that whoever is involved in this conundrum should know better and should introduce some checks. But I also know that we’re all human, hence prone to making errors.

                                                                                                                                                                        1. 2

                                                                                                                                                                          pip install pip --upgrade should still work, unless the pip team screwed something up.

When you upload something to PyPI, you can specify a minimum supported Python version. So Python 2.7 will get the latest version that still supports Python 2.

                                                                                                                                                                          And indeed, if you go to https://pypi.org/project/pip/ you will see “Requires: Python >= 3.6”, so I expect things will Just Work for most Python 2 users.
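
As a sketch of the check that happens here (this uses the third-party packaging library, which I believe pip vendors for exactly this purpose):

```python
from packaging.specifiers import SpecifierSet

# The "Requires: Python >= 3.6" metadata from the pip project page,
# expressed as a specifier set.
requires_python = SpecifierSet(">=3.6")

print("2.7" in requires_python)  # False -> 2.7 users resolve to an older pip
print("3.6" in requires_python)  # True
```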

                                                                                                                                                                    1. 1

                                                                                                                                                                      The Vary header should always be sent. Otherwise, the non-301 response could be cached.

                                                                                                                                                                      1. 2

Until a few years back, I was also running a dedicated Hetzner server. From an availability point of view, this is a bit of a source of stress, as everything is running on a single server, which would get problems from time to time (the most common being a hard disk failure, promptly fixed by Hetzner’s technical team). I am now using several VPSes, as it gives me redundancy. Sure, you don’t get as much memory and CPU for the same price.

                                                                                                                                                                        1. 4

Yeah, I’m aware it’s putting a lot of eggs in one basket; however, given that most of the important services are either stateless or excessively backed up, I’m not practically concerned.

                                                                                                                                                                          1. 2

common being a hard disk failure

This is why I’m using a KVM host with managed SSD RAID 10 and guaranteed CPU, memory, and network*. Yeah, you will always get somewhat more performance on a bare-metal system you own, but I haven’t had to work around a broken disk or system since 2012 on my personal host. I still have enough performance for multiple services and 3 bigger game systems + VoIP. The only downtime I had was for ~1h when the whole node broke and my system got transferred to another host, but I didn’t have to do anything for it. That way I haven’t had any problems, even on the services that need to run 24/7 or people will notice.

*And I don’t mean a managed server; that’d be far too expensive. Just something like this.