1. 2

    A nice idea, but if I’m going to go beyond the scripting abilities of the shell I’m using, and pipe in and out of a fuller language, I’d turn to Ruby.

    1. 2

      Thanks. Ruby is nice too. This is for the JS programmers out there. Also, for all its drawbacks npm has a module for pretty much everything.

      Add: I think the strength of basho is in how it lets you write and chain expressions with minimal typing. If you used node instead, you could achieve some (not all) of the same things, but with a lot of additional keystrokes.
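
      For comparison, here’s a rough sketch of the plain-node version of a trivial line filter (read stdin, keep lines containing “error”). This is hypothetical example code, not basho syntax; it just shows the scaffolding a chained one-liner saves you:

      ```typescript
      // Plain-node equivalent of a simple "filter stdin" pipeline step.
      // All of this setup is what a basho-style chained expression avoids.
      import { createInterface } from "node:readline";

      const rl = createInterface({ input: process.stdin });

      rl.on("line", (line: string) => {
        if (line.includes("error")) {
          console.log(line.toUpperCase());
        }
      });
      ```

      You’d then run it with something like cat app.log | npx ts-node filter.ts, versus typing a single pipeline expression straight into the shell.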

    1. 3

      My Model M here has the label “Enter” on both the main area key and the numpad key. They both act identically unless I set up alternate remappings of these keys in games or my window manager. This article seems very Mac-specific. *shrug*

      1. 2

        It’s pretty well established that lines of code aren’t a good measure of progress.

        I don’t see why restating it is particularly noteworthy here.

        1. 1

          Yeah, it does seem to be a bit preaching-to-the-choir on a site like Lobsters.

        1. 5

          An interesting take. Glad to see Linux is still an option, and really surprising that perceived performance between KDE and Gnome has flipped.

          • Surprising to hear that there isn’t a Google Drive client on Linux (as I recall there used to be one); don’t many engineers at Google use “Goobuntu”? Perhaps they don’t open-source the client for public use.
          • I know that Steam works on both, do you find that your OS dictates what games you play most, or no?
          • OP didn’t mention screen quality or eyesight issues; curious whether there is a noticeable difference between the two, as I suspect there would be.
          1. 9

            Goobuntu (Ubuntu) was replaced by gLinux (Debian) a couple of years ago for maintainability reasons. They’re functionally the same though.

            The machines that we develop on are chosen for what we think gets the programming job done, not as an indication of the target platform.

            My guess is that the numbers were crunched and it was found that Linux users would not have made up enough share to warrant a client. I’ve never missed it; I do all my office work directly in the browser, and we have company-wide disk snapshotting for backup purposes. On my laptop (which isn’t snapshotted) I use rsync.

            1. 1

              Ahh interesting, thanks for the update.

              The machines that we develop on are chosen for what we think gets the programming job done, not as an indication of the target platform.

              Of course, but I’d imagine that some engineers would want to have native document sync with GDrive. I also use GDrive, but honestly found the syncing annoying when the usage flow is nearly always New tab > drive.google.com > search doc. But certainly someone on gLinux wanted to keep it? :shrug:

              What exactly are you rsync’ing against?

              1. 4

                Laptop (not snapshotted) > Desktop (snapshotted)

                But yeah, we just use the web interface for all docs writing stuff. For documentation (not documents), we have an internal Markdown renderer (think GitHub wiki with internal integrations). No one writes documents outside of a centralized system, and so has no need to back them up with a client.

            2. 6

              (I’m not OP) I recently started playing games on Linux via Steam. For reference, I’ve never been a Windows gamer – had been a console gamer up to that point. To answer your question:

              do you find that your OS dictates what games you play most, or no

              Pretty much. I play only what will work, so that means the game must either officially be supported under “Steam OS + Linux”, or work via Proton. But this is just me. Others are free to dual boot, which, of course, vastly broadens their spectrum of available games.

              1. 5

                I used to be a dual booter, but since Proton, so many games have been working on Linux that I stopped booting to Windows. Then at some point my windows installation broke and I never bothered to fix it.

                1. 3

                  That’s cool. However, I think we’re a ways off from totally being on par with native Windows. Several anti-cheat systems are triggered by running under Linux. And protondb shows that there are still many games that don’t run.

                  That said, things are improving steadily month by month, so that’s encouraging.

                  1. 2

                    That’s true, I didn’t mean to imply that all games I would like to play work on Proton now. But enough of them work now that instead of dealing with Windows for a game that doesn’t work on Proton, I usually just go and find something else that does.

                    If you have a group of gaming buddies, that obviously won’t work, though. It won’t be long before they get hooked up to a Windows-only game.

                2. 2

                  Same here; I find the biggest area where I need to switch back to Windows is for multiplayer. I used to LAN a lot and still have many of those contacts. I find that a lot of games with host/client multiplayer, for example RTS games, have issues on Linux even if the single-player works flawlessly. This means I have to keep dual boot available.

                  Even though Linux does strongly influence which games I play, the range and variety is amazing and it is not reducing the quality or diversity of games I play at all. There are just a few Windows-only titles that I might play slightly more if they were available on Linux.

                  While we are on the subject, what are people’s recommendations for a gaming distro? I am on Mint at the moment which is good, but I like to have options.

                  1. 1

                    I don’t know if I’d call it a gaming distro, but I have been using Gentoo for many years, and it seems to be doing just fine with Steam (which I just installed a couple months ago).

                    1. 1

                      Frankly, I’m not sure you need a gaming distro. I’ve had few issues running Steam and Wine (using Lutris) games on Void Linux, Debian, etc. (Mind you: always using Nvidia.)

                      1. 2

                        I actually phrased that really badly, thanks for the correction. I tried out a dedicated gaming distro and it was rubbish. Mint is a variation on Ubuntu. I was looking at Debian to try next.

                        It seems like the thing to look for is just something well supported with all the common libraries, so most big distros appear to be fine for gaming. The reason I am not entirely pleased with Mint is that they seem a bit too conservative in terms of adding new stuff to the package manager when it comes out. On the one hand that makes it more stable, but on the other games use a lot of weird stuff sometimes and it makes things a bit messy if you have to install things from outside the package manager.

                  2. 4

                    perceived performance between KDE and Gnome has flipped

                    Gnome Shell is huge and slow. A Canonical engineer (Ubuntu has switched from Unity to Gnome) has recently started to improve its performance with very good results but this also shows how terrible the performance was before: memory leaks, huge redraws all of the time and no clipping, … Now this needs to trickle down to users and the comments might change then.

                    PS: KDE has not gotten a lot more bloated or slower over the years, and I don’t know if Gnome will be faster and lighter or if both will be similar.

                    1. 2

                      The lack of a Google Drive client is shameful, but I tried Insync and it’s the best money I’ve ever spent on a Linux app. Much better than the Mac version of Google Drive, which was super buggy.

                    1. 2

                      A brief comparison from my perspective:

                      • FLOSS vs. not. This is my primary reason. I realize you can run FLOSS stuff on OSX, too, but I feel I’m supporting FLOSS more with Linux than with OSX.
                      • Linux (KDE) provides a much better day-to-day UI experience; window management, theming, customization
                      • With Linux, I (generally) don’t get updates force fed to me by the powers that be, compared to proprietary OSes like Windows and Mac.
                      • PC hardware is significantly cheaper for comparable performance (at least I assume that’s still the case).
                      • Macs have high-DPI displays, and that’s nice. I realize PCs can be high-DPI too, but, to date, all my computers have been 1080p at most.
                      • I really appreciate how physically solid Macbooks are. No real chance of warping, bending or cracking the case/frame. The hinge goes all the way across the device, and has withstood opening and closing for 4+ years now. In contrast, both of my PC laptops over the last 10ish years began to have problems due to general daily wear and tear of the hinge. My Thinkpad also has a slight crack in the case around one of the ports.
                      1. 5

                        Great article. I’m not likely to switch out of Apple’s ecosystem anytime soon, but it’s good to stay familiar with the upsides of your other options.

                        I think what separates me from other commenters here is that I don’t like to configure everything anymore, and I feel like macOS has the most complete default state / least effort to shape it into the environment I want. Some customizations are well worth it to me; I use the Dvorak layout, vim, a few menu bar apps like Karabiner, a syncing scheme for dotfiles, and per-project notes about any special setup needed (if I can’t just pack it all into Docker). But otherwise I’m using stock apps like Terminal, Mail, Safari, and Time Machine.

                        Every time I customize something major, I think later: was the benefit worth the cost of having to set this up again in the future to recreate this environment, or of not feeling at home on another machine? Do I need to make notes or will I remember? This process intensifies because I’ve got two Macs, one for work and one for home, so if I don’t like a customization enough to bring it to both, I’ll probably just undo it.

                        Something I will take away from this is the Docker performance difference. I’m going to see if I can make some improvements given where Docker for Mac is weak.

                        Oh, for what it’s worth, the new keyboards since the 16” are excellent. Thank god I managed to skip the butterfly generation.

                        1. 7

                          I think what separates me from other commenters here is that I don’t like to configure everything anymore, and I feel like macOS has the most complete default state / least effort to shape it into the environment I want. […] Every time I customize something major, I think later: was the benefit worth the cost of having to set this up again in the future to recreate this environment

                          I was a Mac desktop user from 2007 until two years ago. What really changed things for me was discovering NixOS. I don’t dread configuring my system anymore, since it is declaratively defined and I could switch to a new system and have it fully set up in just a few minutes. The same for servers, etc. Blow it away, have the same configuration within minutes.

                          Another large benefit is that I get fast machines for a fraction of the price, since the delta between “pro” Apple and non-Apple hardware has become very large. Last week I bought a new Ryzen 3700X machine with 32GB RAM and a Radeon RX580 for just north of 1000 Euro. A Mac Pro with roughly the same GeekBench scores, the same amount of memory, and a comparable GPU costs 6-7(!) times that here.

                          1. 3

                            a new Ryzen 3700X machine with 32GB RAM and a Radeon RX580

                            Wow, are you me? I recently got a new computer with exactly those things. If you say you got a Gigabyte motherboard, I’m going to call “conspiracy”. :)

                            1. 1

                              Nope MSI X570-A Pro ;). Though I did consider a Gigabyte B450-based mainboard.

                            2. 1

                              I like the concept of NixOS; thanks for the pointer. And yeah I’m not in the market for a Mac Pro probably ever. That’s not my kind of workload.

                          1. 7

                            LibreOffice is such a big, bloated project that it’s probably hard to find people interested in maintaining/developing it. It is by far the ebuild with the longest compilation time on Gentoo (even surpassing Firefox and Chromium), and even though I found a small GUI bug last week, I’m not tempted to submit a bug report for it, as I don’t feel up to debugging it or even writing a patch for it, and I hate submitting bugs without doing my homework.

                            Considering the topic at hand, open source development can work this way, and the Document Foundation should look into paid support contracts to support their work.

                            1. 6

                              (fellow Gentoo user here) Honestly, I find Chromium’s build time exceeds that of LibreOffice.

                            1. 29

                              Some years ago, I was arguing - on Hacker News, and on the W3C mailing list - against W3C endorsement of a DRM API.

                              I must admit that I’m impressed. Reddit seems to have discovered a use case for DRM that’s worse than those anticipated by myself and others in the anti-EME camp.

                              1. 2

                                Many argued against it, and yet it still passed.

                                Firefox, Chrome, Safari, Edge, Opera, and Vivaldi all include this DRM code.

                                This is one of the reasons why I don’t use them, except for testing.

                                Another major reason is that their UI is not even stuck in the past, but has actually regressed in usability and accessibility.

                                1. 2

                                  I don’t use them

                                  Well, that’s quite an expansive list. What do you use instead, then?

                                  1. 4

                                    I use several browsers day-to-day.

                                    For my default browser, which opens when I click a link in IRC, I use Links (GUI), because it’s extremely fast to open and has a relatively small attack surface.

                                    From a GUI perspective, I enjoy using SeaMonkey, especially for the traditional layout and feature-set of a browsing suite, including a WYSIWYG editor you can open the current page in.

                                    For most of my research/reading/reddit/lobsters/hn/etc browsing I use qutebrowser, which is by far the best browser I’ve ever used, and I’ve used many. It does have a learning curve, like many powerful tools, and it was worth the effort for me. With most sites, I can do everything without ever touching the mouse, which is important for my hands, arms, wrists, shoulders, and back.

                                    I also frequently browse with older classics such as IE3, IE6, Netscape 3, and Opera 3.x. I just find them enjoyable to use, pleasing aesthetically, and my forum works fine with them. I like Mosaic, but I’m still working out a few compatibility issues with that.

                                    There are many others I use on a semi-regular basis, just for testing and light browsing. Some of my favorites are NetSurf, OffByOne, Midori, Konqueror, w3m.

                                    There are literally hundreds of browsers out there, and they’re all worth trying. And if you don’t fuck up your HTML, most of them work pretty well.

                              1. 5

                                I hated the clutter of cables, and my tendency to unconsciously chew on them if they got anywhere near my face.

                                I am not sure if the author is reading these comments but maybe someone else can answer:

                                Is this the sum total of the objections to cables? I have always been a cable purist, and when I try to convince others that e.g. our current VoIP difficulties would be resolved by them switching to a cable, they never seem able to properly articulate what their problem with cables is.

                                1. 4

                                  My main problem with cables is that, in the specific case of headphones and [worn] mics, they need housekeeping to keep out of the way of my arms and hands (which are typing, mousing, or handling a game controller). Furthermore, when I put devices down on my desk, the cables cause a risk of something getting pulled off the desk and dropping to the floor. I’ve added some non-slip material to some areas of my desk to help, but still.

                                  Despite all that, my headphones and mic are wired, but if I could have the best of both worlds, it would be great.

                                  1. 3

                                    they need housekeeping to keep out of the way of my arms and hands

                                    I’ve never had this issue. The biggest problem to me is that my headphone cable will sometimes get stuck under my chair.

                                  2. 1

                                    I don’t use wireless stuff, but I don’t like having to deal with cable management. /shrug

                                  1. 24

                                    Cables may be untidy if unmaintained, yes, and they definitely harm flexibility in some (or even many) cases. Similar to large open plan offices, it’s form over function though.

                                    A brief tour of the Bluetooth spec (and corresponding FOSS implementations) will give you a few grey hairs, I’m certain of that; there is so much cruft that I’m surprised anyone has made it work at all.

                                    Bluetooth, when it works, is “decent” but cables are ‘decent-er’.

                                    Headphone cables in particular, though, have their own reliability problems (one-ear cutouts and cable “kinks” which kill connections), but nothing is more frustrating than having a pair of headphones that just won’t link to your phone or laptop, or having cutouts as you’re walking.

                                    WiFi has its faults, but it’s more reliable than Bluetooth by a wide margin. Running Ethernet cables is hard, but once it’s done it’s kinda done; computers don’t move too much, and even if you have a laptop it’s usually fairly static while working, in my experience.

                                    Cabling everything (PlayStation, TV, laptops) leaves more of the spectrum for your mobile devices anyway, so it’s a little win regardless.

                                    1. 10

                                      I partially agree here, but focusing on the headphone/IEM part, I find that what makes cables a real problem in terms of management is simply that, for most of those products, they are not removable. When you move toward more high-end or specialized HiFi products, you find removable cables (and a crazy after-market for those…). If more products had come with removable cables, the wireless advantages would have been less impactful (IMHO, yada yada) for headphones/IEMs.

                                      And cable management is clearly an art form when you have some constraints of space or placement of electrical sources.

                                      Bluetooth is a mess, and the more codecs and revisions you pile on it, the more of a mess it is, but the “look ma, no cables” wow factor had the upper hand for the end-user/consumer.

                                      1. 3

                                        Headphone cables in particular, though, have their own reliability problems

                                        While it’s actually quite cheap to make them well, it’s not quite as cheap as making them poorly, meaning that only high end gear tends to get the extra 10c added to the BOM. Consumers just don’t seem to buy for reliability in this (and many other) spaces.

                                        1. 19

                                          In my experience I’d love to buy for reliability, but it’s often so damn hard to find. These things wear out over years, and there’s so much $200 junk on the market that’s no better than the $20 junk. I use headphones a lot, and a couple times I tried to upscale from $20 ones to $80 ones that came well reviewed… and they broke just as quickly as the cheapo ones. So for years I just bought the same $20 headset that was comfy and had good sound quality, and expected to replace it every 1-2 years. It worked pretty well, until that model was discontinued. It wasn’t until recently that I decided to branch out and try again, and I now have a pair of headphones that actually comes with a parts list for ordering replacement components. It doesn’t have a microphone though, so… I’m back to trial and error searching for headsets. I’ll let you know in 5 years how it’s going.

                                          1. 12

                                            The long cycle time for getting information about reliability is a big part of the issue IMO; by the time you find out that a brand is reliable, there’s been a hostile takeover and most of the staff who made it good are gone.

                                            1. 7

                                              One of the problems is that a lot of reviews are written by professional reviewers who looked at it for 10 minutes and wrote down their initial impressions about it. There are exceptions, but a lot of reviews are just shallow and written by people who have never actually used the product. This isn’t necessarily completely without value if done well, but especially stuff like long-term reliability isn’t usually addressed.

                                              1. 2

                                                I had to wade through a lot of these buying a DSLR recently. Most of the reviews were nothing more than padded spec sheets.

                                              2. 6

                                                … and there’s so much $200 junk on the market that’s no better than the $20 junk.

                                                I absolutely agree and in fact this study underpins this statement (at least from the point of view of sound reproduction quality): No correlation between headphone frequency response and retail price

                                                PS: Sorry for being a bit off-topic.

                                                1. 1

                                                  The reason that there is no correlation between frequency response and price is that frequency response isn’t what you are paying for unless you are buying studio headphones (and Beats Studio or other gimmicky headphones don’t count), aside from upper/lower bounds changing slightly as you pay more, in ways that people will debate the value of until the end of time.

                                                  The price goes to tonal quality, dynamic range, and the sound stage provided in the audio among various other features. These things aren’t measured in this study.

                                                2. 5

                                                  I’m surprised you could find $20 headphones that sound as good as what you could get for $80.

                                                  Re: lack of mic: I buy my headphones and mic separately, with the mindset of having each do their own single responsibility well. Then I have a splitter (“combiner”?) that I plug both into, that lets me plug the pair into a standard stereo+mic jack. The obvious con is that I have more wires dangling on me, but I live with that.

                                                  1. 9

                                                    I’m sure they didn’t sound as good as what I could get for $80, but they sounded about as good as what I did get for $80. :-P

                                                    1. 3

                                                      Yeah, even with headphones that have a mic built-in, half the time they cheap out and omit the hardware mute, so you’re stuck fumbling for the software mute like some kind of chump. With a hardware mic you can pick out something decent that can be controlled on its own.

                                                      1. 1

                                                        I have a huge collection of headphones and my experience is that most headphones between $20-$60 sound about the same and you’re just paying for a brand name. That metric moves up to $40-$110 or so for wireless. Most $20 wireless headphones hardly even work, but the ones that do are probably comparable? It varies a lot more from $120-300, then up to about $700 it narrows a bit until about the $2000 range where things vary in really cool ways =^.^=

                                                        …or in other words, you can buy a $40 or $80 headphone and they’re probably about the same, and a lot of $20 sound like $80 ones in the wireless world because that $80 markup went to brand & maybe better bluetooth hardware, but likely not better audio.

                                                    2. 7

                                                      I found the perfect solution to this. Some brands make headphones that have a 3.5mm jack in them and just come with a double-ended cable. If the cable degrades, which even a good one will after a couple of years of constant use, you can replace it at very low cost. Additionally, even a cheap 3.5mm cable is usually higher quality than the average built-in headphone cable for some reason. When I get headphones with replaceable cables, the cable seems to degrade much slower.

                                                      Now I have to figure out a way to replace/conserve the padding/upholstery which seems to disintegrate after a couple of years.

                                                      1. 3

                                                        Some brands make headphones that have a 3.5mm jack in them and just come with a double ended cable.

                                                        Yup. After having to replace the jack on a fixed-cable pair of headphones multiple times, I decided to never buy fixed-cable headphones again.

                                                        Additionally even a cheap 3.5mm cable is usually higher quality than the average built in headphone cable for some reason. When I get headphones with replaceable cables the cable seems to degrade much slower.

                                                        Also my experience. I’ve had my current pair of headphones for around 3 years, and the cable is still working just fine.

                                                        Now I have to figure out a way to replace/conserve the padding/upholstery which seems to disintegrate after a couple of years.

                                                        The earphone pads are replaceable. I’m a big fan of memory foam ones, which are much less fatiguing to wear for long stretches of time.

                                                  1. 3

                                                    What about performing sentiment analysis on all those commit logs?
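
                                                    Something like this toy sketch, maybe; the word lists are entirely made up, and a real attempt would want an actual sentiment model rather than keyword counting:

                                                    ```typescript
                                                    // Toy sentiment scoring of commit subjects. Run from inside a git checkout.
                                                    // The word lists below are placeholders, purely for illustration.
                                                    import { execSync } from "node:child_process";

                                                    const NEGATIVE = ["fix", "bug", "broken", "revert", "hack", "ugh"];
                                                    const POSITIVE = ["add", "improve", "clean", "refactor", "finally"];

                                                    const subjects = execSync("git log --format=%s", { encoding: "utf8" })
                                                      .split("\n")
                                                      .filter((s) => s.length > 0);

                                                    for (const subject of subjects) {
                                                      const words = subject.toLowerCase().split(/\W+/);
                                                      const score =
                                                        words.filter((w) => POSITIVE.includes(w)).length -
                                                        words.filter((w) => NEGATIVE.includes(w)).length;
                                                      console.log(`${score >= 0 ? "+" : ""}${score}\t${subject}`);
                                                    }
                                                    ```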

                                                    1. 2

                                                      Haha I’d think most commit messages would be a little too dry/robotic to get a good sentiment reading on? What do you think?

                                                      1. 2

                                                        I’m reminded of once reading about a team that set up their computers to take a webcam snapshot of the developer’s expression (face) when a git conflict occurred.

                                                        1. 1

                                                          Well… I was bored the other day which led me to just posting this… https://lobste.rs/s/0zxoap/suggested_improvements_for_tool

                                                        1. 6

                                                          I would think most of us went the other way.

                                                          1. 4

                                                            I use both Linux and OSX every [business] day. I guess maybe I have strange needs and standards or something, but I find the UX not even close. I wouldn’t be able to stand having OSX as my daily driver knowing I could be on Linux. I find KDE (with just the default window manager) much more comfortable and nice to work with. Having single-key access to multiple desktops is a must-have for me. Command-Tab-ing around my windows is just too 20th century for me. Plus the ability to specify extremely detailed per-window and per-application window behaviours and appearances (in KDE).

                                                          1. 13

                                                            It’s missing a crucial point:

                                                            • Be ready to give up good HiDPI screen support

                                                                I have a 4K monitor because it’s much more enjoyable to use for development, but it becomes a pure nightmare on Linux. I ended up installing a Hackintosh.

                                                            1. 8

                                                              Not necessarily. You can look at this as purely anecdotal, but my Dell Precision 5510, which I’ve had for about four years, has a HiDPI display on par with what I have on the MBP I use for work. Maybe this is down to the fact that I run Ubuntu on it, but I’m just using stock Intel and Nvidia drivers on it, depending on my use case.

                                                              1. 3

                                                                Just curious, what makes it a nightmare?

                                                                1. 10

                                                                  My most recent encounter with this is fractional scaling:

                                                                      I have a 27’’ 4K monitor. I find that at this resolution, 1.5x scaling works best, as 2x is too big and 1x is too small. With macOS or even Windows, this is not a problem at all; with Linux it’s a can of worms:

                                                                      • Xorg doesn’t natively support fractional scaling; instead you have to rely on hacks or only scaling the fonts (which, quite frankly, looks like shit). I never managed to make any of those hacks work reliably and consistently in all apps; eventually you’ll open that one app that uses Qt and you’ll need yet another hack.

                                                                  • Wayland does support fractional scaling, too bad that not all apps support Wayland, most notably Firefox. I tried running Firefox with the experimental Wayland backend with fractional scaling on, and everything looked blurry, it’s just not there yet.

                                                                  Now, I’m sure for all of those issues there are 10 different workarounds to try and things to tweak to make things better, but I can’t be bothered to do any of that when in macOS (or even Windows) it Just Works™.

                                                                  1. 7

                                                                    Just the fact that the ArchWiki page on HiDPI has a comprehensive list of required hacks is another example of what I’m referring to.

                                                                    1. 3

                                                                      Have you tried with KDE? I’m curious because I’m planning on using HiDPI with 1.25x scaling.

                                                                      1. 2

                                                                        Yes, with KDE it’s slightly better but still not as good or smooth as Windows and macOS in my opinion.

                                                                      2. 2

                                                                        Latest versions of Gnome in Ubuntu 20 have fractional scaling and it’s been set and forget for me.

                                                                        1. 2

                                                                              Ubuntu’s fractional scaling uses significantly more CPU power compared to 2x scaling.

                                                                    2. 3

                                                                          I tried switching to Ubuntu recently (not for the first time) and ended up going the Hackintosh route, too. For me the final straw was not being able to adjust mouse wheel scroll settings in a way that would work everywhere and didn’t seem to come with caveats or be labelled as a hack.

                                                                      Setting up a Hackintosh certainly wasn’t without its hassles, but having got there I’m very happy with it. I also have a Macbook and an iPhone, so that is another motivation to stay in the Apple camp (vendor lock-in, I guess?)

                                                                      1. 1

                                                                        I’m surprised you had mouse wheel woes. Do you have a special or fancy mouse or something? Or want very specific wheel behaviour?

                                                                        1. 1

                                                                              I just have a normal mouse, and TBH I’m a little fuzzy on what happened now. It could very well have been that if I had been using Gnome (or KDE, whichever one I wasn’t using) then it wouldn’t have been a problem at all, but what I really wanted was macOS anyway, so I just did the Hackintosh thing instead. I kind of took “well I can’t get this basic thing to work right” as an omen.

                                                                      2. 3

                                                                        I use Xubuntu 20.04 on my X1 carbon and desktop with a 27” 4K monitor. Both work totally fine with HiDPI. The main issue I’ve had was when I plugged my laptop into non-HiDPI monitors, I had to lower the DPI for the monitors and that made everything on the laptop small.

                                                                        1. 1

                                                                          The distros the article mentions are ElementaryOS, Pop!_OS, and Fedora Workstation. I believe all of those have had great native HiDPI support “out of the box” for a few years now.

                                                                          I totally get not loving the aesthetic of GNOME or whatever, especially bumped up at x2, but I think it’s cool that Linux lets us customize everything end to end to our heart’s desire. If we’re going for a more custom setup, totally agree that it can require doing some custom tweaks to get a good HiDPI experience and it won’t be completely automagic.

                                                                          I don’t think it’s fair to say it’s all a pure nightmare on Linux as a whole. Beginners can have a good experience, experts can have a good experience, crossing that valley can be painful.

                                                                          Also worth noting that most of the ArchWiki page on HiDPI refers to outdated workarounds that are no longer required, though not all.

                                                                          1. 0

                                                                            Honestly, yeah. I purposely bought a normal DPI display just to avoid this pain. It’s never gonna be perfect. Apps aren’t gonna scale right, even with fractional scaling. Other apps might scale right, but be blurry. It’s gonna drive you crazy.

                                                                            Just get over the loss aversion, buy a good normal DPI monitor, and get back to work. I’m 100% satisfied with normal DPI.

                                                                          1. 28

                                                                            I’m not meaning to dissuade the SourceHut developer(s) from using GraphQL, but I am still on the fence about GraphQL, having used it at $employer for, oh, 4-ish months now. GraphQL is a shift in thinking, and there are sometimes several ways to provide data for a given business need.

                                                                            In any system with models and DB tables, some tables/classes/types have more than one link to them (in an ERD). For example, an Article could have an Author, but it could also be in a MagazineIssue. When you have multiple pathways to reach a given entity, you (as the developer of a given use case, web page, whatever) have to decide which path you’re going to code in your query. Carrying on with the example schema: It could be currentUser -> articlesWritten; or it could be currentUser -> magazinesSubscribedTo -> issues -> articles. This kind of multiplicity of paths increases the complexity of the system, and makes the system more challenging to work with.

                                                                             Authorization with GraphQL is quite non-trivial. Suppose you had: Authors organize with Folders, in which they put Articles; and then Readers read Articles. A simple schema would be to have this hierarchy: authors -> folders -> articles. Folders would have-many articles, and articles would belong-to folders. But if you have a query that needs to get articles for a reader to read and you stick with this simple schema, then you have to include folders in the query. But readers should not have access to or knowledge about the folder the author has put the article in. Or maybe the reader doesn’t care about who wrote what, and wants to get articles on a given topic or tag, like “politics”. Worse yet: a reader [UI] should not be able to query “upstream” and get at article -> folder -> author -> author.email. “So don’t put upstream association links in your schema,” you might say. But is it so simple? What if a pair of entities are not in a simple, obvious hierarchy where one is “higher” than the other? Say, a Person model with a friendsWith association.
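
                                                                             To make that concrete, here is roughly the kind of per-field guard you end up hand-writing in a resolver map ((parent, args, context)-style resolvers, as in most JS GraphQL servers; the Viewer/Folder/Article shapes and the data-layer helper are hypothetical):

                                                                             ```typescript
                                                                             // Hypothetical shapes standing in for the authors -> folders -> articles schema above.
                                                                             interface Viewer { id: string; }
                                                                             interface Context { viewer: Viewer; }
                                                                             interface Folder { id: string; name: string; authorId: string; }
                                                                             interface Article { id: string; title: string; folderId: string; authorId: string; }

                                                                             // Stand-in for a real data-layer query.
                                                                             async function lookupFolder(id: string): Promise<Folder | null> {
                                                                               return null;
                                                                             }

                                                                             // Resolver-map sketch. Nothing in GraphQL itself forces these checks;
                                                                             // forget one and a reader can walk article -> folder -> author -> email.
                                                                             export const resolvers = {
                                                                               Article: {
                                                                                 folder: async (article: Article, _args: unknown, ctx: Context): Promise<Folder | null> => {
                                                                                   // Readers should not see the author's folder structure at all.
                                                                                   if (ctx.viewer.id !== article.authorId) return null;
                                                                                   return lookupFolder(article.folderId);
                                                                                 },
                                                                               },
                                                                               Author: {
                                                                                 email: (author: { id: string; email: string }, _args: unknown, ctx: Context): string | null =>
                                                                                   ctx.viewer.id === author.id ? author.email : null,
                                                                               },
                                                                             };
                                                                             ```

                                                                             Every additional path to an entity is another place where a guard like this has to exist and stay consistent.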

                                                                            I grant that there may be some correct way(s) to go about things to deal with issues like the above, but GraphQL doesn’t come with guardrails to prevent problematic schemas, and I think it’s not only possible, but somewhat likely for a team new to GraphQL to plant footguns and landmines in parts of their schema.

                                                                            So, yeah. I’m still on the fence about this GraphQL thing.

                                                                            1. 24

                                                                              Indeed. At GitHub we had a VP who was All About GraphQL – we GraphQL’d everything, over a period of a couple years. By the time I left, there were RFCs circulating about how we’d undo all the damage done :|

                                                                              1. 23

                                                                                Could you describe some of the damage which had been done? What did the push for GraphQL break?

                                                                                1. 6

                                                                                  I’m curious about this as well.

                                                                                2. 18

                                                                                  Thanks for these comments. I would appreciate more specific feedback, if you have the time to provide some. You can use our equivalent of the GraphQL playground here:

                                                                                  https://git.sr.ht/graphql

                                                                                  Expand the text at the bottom to see the whole schema for git.sr.ht. This went through many design revisions, and I’m personally pretty satisfied with the results.

                                                                                  Also, specifically on the subject of authorization, the library we’re using provides good primitives for locking down access to specific paths under specific conditions, and combined with a re-application of some techniques we developed in Python, this is basically a non-problem for us. I’m still working on an improved approach to authentication, but none of the open questions have anything to do with GraphQL.

                                                                                  1. 2

                                                                                    Could I ask about this part?

                                                                                    GraphQL does not solve many of the problems I would have hoped it would solve. It does not represent, in my opinion, the ultimate answer to the question of how we build a good web API.

                                                                                    Which additional problems would you like to see solved? In which direction might the ultimate answer lie?

                                                                                    1. 9

                                                                                      I don’t think it solves pagination well. It should have first-class primitives for this. That’s the main issue.

                                                                                  2. 9

                                                                                    Same here, except we’ve had lackluster success with GraphQL for 18-ish months instead of 4.

                                                                                    I could see it being a big step up for data models that are oriented primarily around a graph, but I’m having a really hard time seeing how sourcehut’s would work this way, outside of the git commits themselves. I’d be interested in reading more about the dissatisfaction with REST, because everything we’ve done so far with GraphQL could have been done better with REST along with something like JSON-schema.

                                                                                    1. 2

                                                                                       For small projects, I’ve started just using JSON-RPC (what’s old is new again!) with a well-defined schema (with validation). A nice side effect is that the schema gets used as part of the documentation generation. It saves lots of time noodling around with REST. I probably wouldn’t recommend it for large projects though, as it is a bit harder to version APIs (a bit more “client fragile”).
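
                                                                                       A minimal sketch of what I mean; the method name, params shape, and hand-rolled validator are hypothetical, and in practice the same schema definitions would also feed the generated docs:

                                                                                       ```typescript
                                                                                       // Minimal JSON-RPC 2.0 request/response shapes plus a hand-rolled params validator.
                                                                                       interface JsonRpcRequest {
                                                                                         jsonrpc: "2.0";
                                                                                         method: string;
                                                                                         params?: unknown;
                                                                                         id: number | string;
                                                                                       }

                                                                                       interface JsonRpcResponse {
                                                                                         jsonrpc: "2.0";
                                                                                         result?: unknown;
                                                                                         error?: { code: number; message: string };
                                                                                         id: number | string;
                                                                                       }

                                                                                       // The "schema" for one method's params; this doubles as documentation.
                                                                                       function isCreateArticleParams(p: unknown): p is { title: string; body: string } {
                                                                                         const o = p as Record<string, unknown> | null;
                                                                                         return typeof o === "object" && o !== null &&
                                                                                           typeof o.title === "string" && typeof o.body === "string";
                                                                                       }

                                                                                       export function handle(req: JsonRpcRequest): JsonRpcResponse {
                                                                                         if (req.method !== "createArticle") {
                                                                                           // -32601 is the standard JSON-RPC "Method not found" error code.
                                                                                           return { jsonrpc: "2.0", error: { code: -32601, message: "Method not found" }, id: req.id };
                                                                                         }
                                                                                         if (!isCreateArticleParams(req.params)) {
                                                                                           // -32602 is the standard JSON-RPC "Invalid params" error code.
                                                                                           return { jsonrpc: "2.0", error: { code: -32602, message: "Invalid params" }, id: req.id };
                                                                                         }
                                                                                         return { jsonrpc: "2.0", result: { ok: true }, id: req.id };
                                                                                       }
                                                                                       ```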

                                                                                      I’ve used GraphQL on a few projects, and I thought it was ok. It certainly has its own set of problems.

                                                                                    2. 4

                                                                                       Caching and pagination are also issues I’ve seen with GraphQL, as well as many, many failures around n+1 queries.

                                                                                      1. 1

                                                                                        About the n+1 problem:

                                                                                        Disclaimer: I have only dabbled with graphql and never used it professionally.

                                                                                        Let’s compare this to a REST API (or what would be a better reference point?). In a REST API, you also have the n+1 problem if your entities refer to other entities.

                                                                                        It is just more common to solve this on the client rather than on the server – which is worse due to higher roundtrips. Alternatively, you provide a denormalized view in your API that provides all data for, let’s say a screen or page, on the server. That means that you have to change your server in lockstep with the client – and also worry about versioning. In the server code you can either use a generalized solution (much like with GraphQL) or a specialized version because it only has to work with this one query. In my opinion, a generalized version is also usually the preferred option but it is nice to have the option for hand-optimization.

                                                                                        Therefore I think that solving the n+1 problem in GraphQL requires some thought - but also in a REST API. There are good enough solutions. On one hand, since you have more information in your schema, you actually have some more options than in REST. On the other hand, REST is more constrained so you can work with more specialized solutions.
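
                                                                                         For what it’s worth, the usual mitigation I’ve seen on the GraphQL side is per-request batching, e.g. with the dataloader npm package. A rough sketch (the Author shape and the batched fetch are hypothetical):

                                                                                         ```typescript
                                                                                         import DataLoader from "dataloader";

                                                                                         interface Author { id: string; name: string; }

                                                                                         // Stand-in for a single "WHERE id IN (...)" query against the real data store.
                                                                                         async function fetchAuthorsByIds(ids: readonly string[]): Promise<Author[]> {
                                                                                           return ids.map((id) => ({ id, name: `author-${id}` }));
                                                                                         }

                                                                                         // One loader per request: every authorLoader.load(id) call made while resolving
                                                                                         // a single query gets collected and issued as one batched fetch, not N queries.
                                                                                         export function makeAuthorLoader() {
                                                                                           return new DataLoader<string, Author | undefined>(async (ids) => {
                                                                                             const authors = await fetchAuthorsByIds(ids);
                                                                                             const byId = new Map(authors.map((a): [string, Author] => [a.id, a]));
                                                                                             // DataLoader expects results in the same order as the requested keys.
                                                                                             return ids.map((id) => byId.get(id));
                                                                                           });
                                                                                         }

                                                                                         // In a resolver: ctx.authorLoader.load(article.authorId) instead of a per-article query.
                                                                                         ```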

                                                                                        Am I missing something important?

                                                                                        1. 1

                                                                                           Pagination is not really a problem; you just have to make a type specifically for pagination.

                                                                                          So for an Article you would have an ArticleConnection containing… yeah just use REST it’s way more sane.
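
                                                                                           For anyone who hasn’t run into it, the boilerplate I’m trailing off from looks roughly like this (the common Relay-style connection convention; the Article type itself is hypothetical):

                                                                                           ```typescript
                                                                                           // The usual Relay-style cursor-pagination wrapping around a hypothetical Article type.
                                                                                           interface Article { id: string; title: string; }

                                                                                           interface PageInfo {
                                                                                             hasNextPage: boolean;
                                                                                             hasPreviousPage: boolean;
                                                                                             startCursor: string | null;
                                                                                             endCursor: string | null;
                                                                                           }

                                                                                           interface ArticleEdge {
                                                                                             cursor: string; // opaque position marker used for "after"/"before" arguments
                                                                                             node: Article;
                                                                                           }

                                                                                           interface ArticleConnection {
                                                                                             edges: ArticleEdge[];
                                                                                             pageInfo: PageInfo;
                                                                                             totalCount?: number; // not part of the convention, but commonly bolted on
                                                                                           }
                                                                                           ```

                                                                                           And then every paginated list in the schema repeats that pattern.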

                                                                                          1. 4

                                                                                            One thing to consider is that “just have to make a type” relies on the fact that GraphQL fields can have parameters. This introduces a challenge for caching, since those parameters become part of your cache key. If you’re trying to take advantage of a normalizing graph cache on your client, the cache engine needs to be aware of the pagination primitives.

                                                                                             Not advocating for REST, since you have a similar problem there. However, REST makes some static decisions about what fields are included, and so the cache key is reasonably a URL instead of a GraphQL AST + variables map. If you decide you don’t want a normalizing cache, but do want a naive document cache à la REST, managed by the browser, you can’t get that unless you have named queries stored on the server. Then you can query for /query/someName?x=1&y=2 sort of thing.

                                                                                            I guess my point is that nothing with GraphQL is “just” anything. You’re discarding a lot of builtin stuff from the browser and have to recreate it yourself through cooperation of client JavaScript and specialized server code. Whether or not that’s worthwhile really depends on what you’re building.

                                                                                        2. 4

                                                                                          When you have multiple pathways to reach a given entity, you (as the developer of a given use case, web page, whatever) have to decide which path you’re going to code in your query

                                                                                          Why is this a bad thing? Presumably you always fetch articles using the same ArticleResolver, this doesn’t take any extra effort to support on the backend. And the multiplicity of paths is the nature of graphs! I don’t see why it makes the front-end more complicated.

                                                                                          Authorization with GraphQL is quite non-trivial

                                                                                           The best advice I’ve seen for this is: don’t do authorization in GraphQL. You have some system which accepts and resolves GraphQL queries; it fetches data from other systems, and those other systems should be doing the authorization. I haven’t had a chance to apply this advice in practice, and I think that for some use-cases Postgres row-level security would do wonders here, but it seems reasonable to me and it would solve most of your complaints. If your GraphQL resolver can’t even see author.email, then it’s no problem at all to include that link in your schema; it’ll only resolve if the user is allowed to see that email.

                                                                                          1. 3

                                                                                             The best advice I’ve seen for this is: don’t do authorization in GraphQL. You have some system which accepts and resolves GraphQL queries; it fetches data from other systems, and those other systems should be doing the authorization. I haven’t had a chance to apply this advice in practice, and I think that for some use-cases Postgres row-level security would do wonders here, but it seems reasonable to me and it would solve most of your complaints. If your GraphQL resolver can’t even see author.email, then it’s no problem at all to include that link in your schema; it’ll only resolve if the user is allowed to see that email.

                                                                                            “just not resolve”: But how does that actually play out in practice?

                                                                                            Your query which requested 40 fields in a hierarchical structure gets 34 values back – and just has to deal with it. Your frontend might have Typescript types corresponding to some models in your system. So then some places, which query under certain contexts, get 38 values back, other places get 34, and so on. So does your Typescript now just have to have everything marked with a ? suffix (not required, null is permissible)? Suppose you have an Angular component which takes an Article type prop, or gets it from NgRX, or wherever. Now, your component cannot just populate its HTML template with all known fields of the Article type, like author name, article title, date published, etc. Sometimes that data will just not be there, “because auth”. So will it have to have *ngIf all over the place? Or just have {{ someField }} interpolations peppered throughout that just render as a blank space – sometimes? Should you have a different Angular component for each context that this type appears?

                                                                                            And this is all just talking about a simple single defined GraphQL type and its fields. Things get even more fun when you have 3+ levels of hierarchy, and arrays of results. What if the model hierarchy is A -> B -> C, and your currentUser has access to the queried A, and the queried C, but not the queried B? Or, because of lack of authorization, it only has access to a subset of the array of values of a given field. So, your frontend now displays a partial set of results – without writing any error in your error log. Or maybe an auth error on an array field makes it return no records at all. Or maybe a null instead of []. So Engineer Alice is expecting to see 10 records, just like she saw on Engineer Bob’s screen when they were pairing yesterday. But today, when she goes to work in her branch, she only sees 7 – and cannot easily see why. “Well, your devs are supposed to know your (whole) schema”, we might say. That’s ideal, but in a team beyond a certain size, that can’t be assured (and, beyond a certain size after that, shouldn’t be expected).

                                                                                            “Okay, so don’t make auth problems silent, make them noisy” we might say. Yes, I agree. So how noisy should these be? Should a single auth problem, in one single node in a 7-level hierarchy of 82 fields in a GraphQL query cause the entire query to return non-200? Teams have to decide about that, and some teams won’t agree to that approach. For example, at $current_employer, our (non-unanimous) agreement at the moment is to write to error logs, dispatch to $third_party_error_service, but return null and return success (200) for the query. But, in so doing, we are still having some engineers come into the Slack channel(s) and reach out for help because they don’t understand why their GraphQL isn’t working any more, or why such and such page used to return whatever yesterday, but is returning something else today.

                                                                                            I’m just trying to point out that GraphQL isn’t problem-free. There are warts and challenges that need to be acknowledged and addressed before success can be had.

                                                                                            1. 6

                                                                                              I put my comment out there knowing it was wrong and hoping someone would come along to tell me what I was wrong about, but I don’t think this particular criticism holds water.

                                                                                               You’re pointing out that if the frontend tries to access data it may or may not be authorized to see, it… may or may not see it. The ? is essential complexity that is no fault of GraphQL; REST will have the same problem.

                                                                                              I think you’re arguing that:

                                                                                              • In GraphQL all of your data fetches happen at once, so it’s not possible to 403 just the missing pieces, or feasible to 403 the whole request. We agree here!
                                                                                              • It’s not feasible to expect every engineer to know how the schema works. I agree, and think that’s exactly why types were invented. If User.email is nullable then engineers using User should expect that there will sometimes not be an email. Going further, you might want to make some types self-documenting. If it’s expected they’ll often fail, due to lack of authorization, you might return a union type: ViewableUser | CloakedUser.
                                                                                              • Silently not returning data you can’t see is a recipe for confusion. That’s sometimes true, I agree! However, say you’re writing a task manager and the query asks for a list of all tasks. There, it’s pretty intuitive that the query would only return the tasks you’re allowed to see. I would expect that in most situations there’s an intuitive solution, authorization is a concept that we’ve all spent a lot of time getting used to.

                                                                                              There’s another technique you might use, which is to return the errors inline with the query. Return both the data and also the errors. If some field fails to resolve you might leave it null and also bundle an error in with the response explaining why the field is null.
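
                                                                                              GraphQL’s standard response shape already accommodates this: partial data plus an errors array whose entries can carry a path pointing at the field that failed to resolve. A sketch (field names hypothetical):

                                                                                              {
                                                                                                "data": {
                                                                                                  "user": { "id": "1", "email": null }
                                                                                                },
                                                                                                "errors": [
                                                                                                  {
                                                                                                    "message": "Not authorized to read User.email",
                                                                                                    "path": ["user", "email"]
                                                                                                  }
                                                                                                ]
                                                                                              }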

                                                                                              Which of these techniques you go for will depend on the situation, and probably there are some situations where none of them are satisfying, but I don’t think it’s reasonable to expect any technology to be problem-free. You’re holding GraphQL up to a very high standard here!

                                                                                              1. 4

                                                                                                Context: After dabbling, I think that GraphQL is usually better than a REST API – and I want to find out if I am wrong. I am fine if it doesn’t solve all problems, but it shouldn’t make things worse.

                                                                                                Back to your comment: I think it is normal that your data schema has to take authorization constraints into consideration. Agreed, that does not always make it easy.

                                                                                                If you cannot always return data, you have to make it optional. Most likely, you don’t have to do this field by field; you can put the restricted fields into their own sub-object.
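
                                                                                                For example, a hypothetical schema sketch that groups the restricted fields under a single nullable sub-object instead of making each field optional on its own:

                                                                                                type User {
                                                                                                  id: ID!
                                                                                                  name: String!
                                                                                                  # Everything gated by authorization lives here; the whole object
                                                                                                  # comes back null when the viewer is not allowed to see it.
                                                                                                  privateDetails: UserPrivateDetails
                                                                                                }
                                                                                                
                                                                                                type UserPrivateDetails {
                                                                                                  email: String!
                                                                                                  phone: String!
                                                                                                }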

                                                                                                This is also what you would need to do if you used a REST API with a schema. Or not?

                                                                                                Just an idea: If you want to make authorization restrictions more obvious, maybe you can use union types instead of optional fields? Similar to Either/Try in functional languages or Result in Rust, you could either provide the result or the error that prevented it. Or, instead of an error, a restricted view… Then clients that run into trouble see the reason directly in their result, but the schema gets more complicated.

                                                                                                1. 1

                                                                                                  If you cannot always return data, you have to make it optional. Most likely, you don’t have to do this field by field; you can put the restricted fields into their own sub-object.

                                                                                                  This is also what you would need to do if you used a REST API with a schema. Or not?

                                                                                                  Just an idea: If you want to make authorization restrictions more obvious, maybe you can use union types instead of optional fields? Similar to Either/Try in functional languages or Result in Rust, you could either provide the result or the error that prevented it. Or, instead of an error, a restricted view… Then clients that run into trouble see the reason directly in their result, but the schema gets more complicated.

                                                                                                  Frankly, I would have preferred that auth errors be noisy and fatal, making developers immediately aware of issues before things get to production. The counterargument from my team, though, was that they did not want to provide information to malicious actors about unauthorized things. So the decision was to make auth errors return 200s, and poke null holes in the payload fields, and/or snip subtrees out of the payload.

                                                                                                  Context: After dabbling, I think that GraphQL is usually better than a REST API – and I want to find out if I am wrong. I am fine if it doesn’t solve all problems, but it shouldn’t make things worse.

                                                                                                  Indeed, it’s much the same for me. I don’t have any big personal problem with GraphQL. I just try to assess things objectively, and avoid getting pulled along with hype waves without reason. I just haven’t had an overall positive experience yet with GraphQL. It’s been plus-and-minus.

                                                                                                  1. 1

                                                                                                    Thank you for your thoughtful replies :)

                                                                                                    The counterargument from my team, though, was that they did not want to provide information to malicious actors about unauthorized things.

                                                                                                    It is hard to judge without your schema, but that sounds slightly weird. An attacker potentially gets more information from which fields are missing than from a generic auth-denied error. But even if that’s not true, it sounds like security by obscurity to me. [But yes, I am frequently wrong, and maybe there is a good reason for this in your case ;)]

                                                                                                    1. 1

                                                                                                      The counterargument from my team, though, was that they did not want to provide information to malicious actors about unauthorized things.

                                                                                                      It is hard to judge without your schema, but that sounds slightly weird. An attacker potentially gets more information from which fields are missing than from a generic auth-denied error. But even if that’s not true, it sounds like security by obscurity to me. [But yes, I am frequently wrong, and maybe there is a good reason for this in your case ;)]

                                                                                                      Your point stands, but the nuance here is more like this: if someone doesn’t have access to X and you present it as simply missing, they can’t infer its existence by trial and error, i.e. by seeing what gives an error vs. what gives an absence.

                                                                                            2. 1

                                                                                              Can you elaborate on how you would solve the “graph” thing better with other API designs? E.g., in a REST design you would have the same potential pathways that you could use for querying, just in separate requests?

                                                                                              1. 2

                                                                                                I don’t know if it’s better, but what I do is this: I actually am pretty far from a REST purist, insofar as I do not strictly adhere to a 1:1 mapping between the object hierarchy and the REST endpoints. Instead, I prefer steering towards having endpoints 1:1 with frontend pages (for GETs, at least). So, where there are multiple pathways, my endpoints give only the information needed for the page. For the previous example, a Reader needing Articles would only get an array of Articles back, and would not have any Folders or Authors come along for the ride in the returned payload. If the page needs to display, say, an Author name, then it comes in the same payload too, as an extra field right on the Articles.
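
                                                                                                As a hypothetical sketch of such a page-scoped endpoint (the path and field names are invented), the articles page would get exactly what it renders and nothing more:

                                                                                                GET /pages/articles
                                                                                                
                                                                                                {
                                                                                                  "articles": [
                                                                                                    { "id": 1, "title": "First post",  "authorName": "Alice" },
                                                                                                    { "id": 2, "title": "Second post", "authorName": "Bob" }
                                                                                                  ]
                                                                                                }

                                                                                                No Folder or Author objects ride along; the Author’s display name is denormalized onto each Article because that is all the page needs.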

                                                                                                I like the simplicity and straightforwardness of “1:1 with frontend pages”, even though it causes criss-crossing and mixing of models/types in a given payload. I leave the entity relationship strictness to the model/DB level (has-many, belongs-to, etc.).

                                                                                                1. 1

                                                                                                  Thank you for elaborating. I have used that pattern and it worked fairly well for me, too.

                                                                                                  You have to keep client/server in sync, though. I really like the flexibility of GraphQL here, but I haven’t used it on a real-life project.

                                                                                            1. 1

                                                                                              My ideal coding experience is having the path completely free of obstacles and detours between my thought of what I want to do and the actual making of that mental intangible into a reality. Anything and everything that either removes such obstacles (or doesn’t put them there in the first place), or makes that path as straight and direct as possible – I hold onto them, and defend them.

                                                                                              In the context of this idea, here are a few things that I like: Ruby (note: not Rails), Vue, Cypress, (certain) webpack dev servers, in-browser dev tools.

                                                                                              And, in contrast, a(n incomplete) list of things I dislike: Angular, extreme linting rulesets, Docker (crafting, not usage), most cloud devops-y stuff (AWS, etc.), bitfields, [overuse of] state machines, slow app standup time, extremely slow page load times [in development], extremely long-running test suites, gargantuan monolithic classes/modules/whatever, tight coupling, silent coercion, silent failures, unhelpful errors, misleading errors, invisible state (e.g. yarn link), unclear mapping between UI element and source code file, MySQL CLI, poor or spartan documentation, having documentation instead of making something so easy to use it doesn’t need [as much] documentation, git rebase messing up long-running github PR code reviews, PRs that are [too] large, long-lived branches, insufficient test coverage, unrealistic deadlines, poor management of technical debt.

                                                                                              1. 1

                                                                                                What’s that monospace font you’re using? I’m on the lookout for good large x-height coding fonts.

                                                                                                (edit) I found Everson Mono, and it looks really close to what’s on your screen. Is it that?

                                                                                                1. 3

                                                                                                  Inconsolata with Operator Sans for the italics.

                                                                                                1. 4

                                                                                                  Maybe some masking tape would solve the trackpad problems.

                                                                                                  1. 2

                                                                                                    Or fingerless gloves? Heh.

                                                                                                  1. 9

                                                                                                    I can and will retrain my hand placement habits. After all, this touchbar-keyboard-trackpad combo is forcing many people to learn to place their hands in unnatural positions to accommodate these poorly designed peripherals.

                                                                                                    It is amazing to me what people put up with to use these devices. I generally find the issue of accidentally touching the trackpad so severe that I only use laptops with a trackpoint, and the first thing I do on my device is disable the trackpad completely.

                                                                                                    1. 15

                                                                                                      Part of why I ended up becoming a programmer is frustration with a touchpad. It led me to keyboard-only UIs, which led me to Arch/XMonad, which led me to Haskell, which confused me but led me to Python, which… <10 years later> I have a career as a software engineer :)

                                                                                                      1. 2

                                                                                                        Part of why I got into programming more passionately was excitement when the Apple trackpad came out ten years ago. It led me to think about possibilities beyond keyboard-centric UIs. It led me to make zany things. While I’ve never succeeded professionally as a full-blown software engineer, it made me appreciate how hard developing great experiences for humans is.

                                                                                                      2. 5

                                                                                                        At work, we actually had to modify a piece of software to deliberately ignore most of the input from recent mac touchpads. The application is multi-touch capable, which on some of the hardware it runs on is really useful. However, on mac, the combination of the oversized touchpad and the fact that it doesn’t map to the screen (it’s mapped to a smaller area which follows the cursor around, so nobody really knows what they’re touching) meant that macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

                                                                                                        1. 1

                                                                                                          macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

                                                                                                          So, uh, what exactly do you do for work?

                                                                                                        2. 1

                                                                                                          Honestly, I liked trackpoints for several years, but after getting a Thinkpad with both trackpoint and trackpad, I have firmly settled on preferring trackpads for this reason: I can accurately point at things faster than with the trackpoint. I do use the trackpoint on rare occasion, but only when I need fine control with something, like moving in a very small screen area, or scrolling only a tiny bit.

                                                                                                          I acknowledge that many other people around the Internet have a problem with accidental palm touches, but, for some reason, that’s never been a problem for me. Then again, I haven’t used Windows in the last several years (only Linux and OSX), so maybe that’s the reason?

                                                                                                          1. 2

                                                                                                            The Thinkpad has those two big buttons at the top of the trackpad. They require force, so you won’t accidentally press them, and they’re placed about where my thumb wants to rest when I use the keyboard.

                                                                                                        1. 14

                                                                                                          Here’s the thing: I submit that the more important unit is the semantic chunk (“code morpheme”?), rather than the character, and it’s the former whose excess per line we should be watching. As I tell people: I think code is more readable when it’s more vertical than horizontal. I almost always try to string code morpheme sequences line by line, rather than side by side.

                                                                                                          Some pseudocode examples to illustrate:

                                                                                                          # Instead of this:
                                                                                                          some_method(some_arg: some_value, another_arg: another_value, arg3: val3)
                                                                                                          
                                                                                                          # I'd write this:
                                                                                                          some_method(
                                                                                                            some_arg: some_value,
                                                                                                            another_arg: another_value,
                                                                                                            arg3: val3,
                                                                                                          )
                                                                                                          
                                                                                                          # Instead of this:
                                                                                                          some_object.chained_method1(arg).chained_method2(arg2, arg3).chained_method4
                                                                                                          
                                                                                                          # I'd write this:
                                                                                                          some_object
                                                                                                          .chained_method1(arg)
                                                                                                          .chained_method2(arg2, arg3)
                                                                                                          .chained_method4
                                                                                                          
                                                                                                          # Instead of this:
                                                                                                          if (comparable > some_threshold || something.in?(some_array)) && obj.is_foo?
                                                                                                            do_something
                                                                                                          end
                                                                                                          
                                                                                                          # I'd write this:
                                                                                                          if (
                                                                                                            comparable > some_threshold ||
                                                                                                            something.in?(some_array)
                                                                                                          ) && obj.is_foo?
                                                                                                            do_something
                                                                                                          end
                                                                                                          

                                                                                                          Very few morphemes per line, and almost always combining on one line things that themselves make up one group (rather than more than one group). What the reader enjoys from this style is that they don’t have to visually trace the morphemes (or groups) both horizontally and vertically; they only encounter the semantic entities in one direction, along one path. I think this makes the code easier to understand and reason about.

                                                                                                          So, as regards “line length linting”: when you write according to the above guideline, your lines tend to be shorter than 100 characters.

                                                                                                          1. 5

                                                                                                            This is the convention favored by tools like black as well, and I find my quality of life has improved since I embraced them. Even though philosophically I am “fine” with 120-width lines, I haven’t found that this leads to very many “what?!” formatting outcomes – and if it does I can often refactor to a local variable instead. For example, in the if block above I might add a local:

                                                                                                            foo_is_relevant = comparable > some_threshold || something.in?(some_array)
                                                                                                            
                                                                                                            if foo_is_relevant && obj.is_foo?
                                                                                                              do_something
                                                                                                            end
                                                                                                            
                                                                                                            1. 2

                                                                                                              Yep, Extract Variable is a good tool to have in your developer’s toolbox.

                                                                                                              Black makes code review faster by producing the smallest diffs possible.

                                                                                                              Another benefit of more-vertical code. (Though I concede that it’s not too hard to get diffing tools to ignore whitespace and highlight mid-line changes.)

                                                                                                          1. 13

                                                                                                            I see this kind of thing in the industry time and time again. I have always been bewildered at the tremendously high degree of tolerance my industry peers seem to have for suffering and frustration in day-to-day work. Again and again, I have seen and experienced horrifically poor UX with some given $tech, $tool or $framework which the industry has accepted as okay (or heck, even good or great) for what I can only assume is fallacious reasoning like popularity, appeal to authority, or tradition. Some examples: OAuth, JWT, [devving with] PayPal, AngularJS, yarn, Docker.

                                                                                                            When I find some tech that is easy to use, which just gets out of the way so I can just get stuff done, I really enjoy and cherish it. A couple examples: Ruby (N.B.: not Rails), and Vue.

                                                                                                            1. 1

                                                                                                              It’s mostly that these kinds of things don’t show up on the radar when you’re looking for a place to work. Once you’re there, you find out that the tooling is torturous, and if it happens that you cannot improve it, you either put up with it or bail out.