Threads for grawlinson

    1. 5

      Re: JWTs

      You want PASETO.

      1. 2

        You may want it but you aren’t getting it from any of the places that force you to go and dig into JWT stuff.

        Don’t use those then? I can only imagine the blessed life one would lead that can make these decisions and have a working alternative.

      2. 2

        Is there any particular reason why PASETO hasn’t taken off? I just see a lot of cargo cult programmers banging on about JWTs when the issues have been well publicized for a while now.

        1. 3

          JWT got first mover advantage and it’s already tied into auth libraries, etc. Plus it has JSON in the name :)

        2. 3

          What @zie said.

          Also the IETF likes to stonewall things that compete with their incumbent designs.

          JOSE has its own working group (the JOSE WG) in the IETF. Their response to JWT insecurity is “let’s publish a best practices RFC”.

    2. 2

      Is there something new I’m missing? WSL has been out for a while. And dual-boot has been around longer and isn’t dependent on Microsoft.

      1. 17

        From the reactions to this link elsewhere, the notable thing seems to be that Microsoft is publishing documentation on how to download and dual-boot Linux.

        1. 7

          A lot of the current MS client strategy revolves around Azure Attach. .NET is open source so that you can develop anywhere, but if you want to deploy then Azure is the obvious choice. WSL2 exists so that people can create Docker / OCI containers on Windows and deploy them into the cloud, and if they’re doing it on Windows then it’s easier to sell them Azure (in theory). Once they deploy Linux containers in Azure, you can sell them other Azure services.

          1. 2

            Do you use WSL or dual-boot? If you don’t mind me prying.

            1. 2

              I used WSL to run ssh and vim and did everything else in a FreeBSD VM. Now that I’ve left Microsoft, I no longer have to use Windows and the associated large productivity hit. I now work on a Mac and use UTM for a FreeBSD VM and Podman to run FreeBSD and Linux containers.

              1. 3

                I see. That’s a loss for them. There aren’t many like you, with as deep an understanding or as broad a grasp of what is going on in the open source ecosystem.

                1. 1

                  There are a few others. Stephen Walli, in particular, has a very good understanding of how the company should be engaging with open source. Unfortunately, not enough people listen to him.

      2. 1

        Be nice if they cleaned up their habit of nuking the boot partition and overwriting it with MSFT’s bootloader.

        1. 2

          They haven’t ever done that on EFI-based systems, as far as I know. They still change the default boot option, but any reasonable EFI implementation has a way to select which boot option is default in the firmware setup, and the Windows install leaves the Linux boot option intact.

          1. 1

            any reasonable EFI implementation has a way to select which boot option is default

            Yeah, you’d think that, wouldn’t you?

            It didn’t on this thing:

            https://www.theregister.com/2022/11/08/tuxedo_stellaris_amd_gen_4/

        2. 1

          That is one of the points I made in my article about this:

          https://www.theregister.com/2023/10/11/microsoft_documents_installing_linux/

    3. 3

      This project has an ambitious goal of creating a framework for writing NixOS router configurations

      Ambitious indeed.

      For something stable to use in production (with far more advanced features), there is VyOS, which runs on damn near everything. The documentation/examples are pretty good for an OSS project too.

    4. 2

      I just checked chrome://settings/adPrivacy and it appears that everything was already off. Maybe I disabled this some time ago? Anyone else able to check their settings?

      1. 7

        They will absolutely turn these settings back on in any/all future updates. They’ve certainly done it before.

        1. 1

          They have? Source? It’s not that I don’t believe you, I’m just curious.

          1. 3

            I don’t know if this is what @grawlinson was thinking of, or even if it’s the only example, but my mind immediately went to the time Google got fined millions of dollars for tracking people’s locations even though they had turned the feature off.

      2. 3

        you also have to trust that those settings have any real meaning at runtime…

        1. 3
          1. 1

            Don’t forget that google has a strong financial incentive to push this.

            1. 5

              I have not forgotten that. I just think that having a fake switch would be particularly bold.

              1. 1

                It was bold to remove a public “don’t be evil” statement.

    5. 2

      Is systemd-nspawn something that’s likely to be around in five or ten years? I don’t follow systemd development too closely, so I don’t have a sense if this is common, widely used code or systemd’s take on a Docker competitor.

      I found the article pretty compelling, and look forward to trying out the technique.

      1. 3

        nspawn has been around for a long time. It’s what we use as an isolated chroot for building Arch Linux packages.

    6. 7

      I made this site IPv6-only in hopes of thwarting repost bots that send toxic comments my way. For an IPv4 version, see https://legacy.cyrnel.net/solarwinds-hack-lessons-learned/

      1. 11

        Thanks. I have working IPv6 here but the v6 version doesn’t work for me - I get connection refused errors. It looks as if you have a valid AAAA record (and no A record), so my browser is trying to connect via v6, but failing.

        1. 4

          I have the same issue on mobile, from my ipv6 network.

          1. 2

            Could it be possible we’re experiencing the effects of peering disputes? https://adminhacks.com/broken-IPv6.html

            Although probably more likely that I just didn’t configure something right…

            1. 3

              It does look like something might not be configured right..

              9:18:25 brd@m:~> curl -I https://cyrnel.net/solarwinds-hack-lessons-learned/
              curl: (7) Failed to connect to cyrnel.net port 443 after 72 ms: Connection refused

              9:18:27 brd@m:~> ping -6 cyrnel.net
              PING6(56=40+8+8 bytes) 2602:b8:xxxxxx --> 2603:6081:ae40:160::abcd
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=0 hlim=52 time=69.674 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=1 hlim=52 time=72.403 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=2 hlim=52 time=72.432 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=3 hlim=52 time=71.435 ms

              HTH, HAND!

              1. 4

                Ah I only opened port 80 in the firewall for the IPv6 version. Should be fixed for next time, thanks for the debugging help!

                1. 1

                  Works for me, thanks!

              2. 2

                yeah same, I can ping it, but I can’t curl or browse it (phone and PC); other ipv6 stuff works fine

      2. 5

        Thanks! My ISP (local branch of My Republic) has zero plans/intent to implement IPv6, so it’s appreciated that there’s a legacy method. :)

    7. 8

      FreeBSD’s tar supports --zstd and --lz4[1], both of which are great for when you want fast compression and decompression. 55% compression seems pretty low from gzip, that’s about what I get on random data with zstd and lz4 with settings where I/O remains the bottleneck.

      I haven’t used gzip for a long time. Where I care about compression ratio at the expense of everything else, bzip2 and then xz displaced gzip. Where I want compression without impacting throughput, lz4 took over. These days, zstd has a great range of options and gives a better compression-ratio:speed tradeoff than gzip for everything I’ve tested.

      [1] The article mentions that GNU tar supports external tools; the nice thing about the built-in support is that tar x on FreeBSD detects all of these compression headers and does the right thing, without needing to pass z, Z, J, and so on when decompressing.

      1. 10

        55% compression […] what I get on random data

        Your random number generator is very broken!

        ;)

      2. 2

        Really the only advantage of tgz is compatibility; only bz2 comes close, and it’s slow. So tgz exists mainly for slinging files around, especially many small files, as the “solid” compression is advantageous compared to zip files (the other ubiquitous format, even more so as AFAIK Windows still does not have built-in Explorer support for tgz).

        1. 4

          it’d be cool if more gnu userland tools started supporting libarchive. it’s made things so much easier to work with on the bsd side: https://libarchive.org/

          1. 2

            Or if Linux distros would just ship bsdtar instead of GNU tar by default. It’s far more pleasant to use and one of the things that keeps annoying me when I’m in a Linux terminal.

            1. 1

              bsdtar (libarchive) is a dependency of pacman, so we ship it by default in Arch. I agree with you that it is far more pleasant to use!

        2. 3

          Funny enough, Microsoft announced today that they are adding support for tar, gz, etc. through libarchive. Wouldn’t have believed this was ever going to happen 10 years ago!

          1. 2

            Who wouldn’t have believed this? Anybody who didn’t read the Halloween Documents in the 1990s.

            Embrace, Extend, Extinguish.

          2. 1

            It’s really not clear from that article whether this is support built into Explorer (the UI). They’ve been shipping some sort of tar for a while in the terminal.

    8. 1

      I contributed ~60 commits to a Java FOSS project over a two-week period. It was a major refactor that would’ve made life a lot easier for new contributors. Passed CI, built locally (on Linux & macOS), all that jazz.

      But when it came to users actually compiling it, it wouldn’t work on Windows. So all these commits got rolled back, because Gradle really sucks at ensuring everything works cross-platform.

      1. 5

        I’ve written bugs which prevent code from working on some platform or another, in many languages and many build systems. You should probably specify what exactly Gradle did wrong and what it should’ve done instead if you want this to be a good critique of Gradle.

      2. 2

        What was the failure? This sounds really strange to me.

    9. 1

      That’s a really good overview of what being a package maintainer entails without going into too much detail. Kudos!

    10. 1

      Client side checks. Amateur hour.

      1. 2

        You know who else does client side checks, but really shouldn’t? Supermicro with some of their BMC web interfaces. Nobody is immune to amateur hour.

    11. 9

      I keep grumbling about this in different contexts, but I wish people would stop writing push-model lexers. This one is another instance. The lexer is responsible for deciding the type of the token. This pushes a lot of complexity into the parser for situations where there is some context sensitivity. You have to check against identifier and all of the context-dependent token types.

      A pull-model lexer takes the expected token type as an argument and returns either no token or a valid token of that type. This makes it trivial to handle ambiguous token types, because the parser (which has parse state) can request the most specialised token type that makes sense in the current context.
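
      A minimal sketch in C of the pull interface described above (all names and token kinds are illustrative, not from the project under discussion):

      ```c
      #include <ctype.h>
      #include <stddef.h>
      #include <string.h>

      /* Illustrative token kinds; a real parser would have many more. */
      typedef enum { TOK_IDENTIFIER, TOK_KEYWORD_IF, TOK_NUMBER } TokenKind;

      typedef struct {
          TokenKind kind;
          size_t start;   /* offset into the source */
          size_t length;  /* no null-terminated copy needed */
      } Token;

      typedef struct {
          const char *src;
          size_t pos;
      } Lexer;

      /* Pull model: the parser asks for the kind that makes sense in the
       * current context; on a match the lexer fills *out and advances,
       * otherwise it returns 0 and leaves the position untouched so the
       * parser can try a more general kind. */
      int lex_pull(Lexer *lx, TokenKind want, Token *out) {
          size_t p = lx->pos;
          while (isspace((unsigned char)lx->src[p])) p++;
          size_t start = p;
          switch (want) {
          case TOK_KEYWORD_IF:
              /* Only a keyword if not a prefix of a longer identifier. */
              if (strncmp(lx->src + p, "if", 2) == 0 &&
                  !isalnum((unsigned char)lx->src[p + 2]))
                  p += 2;
              break;
          case TOK_IDENTIFIER:
              while (isalpha((unsigned char)lx->src[p])) p++;
              break;
          case TOK_NUMBER:
              while (isdigit((unsigned char)lx->src[p])) p++;
              break;
          }
          if (p == start) return 0;  /* no token of the requested kind here */
          out->kind = want;
          out->start = start;
          out->length = p - start;
          lx->pos = p;
          return 1;
      }
      ```

      Because the parser tries the most specialised kind first (TOK_KEYWORD_IF before TOK_IDENTIFIER), context-dependent keywords stop being a special case: an input like “iffy” simply fails the keyword pull and succeeds as an identifier.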

      As a minor nit, it looks as if the token returns a null-terminated C string. If that is a copy, it is easy to leak memory; if it is not a copy, then it lacks a length and the parser has to re-lex to find the end. I prefer the Token type to contain a source location pair. I typically use something inspired by clang here, where the source location is a 32-bit integer that uses 1 bit as a discriminator to differentiate between values that encode a line, column, and file tuple in 31 bits, and a 31-bit index into a table of source locations that don’t fit in this encoding. You can then expose APIs for getting the length of a token and copying it as a string (into a caller-provided buffer); these can be static inline functions. This lexer looks like it requires a single C string as input, so you’d need something extra for building an include stack, and your source locations can just be offsets into the stream.
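
      A sketch of that discriminated encoding; the field widths are assumptions chosen for illustration, and for simplicity only line and column are packed inline (a file id would share those 31 bits in the scheme described):

      ```c
      #include <stdint.h>

      /* 32-bit source location: the top bit discriminates between an
       * inline encoding (here: 19 bits of line, 12 bits of column) and
       * a 31-bit index into a side table for locations that don't fit. */
      typedef uint32_t SrcLoc;

      #define LOC_TABLE_FLAG 0x80000000u

      static inline SrcLoc loc_inline(uint32_t line, uint32_t column) {
          return (line << 12) | (column & 0xFFFu);
      }

      static inline SrcLoc loc_indexed(uint32_t table_index) {
          return LOC_TABLE_FLAG | (table_index & 0x7FFFFFFFu);
      }

      static inline int loc_is_indexed(SrcLoc loc) {
          return (loc & LOC_TABLE_FLAG) != 0;
      }

      static inline uint32_t loc_line(SrcLoc loc)   { return (loc >> 12) & 0x7FFFFu; }
      static inline uint32_t loc_column(SrcLoc loc) { return loc & 0xFFFu; }
      ```

      The accessors stay static inline, so the common (inline) case costs a shift and a mask with no function-call overhead.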

      In general, it’s a good idea to allow the input to be pulled in in chunks, rather than providing the whole thing at once. In C, I’d write this as a structure with a pointer, a start location, a length, an internal buffer, and a callback to update the pointer to point to the requested location (and another void* for stream context).

      1. 1

        I haven’t found many pull-model lexers, are you aware of any that would be great to learn from?

        The only one I’ve really looked at is pulldown-cmark.

        1. 1

          Most real compilers end up writing an ad-hoc one. I’ve not found a general-purpose tool for writing them, so I’ve tended to write ad-hoc ones in various places.

    12. 11

      This feels less like “Lisp is useful for devops” than “Here are three devops tools that use Lisp”. My area knowledge isn’t that great, but I’ve never heard of any of these three, so I assume they aren’t terribly common. If one is likely to find different, non-Lisp tools solving the same problems in the wild, and there’s no killer advantage to using these Lisp-based tools, then it doesn’t seem to me that the existence of these tools actually makes it true that Lisp is useful for devops.

      I suspect that the author was more interested in sharing Lisp and these tools than making a specific claim about the relative utility of Lisp to a devops engineer, so I’m being overly nitpicky. But the post would be more compelling with at least a paragraph offering some advantages Lisp might have for devops, like the utility of declarative or functional languages for configuration. The lone comment on the post mentions Guix, which I have heard of, and which does press those specific advantages.

      1. 2

        Great feedback, thank you!

        1. 5

          If you’re looking for article ideas, I would love to see an article that takes a standard and realistic Chef / Puppet / Docker setup and is fully replaced by bass. I think that might make it easier to get behind.

          1. 2

            Definitely seconded, plaze halp.

      2. 1

        And a fourth tool: newLISP

        1. 3

          Please don’t use newlisp for anything, ever.

          1. 5

            Why not? Is it because it hasn’t seen a new release for a while?

        2. 1

          What makes newLISP well suited for dev ops tasks?

          1. 2

            I’d say the will of the creator to make it feel more like a scripting language. At the time I first encountered it, it seemed promising enough for some stuff, but I never got the chance to test it in production.

            1. 2

              I use it all the time. newlisp is fun. It feels like LISP that has the good stuff from Perl and C and shell scripting pulled into it.

    13. 3

      I’ve been using yarr, which is in the same vein as miniflux but even more minimalist. (Con: it doesn’t work without JavaScript.) I’m now considering switching to miniflux due to how barebones it is.

      But if you’re looking for a barebones RSS reader, I think yarr should be considered.

      1. 2

        I’ve been looking for a replacement for rawdog (http://offog.org/code/rawdog/) since it’s Python 2 only. Yarr looks good in that it uses SQLite, but the front-end looks overwrought. Miniflux looks good too, but I don’t have any other need for a full-blown RDBMS, so I’m hesitant to run PostgreSQL just for that app.

        1. 1

          You can probably run miniflux + postgresql on fly.io. 256 MB x 2 isn’t much, but more than enough in this case.

        2. 1

          Porting rawdog to Python3 honestly might not be too difficult. All the libs are there by now, and I’ve had quite good success rates with 2to3 and such.

          1. 2

            There seems to be an active fork here.

            Last commit was a few days ago, so it’s more promising than doing it all yourself. :)

      2. 1

        Looks nice! Maybe I’ll give it a go. I love miniflux but there are a few small things that bother me. Does yarr work well on mobile layouts too?

        1. 3

          I don’t read RSS feeds on mobile, so I tried it for the first time in my phone browser, and it looks really nice and mobile-friendly. So to answer your question: yes, it does work on mobile layouts too :)

          1. 1

            Thank you for checking! I tried to find docs but couldn’t find any.

    14. 10

      Second, the software distribution - docker definitely made things easier to ship.

      This is quite the understatement.

      Linux’s approach of making everything dynamically linked and dependent on everything else, combined with the general complexity explosion we’ve seen in software, means Linux has degraded to the point where you basically can’t run software on it.

      Static linking has solved this problem for longer than it has even been a problem, but for GNU flavoured historical reasons you can’t use it for most software. So instead people have reinvented it but worse, and the default way to ship software is to ship a tarball but worse containing a disposable operating system that only runs your app.

      You still need a host OS to run on the hardware itself, which will have a half-life measured in months before it self-destructs. You can push this out if you never, ever touch it, but even as someone who has repeatedly learned this lesson the hard way (still using Firefox 52, iOS 13, etc.) I still can’t keep myself from occasionally updating my home server, which is generally followed by having to reinstall it.

      1. 9

        It really only holds when you’re talking about software which hasn’t been packaged by your host OS tho, right?

        If I want to run something that’s in apt, it’s much, much easier to install using apt.

        1. 5

          I find it’s easier to bring up a PostgreSQL instance in a Docker container, ready to go, than to install and configure it from apt. Both are pretty easy though.

          1. 3

            I’m on the opposite side of this: I have a dev DB, I put everything in it, single version, configured once, running ever since. When I played with Docker and considered how useful it could be, I decided not to go that direction, because for my use case Docker didn’t seem to add value.

          2. 2

            The difference is that you have to learn how apt works if you run an apt-based system. If you learned to use Docker for some other reason (probably for work, because why else would you?) that’s not as widely applicable.

            1. 2

              But there’s still a huge difference between learning apt and learning Docker.

              If you want to do extensive customization, you still have to learn apt to fiddle with the things in the image itself, plus a lot of Docker things on top.

            2. 2

              that’s not as widely applicable.

              actually, you might argue that docker (and podman) are more applicable because what you learn there can be used on any distro running docker, whereas only knowing how to use apt limits you to only distros that use apt…

        2. 3

          Not at all, in the last year or so I’ve had two installs with almost nothing on them (htop process list comfortably fits on 1 page) self destruct (boot into unusable state/refuse to boot) on their equivalents of apt-get upgrade.

          1. 6

            I’d recommend trying to understand what exactly happened and what’s failing when you run into situations like that, especially if it happened more than once. Things don’t normally self destruct. Sure, you can run into a bug that renders the system unbootable, but those are pretty rare. A significant part of the world computing runs on Linux and it runs for years. If your experience is “will have a half-life measured in months before it self destructs”, it may be worth learning why it happens to you.

            1. 4

              Wellllll… Debian systems don’t self-destruct on apt upgrade, but there are many other downstream variants that still use apt but don’t believe in old-fashioned ideas like … making sure things actually work before releasing.

              1. 1

                Debian systems don’t self-destruct on apt upgrade

                At least, not if you upgrade them regularly. I’ve hit a failure mode with older Debian systems because apt is dynamically linked and so when the package / repo format changes you end up not being able to upgrade apt. This isn’t a problem on FreeBSD, where pkg is statically linked and has a special case for downloading a new version of the statically linked binary that works even if the repo format changes.

            2. 2

              Frankly, why would I?

              15 years ago I probably would have. Nowadays I understand my time is too valuable for this. When I spend my time to learn something there are so many wonderful and useful ideas in the world to immerse myself in. Understanding why my almost completely vanilla OS nuked itself for the nth time after I used it normally is not one of them.

              Windows and Mac both have comfortable access to the good parts of Linux through WSL/docker (WSL is by far the most unreliable thing on my PC despite not even needing to be a complete OS) while also not dropping the ball on everything else. For the one machine I have that does need to be Linux, the actual lesson to learn is to stop hitting myself and leave it alone.

              Things don’t normally self destruct. Sure, you can run into a bug that renders the system unbootable, but those are pretty rare.

              In other circles:

              1. 2

                Frankly, why would I?

                For me that’s: because I can do something about it, as opposed to other systems. For you the bad luck hit on Linux. I’ve had issues with updates on Linux, Windows, Macs. Given enough time you’ll find recurring issues with the other two as well. The big difference is that I can find out what happened on my Linux boxes and work around that. When Windows update service cycles at 100% CPU, manually cleaning the cache and the update history is the only fix (keep running into that on multiple servers). When macos after an update can’t install dev tools anymore, I can’t debug the installers.

                In short: everything is eventually broken, but some things are much easier to understand and fix. For example the first link is trivially fixable and documented (https://wiki.archlinux.org/title/Pacman/Package_signing#Upgrade_system_regularly)

                1. 4

                  To largely rehash the discussion on https://lobste.rs/s/rj7blp/are_we_linus_yet, in which a famous tech youtuber cannot run software on Linux:

                  Given enough time you’ll find recurring issues with the other two as well.

                  This is dishonest, the rate and severity of issues you run into while using Linux as intended are orders of magnitude worse than on other OS. In the above, they bricked their OS by installing a common piece of third-party software (Steam). Software which amusingly ships with its own complete Linux userspace, another implementation of static linking but worse, to protect your games from the host OS.

                  because I can do something about it, as opposed to other systems

                  This is untrue, Windows at least has similarly powerful introspection tools to Linux. But even as someone who ships complex software on Windows (games) I have no reason to learn them, let alone anyone trying to use their computer normally.

                  For example the first link is trivially fixable and documented

                  In this case you can trivially fix it, you can also trivially design the software such that this never happens under normal conditions, but the prevailing Linux mentality is to write software that doesn’t work then blame the user for it.

                  1. 3

                    This is dishonest, the rate and severity of issues you run into while using Linux as intended are orders of magnitude worse than on other OS.

                    It’s not dishonest. This is my experience from dealing with a large number of servers and a few desktops, including the ability to find actual reasons/solutions for the problem in Linux, versus mostly generic “have you tried dism /restorehealth, or reinstalling your system” answers for Windows.

                    This is untrue, Windows at least has similarly powerful introspection tools to Linux.

                    Kind of… ETL and dtrace give you some information about what’s happening at the app/system boundary. But they don’t help me at all in debugging issues where the update service hangs in a busy loop or logic bugs. You need either a lot of guesswork or the source for that one. (or reveng…)

        3. 2

          Meanwhile, the host OSes are refusing to properly package programs written in modern programming languages like Rust because the build system doesn’t look enough like C with full dynamic linking.

          1. 11

            What do you mean by this?

            I’m a package maintainer for Arch Linux and we consistently package programs written in post-C languages without issue.

            Via collaboration and sharing with other distributions, we (package maintainers) seem to have this well under control.

          2. 3

            I mean, maybe some distros, but you seem to think all do? That’s incorrect :)

    15. 3

      Very interesting, thanks for sharing! Really cool that you took the time to write a technical paper about it as well. For anyone interested in the subject of Lisp for game development I can highly recommend this article about Naughty Dog’s use of a proprietary Lisp for their games: http://www.codersnotes.com/notes/disassembling-jak/

      1. 2

        I’m not actually the author! That’d be Shinmera.

        1. 1

          Oops, my bad! Thanks for the clarification

      2. 1

        Here’s a link to Open GOAL, the ongoing attempt to reverse engineer this particular language.

        1. 1

          That’s really cool. TBH it feels like GOAL and its implications, i.e. the viability of using a “highly dynamic” language like Lisp for game development, have really flown under the radar. Hopefully this project can bring more attention to the fact that you don’t need to build your game in C++ 😉

    16. 2

      I seem to recall that there was an announcement from a pijul author that pijul was in maintenance mode and that he was working on a new VCS. I can’t find mention of this VCS now, though. Does anyone remember this?

      1. 2

        I’m the author, this is totally wrong.

        1. 1

          Thank you for the clarification. I’m not sure how I ended up remembering something that never happened.

      2. 2

        I seriously doubt this is the case. There’s been a huge amount of work on pijul and its related ecosystem.

        If I’m wrong, I’d love to know otherwise.

    17. 4

      Is it comfortable to use the thumb to move all the time? I ask because I have some pain in my thumbs after texting too much on my phone…

      I personally use a vertical mouse, and it changed my life. Used to have chronic wrist inflammations, they’re gone now.

      1. 6

        I use a kensington expert trackball for that reason. It was very alien at first, but now I love it.

        1. 4

          Same here, I am addicted to using the ring to scroll. I find it much easier on my wrist, but to be honest I have both a mouse and this guy, which I’ll alternate between during the day.

          1. 3

            Ya same setup here, I use a regular mouse for gaming since I just can’t get used to using a trackball for that… but use the trackball for everything else. The kensington’s ring scroll is the bomb!

            1. 1

              I’m looking for a trackball to buy but I heard bad things about the kensington’s scroll ring. Can any of you confirm if it’s easy to scroll accidentally or not, or if it has any other flaws?

              1. 1

                I don’t think I’ve ever accidentally scrolled the ring.. Maybe with bad posture it’s easier to? But after looking at mine and just now trying to get it to scroll accidentally… I just don’t see an obvious way to do that with how I place my hand on it when in use. 🤷‍♂️

      2. 4

        I got thumb tendinitis from using one. I use a vertical mouse now, super happy.

        1. 1

          Vertical mice make my shoulder seize up something fierce, but I’m really happy with an old CST L-Trac finger trackball. It’s funny how wildly people’s ergonomic needs can vary.

          1. 1

            CST L-Trac here too! I bought one based only on the internets and I wish it was a bit smaller. Definitely something to try out if you can, especially if your hands ain’t super big. Bought another for symmetry so I don’t end up in a rat race finding something as good but just a bit more fitting.

            And there was the accessories aspect!

            CST’s business is now owned by someone else, who I don’t think has the back/forward-button accessory. I kinda regret not having got those. ISTR checking out what they had and it was lame.

            What I’d really like to see are some specs and community creations for those ports, like horizontal scroll wheels, but I think Linux doesn’t really support that anyway.

      3. 4

        Having used an extensive range of input devices (regular mice, vertical mice, thumb trackballs, finger trackballs, touchpads, drawing tablets, and mouse keys), my thoughts on this are as follows:

        Regular mice are the worst for your health. Vertical mice are a bit better, but not that much. Thumb balls are a nice entry into trackballs, but you’ll develop thumb fatigue and it will suck (thumb fatigue can make you want to rip your thumb off). Finger balls don’t suffer from these issues, but often come in weird shapes and sizes that completely nullify their benefits. The build quality is usually also a mess. Gameball is a good finger trackball (probably the best out there), and even that one has issues. I also had a Ploopy and while OK, mine made a lot of noise and I eventually sold it.

        Touchpads are nice on paper, but in practice I find they have similar problems to regular mice, due to the need for moving your arm around. Drawing tablets in theory could be interesting as you can just tap a corner and the cursor jumps over there. Unfortunately you still need to move your arms/wrist around, and they take up a ton of space.

        Mouse keys are my current approach to the above problems, coupled with trying to rely on pointing devices as little as possible. It’s a bit clunky and takes some getting used to, but so far I hate it the least compared to the alternatives.

        QMK supposedly supports digitizer functionality (= you can have the cursor jump around, instead of having to essentially move it pixel by pixel), but I haven’t gotten it to work reliably thus far. There are also some issues with GNOME sadly.

        Assuming these issues are resolved, and you have a QMK capable keyboard, I think this could be very interesting. In particular you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas), then use regular movement from there. Maybe one day this will actually work :)

        1. 1

          you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas),

          Isn’t that what keynav does? I never managed to get used to it, though; I couldn’t abandon my mouse.

      4. 2

        I use an Elecom Deft Pro, where the ball is in the middle of the mouse. I generally use my index and middle fingers to move the ball. I find it more comfortable than a normal mouse or one with the ball on the side (thumb operated).

      5. 1

        Everyone is probably different, but I have a standard trackball mouse (Logitech, probably an older version of the one in this post) and it’s very comfortable. The main thing is to up the sensitivity a lot. Your thumb is precise, so little movement is needed!

        No good for games, perfect for almost everything else.

        (I have used fancy trackballs that a coworker has. They’re terrible for me; I don’t get it at all, even after trying for hours on end.)

      6. 1

        Anything you overdo is bad for you.

        I swap between a trackpad, a mouse and an M570 every few days.

    18. 1

      I wish I’d known of this tool when I was dissecting multi-MB JSON dumps.

      1. 3

        I can’t remember if it’s here or the orange site (or both), but this list has been making the rounds recently. Quite pleased with a few of these.

    19. 1

      I don’t see how this would ever happen, given that manufacturers love obsoleting things as soon as the blueprints are finalised.

    20. 16

      The diagramming support is one of the things I miss most after moving from gitlab to github.

      1. 8

        Interesting! Didn’t know GitLab flavored Markdown supported that (and more).

        https://docs.gitlab.com/ee/user/markdown.html#diagrams-and-flowcharts
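        For instance, per those docs a fenced code block tagged `mermaid` gets rendered as a diagram (the diagram itself here is a made-up example):

        ```mermaid
        graph TD
            A[Open MR] --> B{Pipeline passes?}
            B -->|yes| C[Review and merge]
            B -->|no| D[Fix and push]
            D --> B
        ```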

        1. 10

          Gitea also supports mermaid diagrams.

      2. 5

        Curious… why bother moving forges from open-core to closed-source?

        1. 4

          GitHub has a lot more social features. I’ve had a slew of projects on GitHub get issues and pull requests with no marketing. So people are finding and using things.

          I’ve considered if I should set up mirrors of my GitLab projects on GitHub for the marketing effects.

          1. 2

            The social features are one of my biggest turn-offs, but you’re not the first to voice that opinion.

            1. 6

              I pretend that I don’t care about the social features. I really like that you can follow releases and the reaction option is kinda nice (so people can voice their support on new releases, without any comment noise).

              I don’t follow anyone, because that’s just adding a ton of stuff into my feed. But honestly it makes me happy when somebody does give me a “star” and I think it’s ok to have this vague indicator of credibility for projects.

              But I do actually search on github first when I’m looking for some kind of software I don’t find directly by using DDG. So the network effect is definitely there. Same goes for inter-project linking of issues or commits. And I won’t be surprised if moving my crates.io source to gitlab would decrease the amount of interaction as much as moving it to my private gogs/gitea instance.

          2. 2

            I’m curious: what are the social features? I’ve used GitHub since it came out, but have never specifically noticed them.

            1. 3

              Follows, watches, stars, networks; there might be more. GitHub has been on my radar since it came out, and these have long annoyed me. I think one of their early slogans was “social coding”, and it was irritating then. Some people really like it, though.

        2. 3

          For me it was largely about a culture change at my work, shortly followed by me switching companies.

          Personally, if I were to start again, I think I would use GitLab again. While their solutions occasionally lack the polish and simplicity of GitHub, the “whole package” of their offerings is really nice. The only consistent downside is the performance of certain UI elements.