Threads for pl

  1. 7

    I once ran shutdown --help on a machine that was serving over 100 users in an office (back when people logged into UNIX machines and ran terminal-based applications). It just shut down. I say ‘just’ - the process took over 10 minutes. It then took another 40 minutes to boot again.

    That was the day I stopped just trying stuff to see what happens.

    Just kidding, of course I still do!

    1. 2

      [ big oof ]

      I thought this was a bad time:

      > git stash save help
      Saved working directory and index state On main: help
      > git stash list
      stash@{0}: On main: help
      

      But shutdown --help is worse. :( Hard earned learning. :)

      1. 1

        Sounds like me trying to see what install does on ESXi by running install --help - it turns out to be a shell script that’s responsible for setting up the entire installation by starting an ncurses interface. Thankfully, after reading the source, a kill -9 was all it took to save the day.

      1. 7

        In addition to that, printing help pages to stderr is so annoying. It’s all such a beautiful mess.

        1. 9
          $ foo --help
          <10 pages of help>
          
          $ foo --help | less
          <10 pages of help with less drawn on top>
          
          $ foo --help 2>&1 | less
          <success!>
          

          :/

          Now what was I looking for again?

          1. 2

            help going to stderr always makes me very angry!

            1. 2

              I have the same reaction when it happens to me… but I still do it when writing these kinds of tools. Sorry.

              Two reasons:

              1. Help text on stdout really messes with piped output, as sjamaan beat me to pointing out
              2. I usually default to showing help text whenever the program encounters unexpected or malformed flags, partly to catch people typing “help” in unexpected ways
              1. 4

                For me, usage is different from help. Usage is a short message saying the options are wrong plus a small summary of the syntax; help is 10 pages long and exhaustive. I’m fine with usage going to stderr, but not help.
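
                A minimal sketch of that split, with invented names (mytool and its flags are purely illustrative):

                usage() { echo "usage: mytool [-v] [-o FILE] ARG" >&2; exit 2; }   # short, on stderr
                full_help() { cat <<'EOF'
                mytool - the exhaustive, 10-page help text lives here...
                EOF
                }
                case "$1" in
                  --help) full_help ;;   # stdout, so `mytool --help | less` just works
                  -*)     usage ;;       # bad flag: short usage on stderr plus non-zero exit
                esac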

                1. 3

                  I agree, the use case that absolutely should go to stdout is when you call help directly, so it is easy to pass to a pager, e.g.:

                  $ fdisk --help | less
                  

                  In this case there are no errors, so why would you write to stderr?

                  1. 1

                    This is very fair and it probably tells you something about the size of CLI apps I normally write that usage and help are usually the same thing!

            2. 6

              It’s helpful (har har) when you’re piping the output of a command to another command, and it doesn’t understand one of the flags (because of version or Unix flavour differences) and prints its help. Otherwise you’ll get very weird results where the next command in the pipeline is trying to parse the help output of the first command.
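
              A contrived demo of that failure mode, using a stand-in script (a fake is needed, since well-behaved tools put this on stderr):

              cat > badtool <<'EOF'
              #!/bin/sh
              # a tool that dumps its help to stdout when given an unknown flag
              echo "badtool: unknown option: $1"
              echo "options: -a, -b, -c"
              EOF
              chmod +x badtool
              ./badtool --frobnicate | wc -l   # the next command happily "processes" the help text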

              1. 1

                I do actually think that’s the ideal outcome. If I’ve misused a flag I’d like the entire pipeline to fall over. I might not be able to trust bash to emit an error message but a complete nonsense output would be a clearer sign that something strange has happened than output that is subtly off, and if I’m lucky then whatever fragments of help text make it to the end might even include the name of the command I got wrong.

              2. 1

                Oh good heavens. It blows my mind how frequently I run into apps that do this. How do so many people do this without noticing how annoying it is?

              1. 7

                Hey, that’s actually not a bad tip (I’m not 100% sure it’s worthy of its own post, but it’s definitely not worth flagging). My main concern is:

                None of the viruses in npm are able to run on my host when I do things this way.

                This is assuming a lot of the security of docker. Obviously, it’s better than running npm as the same user that has access to your SSH/Azure/AWS/GCP/PGP keys / X11 socket, but docker security isn’t 100%, and I wouldn’t rely on it fully. At the end of the day, you’re still running untrusted code; containers aren’t a panacea, and the simplest misconfiguration can render privilege escalation trivial.
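
                For reference, the pattern under discussion is roughly this (a sketch; the image tag and paths are illustrative):

                # run npm in a throwaway container so package install scripts can't read
                # ~/.ssh, cloud credentials, or reach the X11 socket on the host
                docker run --rm -it -v "$PWD":/work -w /work node:20 npm install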

                1. 3

                  the simplest misconfiguration can render privilege escalation trivial.

                  I’m a bit curious which configuration that’d be?

                  1. 2

                    not OP, but “--privileged” would do it. or many of the “--cap-add” options
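
                    For example (both hand the container far more than the default privileges):

                    docker run --rm -it --privileged alpine sh          # near-full access to host devices
                    docker run --rm -it --cap-add=SYS_ADMIN alpine sh   # one capability, but a notoriously broad one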

                    1. 1

                      Not 100% sure here, but lots of containers are configured to run as root, and file permissions are just bits on your disk, right? So a container environment basically lets you take control of all mounted volumes and do whatever you want with them.

                      This is of course only relevant to the mounted volumes in that case, though.

                      I think there’s also a lot of advice in dockerland that sits at the unfortunate intersection of being easier than all the alternatives yet very insecure (like how most ways to connect to a GitHub private repo from within a container involve some form of exposing your private keys).
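
                      A quick sketch of the mounted-volume point, assuming a default (non-rootless) docker daemon:

                      # a root container writing through a bind mount leaves root-owned files on
                      # the host, and can read anything under the mounted path
                      docker run --rm -v "$HOME":/data alpine touch /data/owned-by-root
                      ls -l "$HOME/owned-by-root"   # owned by root, created from inside the container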

                    2. 1

                      This is assuming a lot of the security of docker

                      Which has IMO a good track record. Are there actually documented large scale exploits of privilege escalation from a container in this context? Or at all?

                      Unless you’re doing stupid stuff I don’t think there’s a serious threat with using Docker for this use case.

                    1. 7

                      RIIR apparently stands for Rewrite It In Rust

                      1. 1

                          I’ve kind of moved to Portugal and I’m going to explore the area a bit. I’m also migrating to Linode Hosted Kubernetes (LKE) to host a GitLab instance. Terraform/Web/DNS is already working, but I’m a bit concerned about hosting mail on there: no idea whether I can configure reverse DNS in Linode to point at a Load Balancer, and I’m also still a bit hesitant to contact support so that I’ll be allowed to send mail.

                        1. 5

                          If you are already into web technologies, maybe this (relatively obscure) project of an HTML5 Wayland compositor might be an interesting way to approach wayland in general: https://github.com/udevbe/greenfield

                          1. 1

                            Super interesting, thanks!

                          1. 21

                            Someone shared a visualization of the sorting algorithm on ycombinator news.

                                PS: Really, don’t enable sound - it’s loud and awful

                            1. 8

                              Yeah this page is cool, and it shows that this “naive sort” (custom) is close but not identical to insertion sort, which is mentioned at the end of the paper.

                              And it also shows that it’s totally different than bubble sort.

                              You have to click the right box and then find the tiny “start” button, but it’s useful.


                              I recommend clicking these four boxes:

                              • quick sort
                              • merge sort
                              • custom sort (this naive algorithm)
                              • bubble sort

                              Then click “start”.

                              And then you can clearly see the difference side by side, including the speed of the sort!

                              Quick sort is quick! Merge sort is better in the worst case but slower on this random data.

                              1. 1

                                cycle sort is pretty interesting too!

                                1. 1

                                  I thought (this is 15 year old memories) that what made merge sort nice is that it isn’t particularly dependent on the data, so the performance isn’t really affected if the data is nicely randomized or partially sorted or whatever, whereas quicksort’s performance does depend to some extent on properties of the input sequence (usually to its benefit, but occasionally to its detriment).

                                2. 7

                                  If you are playing with this website: when you change your selected sorts, press “stop” before you press “start” again. Otherwise both sorts will run at the same time, undoing each other’s work, and you will wind up with some freaky patterns.

                                  This comment is brought to you by “wow I guess I have absolutely no idea how radix sort works.”

                                  1. 7

                                    Yeah the radix sort visualization is cool!

                                      The intuition: say you have to sort 1 million numbers, BUT you know that they’re all from 1 to 10. What’s the fastest way of sorting?

                                      Well, you can do it in guaranteed linear time: just create an array of 10 “buckets”, then make a single pass through the input, incrementing the counter in the corresponding bucket.

                                      After that, print out each number as many times as its bucket counted, like

                                    [ 2 5 1 ... ]   ->
                                    1 1 2 2 2 2 2 3 ...
                                    

                                    etc.

                                    I think that lines up with the visualization because you get the “instant blocking” of identical colors. Each color is a value, like 1 to 10. (Watching it again, I think it’s done in 3 steps, like they consider the least significant digits first, then the middle digits, then the most significant digits. It’s still a finite number of distinct values.)

                                    There are variations on this, but that’s the basic idea.
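
                                      A tiny sketch of that single-pass bucket idea, assuming the values are in 1 to 10:

                                      # counting sort in one awk pass: tally each value, then emit the buckets in order
                                      printf '%s\n' 2 9 2 1 5 2 |
                                        awk '{ count[$1]++ }
                                             END { for (v = 1; v <= 10; v++)
                                                     for (i = 0; i < count[v]; i++) print v }'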

                                    And it makes sense that it’s even faster than QuickSort when there are a limited number of distinct values. If you wanted to sort words in a text file, then Radix sort won’t work as well. There are too many distinct values.

                                    It’s not a general purpose sorting algorithm, which is why it looks good here.

                                    1. 4

                                      Oh, yeah — I meant that I started radix sort while the custom sort was still running, and it just kind of scrambled the colors insanely, and it took me a few minutes of thinking “dang and here I thought radix sort was pretty simple” before I realized they were both running at the same time :)

                                  2. 1

                                    Nice visualisation, though it does make some algorithms (selection sort) look better than they are!

                                  1. 2

                                        I do agree that it’s definitely a problem: if one thing goes wrong in your state but you urgently need to provision something else within the same terraform module/state, you’re stuck (thankfully I haven’t run into that yet, though I should become more prepared for it).

                                        To be honest, these sorts of problems confront you in other environments too - for example, I couldn’t switch-env on my NixOS installation because emacs-wayland wouldn’t compile anymore, making it impossible to apply the rest.

                                        What this article doesn’t mention is terragrunt. For example, with terragrunt my org splits VPC and EKS into multiple modules/states, and they are still linked properly to each other by defining outputs/inputs in terraform. This will be needed not only because you’re going to be confronted with the problem scenario described by qovery, but also because terraform becomes very slow the moment you reach a critical amount of resources. For provisioning production through CI this is fine by me, but development can become a real pain if you need 90 seconds to apply your module.

                                    1. 2

                                      TIL: git-web--browse - Git helper script to launch a web browser
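
                                          Note that it’s invoked with a space instead of the second dash, e.g.:

                                          git web--browse https://git-scm.com/docs/git-web--browse
                                          git web--browse --browser=firefox README.html   # pick a specific browser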

                                      1. 1

                                            Unfortunately my current pipelines are still stuck with kaniko in GitLab-CI, which is quite slow - even two minutes of “build” time for a bigger rails image with all layers cached, and the moment you add nodejs it becomes even slower - mostly because the biggest penalty is file I/O. But it does its job. Passing secrets works via --build-arg, since kaniko doesn’t store the args in the image manifest (contrary to docker, which persists them). Since it needs a bit of scripting, I wrote a template the other devs can reuse; a bonus is that I get to introduce additional changes to all future image builds in case I want to add improvements later.
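
                                            The invocation behind that pattern looks roughly like this (the arg name and secret variable are placeholders):

                                            # kaniko build in GitLab-CI; the secret arrives as a build arg, which kaniko
                                            # (unlike `docker build`) doesn't persist in the resulting image manifest
                                            /kaniko/executor \
                                              --context "$CI_PROJECT_DIR" \
                                              --dockerfile "$CI_PROJECT_DIR/Dockerfile" \
                                              --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" \
                                              --build-arg "BUNDLE_TOKEN=$MY_SECRET"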

                                        1. 2

                                              I worked on a lot of optimizations to GitLab and GitLab CI over the years. To bring jobs under a minute, you need to do A LOT of local caching. That involves maintaining a data volume locally and mounting it onto the job container.

                                        1. 9

                                          This looks like a simpler version of tilde.team. Awesome!

                                          1. 2

                                                and without IRC, which makes it muuuch more accessible and privacy-friendly

                                          1. 12

                                                Docker has some suboptimal design choices (which clearly haven’t stopped its popularity). Yes, the build process is one thing - the line-continuation-ridden Dockerfile format that makes it impossible to comment on what each thing is there for, the implicit transactions that bury temporary files in the layers, and the layers themselves, which behave nothing like an ideal dependency tree - but that’s fixable. What makes me sad are the fundamental design choices that can’t be satisfyingly fixed by adding stuff on top, such as being a security hole by design, and containers being stateful and writable, and therefore inefficient to share between processes and something you have to delete afterwards.

                                                What is a more ideal way to build an image? For a start, run a shellscript in a container and save the result. The best part is that you don’t need to copy the resources into the container, because you can mount them as a readonly volume. You need to implement the rebuilding logic yourself, though - but you can, and it will be better. Need layers? Just build one upon another. Even better, use a proper build system that treats dependencies as a tree, and then make an image out of it.
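
                                                With stock docker, that flow can be sketched like this (build.sh and the image name are placeholders):

                                                # run the build script in a container with the sources mounted readonly,
                                                # then snapshot the result as an image
                                                docker run --name build-ctr -v "$PWD":/src:ro debian:stable sh /src/build.sh
                                                docker commit build-ctr myimage:latest
                                                docker rm build-ctr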

                                                As for reimplementing Docker the right way from the ground up, there is fortunately no lack of alternatives these days. My attempt, selfdock, is just one.

                                            1. 8

                                                  As for reimplementing Docker the right way from the ground up, there is fortunately no lack of alternatives these days.

                                              What about nixery?

                                                  Especially their idea of “think graphs, not layers” is quite an improvement over previous projects.

                                              1. 4

                                                    I spent some time talking about the optimisations Nixery does for layers in my talk about it (the bit about layers starts at around 13:30).

                                                An interesting constraint we had to work with was the restriction on the maximum number of layers permitted in an OCI image (which, as I understand it, is an implementation artefact from before) and there’s a public version of the design doc we wrote for this on my blog.

                                                In theory an optimal implementation is possible without that layer restriction.

                                                1. 2

                                                  Hey! Thanks for sharing and also thank you for your work, true source of inspiration :)

                                              2. 2

                                                    My attempt, selfdock, is just one.

                                                    This looks neat. But your README left me craving examples.

                                                    Say I want to run or distribute my Python app on top of this. Could you provide an example of the equivalent of a Dockerfile?

                                                1. 4

                                                  Thanks for asking! The idea is that instead of building, distributing and using an image, you build, distribute and use a root filesystem, or part of it (it can of course run out of the host’s root filesystem), and you do this however you want (this isn’t the big point, however).

                                                  To start with something like a base image, you can undocker a docker image:

                                                      docker pull python:3.9.7-slim
                                                      sudo mkdir -p /opt/os/myroot
                                                      docker save python:3.9.7-slim | sudo undocker -i -o /opt/os/myroot
                                                  

                                                  Now, you have a root filesystem. To run a shell in it:

                                                  selfdock --rootfs /opt/os/myroot run bash
                                                  

                                                  Now, you are in a container. If you try to modify the root filesystem from a container, it’s readonly – that’s a feature!

                                                  I have no name!@host:/$ touch /touchme
                                                  touch: cannot touch '/touchme': Read-only file system
                                                  

                                                  When you exit this process, the reason for this feature starts to show itself: The process was the container, so when you exit it, it’s gone – there is no cleanup. Zero bytes written to disk. Writing is what volumes are for.

                                                      To build something into this filesystem, replace run with build, which gives you write access. The idea is as outlined above: mount your resources readonly and run whatever:

                                                  selfdock --rootfs /opt/os/myroot --map $PWD /mnt build pip install -r /mnt/requirements.txt
                                                  

                                                  … except that if it modifies files owned by root, you need to be root. As the name implies, selfdock doesn’t just give you root.

                                                  Then, you can run your thing:

                                                  selfdock --rootfs /opt/os/myroot --vol $PWD/data /home python app.py
                                                  

                                                      Note that we didn’t specify user- and group ID to run as – it just runs as your own user and group (anything else would be a security hole). This is important for file permissions, especially when giving write access to a volume as above. But since the root filesystem is readonly, you can run thousands of instances out of it, and the overhead isn’t much more than spawning a process. The big point here is not in the features, but in doing things correctly.

                                                  1. 2

                                                    That sounds very similar to what systemd-nspawn offers. Once you deal with unpacked root filesystems it may be another solution to look at.

                                                    1. 1

                                                      So, it has even more resemblance to chroot but with more focus on isolation and control of resource usage, IIUIC.

                                                          A bit of feedback, if I may. The whole requirement of carrying files around will put people off, including myself. I refrain from using docker because of the gigantic storage footprint any simple thing requires. But the reason it is so popular is that it abstracts away the binary blobs. People run docker commands and let it do its thing; they don’t need to fiddle with, or even know about, the images stored on their hard drive. It was distributed with Docker Hub connectivity by default, so people only worry about their Dockerfiles and refer to images as a URL, or even just a slug if the image is on Docker Hub.

                                                          Similarly, back in the day, many chroot power users had a script to copy a basic file structure to a folder and run chroot. I think most people would want this, even if unconsciously: a command that does the complicated parts behind a simple porcelain.

                                                1. 5

                                                      One big misconception that made it into the “It’s not perfect but better than nothing” section is that layering is purely additive: once a layer contains certain files, the final image will carry that data, even if you remove the files in a later RUN command. Using dive will show you where your mistakes are. Yikes, even the mentioned hadolint would catch that issue, I believe.
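
                                                      The classic shape of that mistake, for illustration:

                                                      # the archive survives in the first layer even though it's "removed" later:
                                                      RUN wget https://example.com/big.tar.gz && tar xf big.tar.gz
                                                      RUN rm big.tar.gz                # hides the file; the image doesn't shrink

                                                      # doing the cleanup inside the same RUN keeps it out of the layers entirely:
                                                      RUN wget https://example.com/big.tar.gz && tar xf big.tar.gz && rm big.tar.gz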

                                                      I’ve seen someone use gitlab-ci/bazel/debootstrap/saltstack to build containers without writing Dockerfiles, while still using a layering & caching mechanism. It took several months to implement, but it was definitely quite a gem once it worked - we even managed to use the same output to provision bare-metal servers afterwards.

                                                      The thing that powers distroless is bazelbuild/rules_docker. It’s quite neat, but seeing how nixery and nix/os solve everything in a coherent fashion makes me wonder how others will ever succeed as an alternative - so I’m really excited about Ariadne’s mention of building “distroless for Alpine Linux”.

                                                      Overall, I think pythonspeed.com gives the best overview of common pitfalls when writing regular Dockerfiles (for Python applications).

                                                  1. 4

                                                        Output of hadolint below. I don’t get why you would give such strongly opinionated advice and then break your own advice in the same article?

                                                    -:4 DL3008 warning: Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
                                                    -:4 DL3015 info: Avoid additional packages by specifying `--no-install-recommends`
                                                    -:4 DL3009 info: Delete the apt-get lists after installing something
                                                    -:5 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
                                                    -:5 DL3013 warning: Pin versions in pip. Instead of `pip install <package>` use `pip install <package>==<version>` or `pip install --requirement <requirements file>`
                                                    -:5 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
                                                    -:6 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
                                                    -:6 DL3013 warning: Pin versions in pip. Instead of `pip install <package>` use `pip install <package>==<version>` or `pip install --requirement <requirements file>`
                                                    -:6 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
                                                    -:15 DL3003 warning: Use WORKDIR to switch to a directory
                                                    -:19 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
                                                    -:20 DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.
                                                    
                                                  1. 3

                                                        “Less && and && \ please. This one isn’t your fault but sometimes looking at complicated Dockerfiles makes my eyes hurt.”

                                                        He’s never run into the fun of exceeding the max number of layers in a container, I see.

                                                    1. 6

                                                      What helps in terms of readability is setting proper shell options within the Dockerfile.

                                                          So putting SHELL ["/usr/bin/env", "bash", "-euo", "pipefail", "-c"] on top can help in these cases; depending on who is consuming your Docker image, you might want to revert to the previous SHELL afterwards though.
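
                                                          A sketch of the difference it makes, assuming the base image ships bash:

                                                          SHELL ["/usr/bin/env", "bash", "-euo", "pipefail", "-c"]
                                                          # with pipefail set, a failed download now fails the build instead of being
                                                          # masked by the exit status of the last command in the pipe:
                                                          RUN curl -fsSL https://example.com/install.sh | sh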

                                                      1. 1

                                                        feel you

                                                      1. 4

                                                            I’d be really excited about a rewrite of Terraform - something that’s not Nix but also not a Lisp, with better state/secret/API-endpoint functionality. I feel Terraform is lacking many features; some of them can be filled in by Terragrunt, but things like API logs and applying/reverting/… would be incredibly useful to add.

                                                        1. 1

                                                              Quite frequently, and with a great variety of things; I wonder how much this relates to my hyperfocus/ADHD.

                                                          1. 11

                                                            I can understand people assuming apt exists on the system because:

                                                                • Most of the time they’ll be correct
                                                                • People that don’t have apt will probably know how to find the equivalent packages in their equally bad Linux package manager of choice.

                                                                I can understand, too, people using a SQLite implementation for Go that doesn’t depend on CGo, because CGo has its own issues.

                                                                Everything is hot garbage, whether you’re on Ubuntu or not. Don’t expect a gazillion scripts that install all the dependencies in every package manager imaginable; none of those is good enough to deserve that much attention, and it won’t happen. At least apt is popular.

                                                                That’s a reason Docker is so popular: it’s easier to work on top of an image with a specific package manager that will not change between machines. The distribution and the equally bad Linux package manager of choice don’t matter, as long as you are on Linux and have Docker. And Dockerfiles end up being a great resource for learning all the required quirks that allow the code to work.

                                                            And finally:

                                                            First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

                                                                Never. Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?

                                                            1. 10

                                                                  Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?

                                                              No, it’s about writing software that nicely integrates with the rest of the chosen deployment method/system, not sticking random files all over the user’s machine.

                                                              1. 22

                                                                    In this example /app is being used inside the docker image. It is most definitely not sticking random files all over the user’s machine.

                                                                1. 4

                                                                  This hits a slightly wider point with the article - half the things the author is complaining about aren’t actually to do with Docker, despite the title.

                                                                  The ones that are part of the Docker build… don’t necessarily matter? Because their effects are limited to the Docker image, which you can simply delete or rebuild without affecting any other part of your system.

                                                                  I understand the author’s frustration - I’ve been through trying to compile things on systems the software wasn’t tested against, it’s a pain and it would be nice if everything was portable and cross-compatible. But I think it’s always been a reality of software maintenance that this is a difficult problem.

                                                                  In fact, Docker could be seen as an attempt to solve the exact problem the author is complaining about, in a way which works for a reasonable number of people. It won’t work for everyone and there are reasons not to like it or use it, but I’d prefer to give people the benefit of the doubt instead of ranting.

                                                                  Speaking of ranting, this comment’s getting long - but despite not really liking the tone of the article, credit to the author for raising issues and doing the productive work as well. That’s always appreciated.

                                                                  1. 3

                                                                        OP here. Aww, thank you! Yes, as noted in the disclaimer at the top, I was very frustrated! Hopefully I ported it all; now I’m trying to clean up some code so I can make patches.

                                                                        According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which honestly I still don’t understand. Could someone explain that to me?

                                                                        As an open-source maintainer, I like only two types of issues: 1) here’s a bug, 2) here’s a feature request and how to implement it. But if someone made an issue saying “Your code is not running on the latest version of QNX”, I would rather see “Here’s a patch that makes the code run on QNX”.

                                                                    Regardless, I tried an experiment and opened a “discussion issue” in one of the tools, hoping for the best.

                                                                    1. 3

                                                                          According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which honestly I still don’t understand. Could someone explain that to me?

                                                                          Receiving patches without prior discussion of the scope/goals can be frustrating, since basic communication easily avoids unnecessary extra work for both maintainers and contributors. Maybe a feature is already being worked on? Maybe they’ve had prior conversations on the topic that you couldn’t have seen? Maybe they simply don’t have the time to review things at the moment? Or maybe they won’t be able to maintain a certain contribution?

                                                                          Also, for end-users, patches without a linked issue can be a source of frustration. Usually the MR contains the discussion of the code/implementation details, and the issue holds the conversation around the goals of the implementation.

                                                                          Of course that always depends; if you’re only contributing minor improvements/changes, a discussion is often not needed.

                                                                      Or in other words, as posted on ycombinator news:

                                                                          Sending patches directly is a non-collaborative approach to open source. Issues are made for discussion, and PRs for resolutions; as a matter of fact, some projects state this explicitly, in order not to waste maintainers’ time with unproductive PRs.

                                                                  2. 3

                                                                    Exactly

                                                                    1. 2

                                                                          This is only a mediocre example, because with Go there should only be one binary (or a handful of them) - but yes, if you put your software in a container, I am very happy if everything is in /app, and if I want to have a look I don’t have to dissect /etc, /usr/local and maybe /var. If the container is the sole point of packaging one app, I see no downside to ignoring the FHS and putting it all together. There’s a reason most people do that for custom-built software (as opposed to “this container just does apt-get install postgresql-server”, where I would expect the stuff to be where the distro puts it).

                                                                1. 27

                                                                    No, that is not a monoculture; this is you using a niche OS in the world of another “niche” OS.

                                                                  As a sane person, I fetched the code from GitHub using fetch, extracted the tarball and ran make. Nothing happens. Let’s see the Makefile, shall we?

                                                                    Just because you mean to get other people’s code running in your specific environment doesn’t mean you’ll be greeted with the most straightforward way possible. You should always read such code before executing it, or better yet read the documentation, instead of navigating purely on assumptions about how everything is supposed to be. We don’t live in 2005 anymore.

                                                                    Everyone’s time is limited, and if you want FreeBSD to also host more applications, then take care of that yourself. There is no reason why others should have to take care of it, beyond being friendly. I could even imagine some people will possibly still disregard the patches because they don’t know the “true” way one supports applications on FreeBSD, given the devs have never used FreeBSD in their lives.

                                                                    Docker is an amazing and valid way of distributing and building code; maybe instead of dumping on other developers, try supporting Docker on FreeBSD.

                                                                    Having the same or similar development environments as all the other contributors rules out a great number of issues, allowing you to actually develop features and improve stability. A good example of that is Steam and Linux support: there has been a far greater extent of bugs on Linux hosts, even though fewer people were running the code - but everyone’s *nix has to be slightly different, making development much harder at times.

                                                                  First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

                                                                    For what reason? /app is a convention in the world of container images; it avoids typing long paths and will usually also contain other code. What you’re asking for is usually rather /opt/... and not /usr/local/bin/. Also, why bother with /usr/local/bin if all you’re doing is distributing a single, self-contained executable in a container image?

                                                                  […] please put yourself in my shoes. You’ve been looking around for a simple monitoring solution […]

                                                                    No, you should put yourself in other people’s shoes. You are acting rude, disrespectful and narrow-minded with such blog posts. If you write this way you’ll rather make people avoid FreeBSD even more. If you want to use something that supports fewer tools, but perhaps supports them better, then stick with the defaults as others do (nagios?).

                                                                    How about actually opening an issue before submitting a patch or, even worse, writing a rant? There hasn’t been a single request in Gatus about *BSDs; same for statping until now.

                                                                  1. 6

                                                                    For the record, I (the author) opened that issue in Statping and the other projects for adding BSD support and I’ve patched all of them in the last 10 days :)

                                                                    1. 3

                                                                        Yes, I know; sorry that I didn’t make that clear in my comment. I appreciate your work, though I stand by my comment that such rants cast a rather negative light on FreeBSD. Your issue was much friendlier, though it gave less context.

                                                                  1. 4

                                                                      This model series was literally the Thinkpad most of us teens at the local hackerspace had. It was affordable, had a reasonable battery lifetime, and while it was quite heavy for its size it could run Linux fairly well and was even powerful enough to run Sauerbraten.

                                                                      For Wi-Fi, we figured out that running sudo iwlist wlp4s0 scan in a loop would keep connections from suddenly dying - a miracle when you don’t know how to patch a BIOS and can’t afford another network card either.

                                                                      In the end I sold 1 Bitcoin I’d been gifted at a Bitcoin user group and bought a Thinkpad x230; I still use it as a homeserver.