1.  

    Every time someone creates a new Markdown variant, a kitten dies :(

    1.  

      You monster!

    1. 2

      Wacky dual-head display stuff is one of the main things that drove me to just sticking with GNOME plus some UI tweaks instead of spending hours crafting my own bespoke desktop environment. GNOME 2 from waaay back in the day had excellent multi-monitor support for the time (even better than Windows and Mac) and GNOME 3 had its issues over the past few years but is now pretty tolerable for my day-to-day stuff.

      1. 3

        Congrats on writing a Dockerfile.

        A few suggestions:

        • Specify which Debian you want. Latest will change.
        • apt-get update && apt-get install -y nodejs npm
          • Doing this in three steps is inefficient and can cause problems (a sketch combining the two suggestions follows below).
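
        A rough sketch of both suggestions combined (the bullseye-slim tag is only an example; pin whichever release you actually want):

        FROM debian:bullseye-slim
        # a single RUN keeps the update and install in one layer
        RUN apt-get update && apt-get install -y nodejs npm
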
        1. 6

          Even better, don’t write a Dockerfile at all. Use one of the existing official Node images, which let you specify both which Debian and which Node version you want.

          1. 1

            I tried this, but I didn’t get a shell; it would be nice to get it working.

            1. 4

              Those images have node set as the CMD, which means they will open the node REPL instead of a shell. You can either do docker run -it node:16-buster-slim /bin/bash to execute bash (or another shell of your choice), or you can make a Dockerfile using the node image as your FROM and add your own ENTRYPOINT or CMD so you don’t have to name the shell on the command line.
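
              For the second option, a minimal sketch of such a Dockerfile (the tag and WORKDIR are only examples):

              FROM node:16-buster-slim
              WORKDIR /app
              # override the base image’s CMD (node) with a shell
              CMD ["/bin/bash"]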

              1. 3

                Incidentally, to follow up now that I’ve remembered to write this: one reason it’s common for images to use CMD in this way is that it makes it easier to use docker run as a sort-of drop-in replacement for uncontained CLI tools.

                With an appropriate WORKDIR set, you can do stuff like

                alias docker-node='docker run --rm -v $PWD:/pwd -it my-node-container node'

                alias docker-npm='docker run --rm -v $PWD:/pwd -it my-node-container npm'

                and you’d be able to use them just like they were node/npm commands restricted to the current directory, more or less. It wouldn’t preserve stuff like cache and config between runs, though.

            2. 1

              I have to agree with this. I tend toward “OS” docker images (debian and ubuntu usually) for most things because installing dependencies via apt is just too damn convenient. But for something like a node app, all of your (important) deps are coming from npm anyway so you might as well use the images that were made for this exact use case.

            3. 3

              what problems?

              1. 3

                It creates 3 layers instead of one. You can only have 127 layers in a given docker image so it’s good to combine multiple RUN statements into one where practical.

                1. 3

                  Also, the 3 layers will take up unnecessary space. You can follow the Docker best practices and remove the cache files and apt lists afterwards - that will ensure your container doesn’t have to carry them at all.
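
                  Roughly what that looks like, as a sketch (the base image and packages are only examples):

                  FROM debian:bullseye-slim
                  # clean up apt's caches and lists in the same layer that created them
                  RUN apt-get update \
                      && apt-get install -y --no-install-recommends nodejs npm \
                      && rm -rf /var/lib/apt/lists/*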

                2. 2

                  Check out the apt-get section in the best practice guide: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

              1. 19

                I was more into Python at the time, but I read _why’s Poignant Guide to Ruby just for the entertainment. I know quite a few people who got their successful development careers started from that guide. (Usually coming from helpdesk roles, or systems/network administration.) And I exist in a relatively small bubble, so I’m sure the number of lives he markedly improved is well into the tens or hundreds of thousands.

                I wish there were more funny and inspirational guides not just for programming, but all technical topics.

                1. 18

                  You might enjoy Julia Evans’ zines! Not quite what you’re describing, but seems closer than most other reference material.

                1. 13

                  I wonder why the kernel community seems to have structural issues when it comes to filesystems - btrfs is a bit of a superfund site, ext4 is the best most people have, and ReiserFS’s trajectory was cut short for uh, Reasons. Everything else people would want to use (e.g. ZFS, but also XFS, JFS, AdvFS, etc.) is a hand-me-down from commercial Unix vendors.

                  1. 13

                    On all of the servers I deploy, I use whatever the OS defaults to for a root filesystem (generally ext4) but if I need a data partition, I reach for XFS and have yet to be disappointed with it.

                    Ext4 is pretty darned stable now and no longer has some of the limitations that pushed me to XFS for large volumes. But XFS is hard to beat. It’s not some cast-away at all, it’s extremely well designed, perhaps as well as or better than the rest. It continues to evolve and is usually one of the first filesystems to support newer features like reflinks.

                    I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions, my best guess as to why it hasn’t is some blend of “not-invented-here” and the fact that ext4 is good enough in 99% of cases.

                    1. 3

                      It would be great if the recent uplift of xfs also added data+metadata checksums. It would be perfect for a lot of situations where people want zfs/btrfs currently.

                      It’s a great replacement for ext4, but not really for those other situations.

                      1. 1

                        Yes, I would love to see some of ZFS’ data integrity features in XFS.

                        I’d love to tinker with ZFS more but I work in an environment where buying a big expensive box of SAN is preferable to spending time building our own storage arrays.

                        1. 1

                          I’m not sure if it’s what you meant, but XFS now has support for checksums for at-rest protection against bitrot. https://www.kernel.org/doc/html/latest/filesystems/xfs-self-describing-metadata.html

                          1. 2

                            This only applies to the metadata though, not to the actual data stored. (Unless I missed some newer changes?)

                            1. 1

                              No, you’re right. I can’t find it but I know I read somewhere in the past six months that XFS was getting this. The problem is that XFS doesn’t do block device management which means at best it can detect bitrot but it can’t do anything about it on its own because (necessarily) the RAIDing would take place in another, independent layer.

                        2. 3

                          I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions

                          It is the default in RHEL 8 for what it’s worth
                          https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_getting-started-with-xfs-managing-file-systems

                        3. 2

                          Yep. I’ve been using XFS for 20 years now when I need a single-drive FS, and I use ZFS when I need a multi-drive FS. The ext4 and btrfs issues did not increase my confidence.

                        1. 11

                          STARTTLS always struck me as a terrible idea. TLS everywhere should be the goal. Great work.

                          1. 6

                            Perhaps this is partially the result of a new generation of security researchers gaining prominence, but progressive insight from the infosec industry has produced a lot of U-turns. STARTTLS was obviously the way forward, until it wasn’t, and now it’s always been a stupid idea. Never roll your own crypto, use reliable implementations like OpenSSL! Oh wait, it turns out OpenSSL is a train wreck, ha ha why did people ever use this crap?

                            As someone who is not in the infosec community but needs to take their advice seriously, it makes me a bit more wary about these kinds of edicts.

                            Getting rid of STARTTLS will be a multi-year project for some ISPs, first fixing all clients until they push implicit TLS (and handle the case when a server doesn’t offer implicit TLS yet), then moving all the email servers forward.

                            Introducing STARTTLS had no big up-front costs …

                            1. 9

                               Regarding OpenSSL, I think you got some bad messaging. The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot; you should still use a well-tested implementation rather than rolling your own crypto, and OpenSSL is not the worst choice.

                              Regarding STARTTLS I think what we’re seeing here is that there was a time when crypto standards valued flexibility over everything else. We also see this in TLS itself where TLS 1.2 was like “we offer the insecure option and the secure option, you choose”, while TLS 1.3 was all about “we’re gonna remove the insecure options”. The idea that has gained a lot of traction is that complexity breeds insecurity and should be avoided, but that wasn’t a popular idea 20-30 years ago when many of these standards were written.

                              1. 2

                                 The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot; you should still use a well-tested implementation rather than rolling your own crypto, and OpenSSL is not the worst choice.

                                100%

                                I prefer libsodium over OpenSSL where possible, but some organizations can only use NIST-approved algos.

                            2. 3

                              Agreed. It always felt like a band-aid as opposed to a well thought out option. Good stuff @hanno.

                              1. 3

                                Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted. At that time, 99% of websites only used HTTPS on pages that accepted credit card numbers. (It was considered not worth the administrative and computing burden to encrypt a whole site that was open to the public to view anyway.)

                                STARTTLS was a clever hack to allow opportunistic encryption of mail over the wire. When it was introduced, getting the various implementations and deployments of SMTP servers (either open source or commercial) even to work together in an RFC-compliant manner was an uphill battle on its own. STARTTLS allowed mail administrators to encrypt the SMTP exchange where they could while (mostly) not breaking existing clients and servers, nor requiring the coordination of large ISPs and universities around the world to upgrade their systems and open new firewall ports.

                                Some encryption was better than no encryption, and that’s still true today.

                                 That being said, I run my own mail server and I only allow users to send outgoing mail on port 465 (TLS). But for mail coming in from the Internet, I still have to allow plaintext SMTP (and hence STARTTLS support) on port 25 or my users and I would miss a lot of messages. I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                                1. 2

                                  Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted.

                                  I largely got involved with computer security/cryptography in the late 2000’s, when we suspected a lot of the things Snowden revealed to be true, so “encrypt every packet securely” was my guiding principle. I recognize that wasn’t always a goal for the early Internet, but I was too young to be heavily involved then.

                                  Some encryption was better than no encryption, and that’s still true today.

                                   Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                                  I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                                  Hear hear!

                                  1. 2

                                     Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                                    That’s not quite true, it still provides an audit trail. The goal of STARTTLS, as I understand it, is to avoid trying to connect to a TLS port, potentially having to wait for some arbitrary timeout if a firewall somewhere is set to drop packets rather than reject connections, and then retry on the unencrypted path. Instead, you connect to the port that you know will be there and then try to do the encryption. At this point, a passive attacker can’t do anything, an active attacker can strip out the server’s notification that STARTTLS is available and leave the connection in plaintext mode. This kind of injection is tamper-evident. The sender (at least for mail servers doing relaying) will typically log whether a particular message was sent with or without STARTTLS. This logging lets you detect which messages were potentially leaked / tampered with at a later date. You can also often enforce policies that say things like ‘if STARTTLS has ever been supported by this server, refuse if it isn’t this time’.

                                     Now that TLS support is pretty much table stakes, it is probably worth revisiting this and defaulting to connecting on the TLS port. This is especially true now that most mail servers use some kind of asynchronous programming model, so trying to connect on port 465 and waiting for a timeout doesn’t tie up too many resources. It’s not clear what the failure mode should be, though. If an attacker can tamper with port 25 traffic, they can also trivially drop everything destined for port 465, so trying 465 and retrying on 25 if that fails is no better than STARTTLS (actually worse - rewriting packets is harder than dropping packets: one can be done by inspecting the header, the other requires deep-packet inspection). Is there a DNS record that can tell connecting mail servers to not try port 25? Just turning off port 25 doesn’t help because an attacker doing DPI can intercept packets for port 25 and forward them over a TLS connection that it establishes to 465.

                              1. 3

                                Basically this is a guide to setting up Mailu: https://mailu.io/1.7/

                                Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker. But I’ll definitely give it some serious thought when it’s time to rearrange the deck chairs.

                                I assume the author’s definition of self-hosted is “running everything off a raspberry pi hanging off residential cable internet” because any reasonable VPS provider is going to have clean IP blocks and will help you if their IPs are blacklisted anywhere. Thus negating the need to rely on a third-party outgoing relay.

                                A good chunk of the article is spent configuring a backup SMTP server and syncing IMAP… I feel like the author didn’t know that SMTP is already a store-and-forward system… Yes, you can have multiple incoming SMTP servers for high-availability but unless you’re a big company or an ISP you probably don’t need them. If your mail server is down, any RFC-compatible system will queue and retry delivery, usually for up to 24 hours.

                                Also fetching mail from the backup server via IMAP seems bonkers to me… again, SMTP is store-and-forward, the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.

                                1. 5

                                  Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker.

                                  What overhead? It’s just a glorified fork with chroot?

                                  1. 4

                                    I feel like the author didn’t know that SMTP is already a store-and-forward system

                                    I did in fact know this 🙂 But it’s not enough for my scenario. To expand, my personal self-hosted setup runs off of a machine in my house. It’s possible that if I were outside of the country and something catastrophic happened, it could be offline for an indeterminate amount of time. Possibly weeks. MTAs will stop retrying after some time and bounce the mail. So for my scenario, having a reliable backup MX is crucial.

                                    I had an interesting discussion on Reddit about exactly this, with some anecdotes about how common MTAs handle downed MX servers: https://reddit.com/r/selfhosted/comments/ogdheh/setting_up_reliable_deliverable_selfhosted_email/h4itjr5

                                    the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.

                                     An SMTP relay with an infinite retention time would be a good way to achieve this as well. Though, with Google Workspaces already set up on all my domains, I didn’t want to spend additional effort reconfiguring all my MX records to point to a separate SMTP server, let alone paying for/setting up such a service. So this bonkers IMAP setup was the way for me!

                                    1. 1

                                      Historically, spammers would target lower priority MX because they would often not have the anti spam measures configured. It looks like in your scenario you won’t get your own anti spam measures applied, but you will get Google’s, whether you want it or not.

                                    2. 2

                                      It only has 1G of memory so I can’t really afford the overhead of docker.

                                      I’ve forgotten how much memory my VPS has but it’s not likely more than 2G and I think we are running a couple of Docker containers. Is the daemon really RAM hungry?

                                      1. 3

                                        I checked on one of my VPSs and it uses 50MB. I don’t think that that is too bad. Could be less, sure, but not the end of the world.

                                      2. 1

                                        It only has 1G of memory so I can’t really afford the overhead of docker.

                                        I’ve run multiple docker containers on a host with 1G of memory. Other than the daemon itself, each container is just a wrapper for cgroups.

                                      1. 1

                                        I would do naughty things for a tool that can also show the age of dirs/files in addition to just the size. Say I find a database dump taking up 32 GB of disk on a shared server. If it’s 5 days old, I’d probably leave it alone. If it’s 5 years old, 99% chance I can delete it without hesitation. But I currently have to jump to another terminal window just to find that out. It would make my day-to-day job a ton easier if I could just d it right then and there.

                                        1. 3

                                          Version 2.16 has the -e flag to have it read mtimes during scanning; you have to press ‘m’ once it loads to actually display / sort by them.

                                        1. 2

                                          I wrote something quite similar to this around a year ago. I wanted a personal wiki for my notes. I had been using Dokuwiki for at least a decade or so, but wanted something smaller, simpler, and with first-class Markdown support.

                                          I had started out wanting the pages to just be files on the filesystem for rock-solid future-proofing, but between page history, indexing, full-text search, and all the corner cases involved, I eventually discovered that I would basically be re-implementing Dokuwiki, which clashed with my goal of smaller and simpler.

                                          It turned out just INSERTing the documents into an sqlite3 database and indexing them with the built-in FTS5 feature was pretty trivial. The only other major thing I had to bolt on was a Markdown parser modified to understand [[wiki links]].

                                          I don’t have the code posted anywhere because it’s nowhere near respectable enough for public consumption.

                                          1. 2

                                            I’ve patched the markdown-it.py parser to understand intra links for the web GUI I wrote for bibliothecula, in case you want to check it out for inspiration.

                                          1. 38

                                            “The Gang Builds a Mainframe”

                                            1. 28

                                              Ha ha! I don’t think the mainframe is really a good analogue for what we’re doing (commodity silicon, all open source SW and open source FW, etc.) – but that nonetheless is really very funny.

                                              1. 7

                                                It makes you wonder what makes a mainframe a mainframe. Is it architecture? Reliability? Single-image scale-up?

                                                1. 26

                                                  I had always assumed it was the extreme litigiousness of the manufacturer!

                                                  1. 3

                                                    Channel-based IO with highly programmable controllers and an inability to understand that some lines have more than 80 characters.

                                                    1. 1

                                                      I think the overwhelming focus of modern z/OS on “everyone just has a recursive hierarchy of VMs” would also be a really central concept, as would the ability to cleanly enforce/support that in hardware. (I know you can technically do that on modern ARM and amd64 CPUs, but the virtualization architecture isn’t quite set up the same way, IMVHO.)

                                                      1. 2

                                                        I remember reading a story from back in the days when “Virtual Machine” specifically meant IBM VM. They wanted to see how deeply they could nest things, and so the system operator recursively IPL’d more and more machines and watched as the command prompt changed as it got deeper (the character used for the command prompt would indicate how deeply nested you were).

                                                        Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                                        1. 2

                                                          Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                                          This sounds like the plot of a sci-fi short story.

                                                          1. 3

                                                            …and overhead, without any fuss, the stars were going out.

                                                    2. 1

                                                      I’d go with reliability + scale-up. I’ve heard there’s support for like, fully redundant CPUs and RAM. That is very unique compared to our commodity/cloud world.

                                                      1. 1

                                                        If you’re interested in that sort of thing, you might like to read up on HP’s (née Tandem’s) NonStop line. Basically at least two of everything.

                                                      2. 1

                                                        Architecture. I’ve never actually touched a mainframe computer, so grain of salt here, but I once heard the difference described this way:

                                                        Nearly all modern computers from the $5 Raspberry Pi Zero on up to the beefiest x86 and ARM enterprise-grade servers you can buy today are classified as microcomputers. A microcomputer is built around one or more CPUs manufactured as an integrated circuit. This CPU has a static bus that connects the CPU to all other components.

                                                        A mainframe, however, is built around the bus. This allows not only for the hardware itself to be somewhat configurable per-job (pick your number of CPUs, amount of RAM, etc), but mainframes were built to handle batch data processing jobs and have always handily beat mini- and microcomputers in terms of raw I/O speed and storage capability. A whole lot of the things we take for granted today were born on the mainframe: virtualization, timesharing, fully redundant hardware, and so on. The bus-oriented design also means they have always scaled well.

                                                  1. 1

                                                    Yeah, but can it run Doom?

                                                    1. 2

                                                      Doom needed four floppies. Doom 2 took 5.

                                                      1. 1

                                                        Perhaps it could be loaded into memory from a DAT cassette.

                                                        1. 1

                                                          Most of the Doom data is game assets and levels though, I think? You might be able to squeeze the game engine and a small custom level in 400k.

                                                          1. 1

                                                            Yes it is. But the engine itself is about 700k. The smallest engine (earliest releases) was a little over 500k. You could probably build a smaller one with modern techniques and a focus on size though.

                                                            1. 2

                                                              Good news, ADoom for Amiga is only 428k! Bad news, Amigas only have double density FDDs so you only have 452k for the rest of the distro.

                                                      1. 1

                                                        I’ve been using Linux on the desktop on a daily basis for over 20 years and fully agree with most of the author’s other points. Ever since GNOME 3 was first released, it feels to me like the devs have been aiming to make GNOME a tablet-like experience on the desktop, which is not something I ever wanted. Improvements to the experience seem to be entirely experimental and without direction or cohesive vision. Pleas for options and settings to make GNOME behave similarly to conventional (read: tried and true) desktop idioms fall on deaf ears.

                                                         I stuck with XFCE for a long time, but for the past couple of years I’ve tried seriously to make peace with GNOME. First it was Ubuntu’s take, which I found palatable with a handful of extensions. I then switched over to Pop OS but again had to plaster over the deficiencies with extensions. But having to wrangle extensions just to get a usable desktop isn’t my idea of a good time.

                                                        Earlier this week I decided to give KDE (or is it “Plasma” now?) another try and have so far been pretty happy with it. It seems like they have recently stripped back a lot of the fluff while being able to keep all of the customization that I require. I gave up on KDE in the past due to outright buggy behavior and crashes in common workflows but haven’t hit anything serious yet. Crossing my fingers that it stays that way for a while.

                                                        1. 4

                                                          A lot of the criticism I see leveled against Gnome centers around the way they often make changes that impact the UX but don’t allow long-time users to opt-out of these new changes.

                                                          The biggest example for me: There was a change to the nautilus file manager where it used to be you could press a key with a file explorer window open, and it would jump to the 1st file in the currently open folder whose name starts with that letter or number. They changed it so it opens up a search instead when you start typing. The “select 1st file” behaviour is (was??) standard behavior in Mac OS / Windows for many many years, so it seemed a bit odd to me that they would change it. It seemed crazy to me that they would change it without making it a configurable option, and it seemed downright Dark Triad of them that they would make that change, not let users choose, and then lock / delete all the issue threads where people complained about it.

                                                          It got to the point where people who cared, myself included, started maintaining a fork of nautilus that had the old behavior patched in, and using that instead.

                                                          What’s stopping people who hate the new & seemingly “sadistic” features of gnome from simply forking it? Most of the “annoying” changes, at least from a non-developer desktop user’s perspective, are relatively surface level & easy to patch.

                                                          1. 3

                                                            Wow, I thought I was the only one who thought that behavior was crazy. Since the early 90’s, my workflow for saving files was: “find the dir I want to save the file in,” then “type in the name.” In GNOME (or GTK?) the file dialog forces me to reverse that workflow, or punishes me with extra mouse clicks to focus the correct field.

                                                            I have never wanted to use a search field when trying to save a file.

                                                            1. 4

                                                              Wow, that thread is cancer.

                                                              1. 5

                                                                 To me it just looks like 99% of all Internet threads where two or more people hold slightly differing positions and are just reading past each other in an effort to be right. At least there’s a good amount of technical discussion in there; as flame wars go, this is pretty mild.

                                                            1. 1

                                                              I’m not very active in this space anymore but my impression is that most people moving on from the “make an LED blink” level of expertise end up on the Arduino VSCode extension (https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-arduino) or Platform.io (https://platformio.org/)

                                                              1. 2

                                                                This problem, and others, can be solved by using official distribution images instead of those provided by the Raspberry Pi project. I’m using the official Fedora 33 ARM64 (aarch64) image for example, works perfectly on my Raspberry Pi 3B+ and has the exact same packages (including kernel!) as the x86_64 version of Fedora.

                                                                See https://fedoraproject.org/wiki/Architectures/ARM/Raspberry_Pi

                                                                1. 3

                                                                  Do the distro-originated images come with all the same raspberry-pi configuration, hardware drivers, and firmware gubbins as Raspbian? That’s the main reason I run Raspbian, aside from it having more or less guaranteed support when things break and I need to do some googlin’ to fix it.

                                                                  1. 2

                                                                    Generally speaking? No.

                                                                    Raspbian is the only distro that provides truly first class support for the pi’s hardware.

                                                                    Graphics support is becoming more widespread at least, and there are bits and bobs of work happening in various distros.

                                                                    But from what I’ve seen most distros are optimizing for a good desktop experience on the pi.

                                                                    1. 1

                                                                       At least on Fedora you get a kernel very close to upstream Linux, also for the Pi, so no crazy stuff, and everything I use works out of the box (LAN, WiFi). That is the reason why the Raspberry Pi 4, for example, still doesn’t work in Fedora; it requires more stuff to be properly upstreamed: https://github.com/lategoodbye/rpi-zero/issues/43

                                                                  1. 1

                                                                    I hadn’t realized there was an 8GB Pi 4 – the announcement post notes the SoC could support up to 16GB, and the limit is how much they can get in a single package. An 8GB option for the keyboard-shaped Pi 400 (for a few more bucks, of course) would be interesting too.

                                                                    1. 2

                                                                      I’ve had an 8 GB version since spring/summer at least and it’s a wonderful cheap low-power Linux box for a variety of development duties and experiments. Main downside is that it absolutely needs a decent heat sink if you’re doing anything CPU-intensive or else the CPU speed gets throttled.

                                                                    1. 1

                                                                      Unrelated to the news, but LWN content is nearly impossible to read on a phone…very annoying

                                                                      1. 4

                                                                           I think it’s mostly because it’s an email. Otherwise, that’s usually OK (not great).

                                                                        I wonder if they would accept help to fix that.

                                                                        1. 3

                                                                          I think it’d be hard trying to intelligently format a 72-column fixed plain text email into something that isn’t a VT100. It’d probably be easier if it was rich text (or at least designed to reflow) in the first place.

                                                                        2. 2

                                                                           I’m using wallabag to bookmark the content and read on my phone, usually much later. I also think that LWN works OK with Firefox reader view.

                                                                          1. 1

                                                                            Thanks for the suggestion. I will give it a try although I’m using Firefox less frequently these days

                                                                          2. 1

                                                                            Not sure which phone you have but mine is able to display the original article just fine in horizontal mode. Or in either orientation with Firefox reader view.

                                                                            1. 1

                                                                              You can always switch to the desktop version.

                                                                            1. 15

                                                                               My experience with Bash is: “avoid it at any cost”. Unless you are writing very OS-specific stuff, you should always avoid writing Bash.

                                                                               Bash efficiency is a fallacy; it never actually pays off. Bash is sticky: it will stay with you until it transforms into a big black hole of tech debt. It should never be used in a real software project.

                                                                              After years of Bash dependency we realized that it was the biggest point of pain for old and new developers in the team. Right now Bash is not allowed and new patches introducing new lines of Bash need to delete more than what they introduce.

                                                                              Never use Bash, never learn to write Bash. Keep away from it.

                                                                              1. 4

                                                                                What do you use instead?

                                                                                1. 8

                                                                                  Python. Let me elaborate a little bit more.

                                                                                   We are a Docker/Kubernetes shop; we started building containers with the usual docker build/tag/push, plus a test in between. We had one image, and one shell script did the trick.

                                                                                   We added a new image, and the previous one gained a parameter which lived in a JSON file and was extracted using jq (first dependency added). Now we had a loop with 2 images being built, tested, and pushed.

                                                                                   We added 1 stage: “release”. Docker now had build/tag/push, test, then tag/push (to release). And we added another image, the previous images gained more parameters, something was curled from the public internet, and the response piped into jq. A version docker build-arg was added to all of the images; this version was some sort of git describe.

                                                                                   2 years later, the image building and testing process was a disaster. Impossible to maintain, all errors caught only after the images were released, and the logic to build the ~10 different image types was spread across multiple shell scripts, CI environment definitions, and docker build-args. The images required a very strict order of operations to build: first run script build, then run script x, then tag something… etc.

                                                                                  Worst of all, we had this environment almost completely replicated to be able to build images locally (when building something in your own workstation) and remotely in the CI environment.

                                                                                  Right before the collapse, I requested to management 5 weeks to fix this monstrosity.

                                                                                  1. I captured all the logic required to build the images (mostly parameters needed)
                                                                                  2. I built a multi-stage process that would do different kind of tasks with images (build, tag, push)
                                                                                  3. I added a Dockerfile template mechanism (based on jinja2 templates)
                                                                                  4. Wrote definitions (a pipeline) of the process or lifecycle of an image. This would allow us to say, “for image x, build it, push it into this repo” or “for this image, in this repo, copy it into this other repo”
                                                                                  5. I added multiple builder implementations: the base one is Docker, but you can also use Podman and I’m planning on adding Kaniko support soon.
                                                                                  6. I added parallelized builds using multi-processing primitives.

                                                                                  I did this in Python 3.7 in just a few weeks. The most difficult part was to migrate the old tightly coupled shell-scripting based solution to the new one. Once this migration was done we had:

                                                                                     1. The logic to build each image was defined in an inventory file (yaml, not great, but not awful)
                                                                                     2. If anything needs to be changed, it can be changed in a “description file”, not in shell scripts
                                                                                     3. The same process can be run locally and in the CI environment; everything can be tested
                                                                                     4. I added plenty of unit tests to the Python codebase. Monkeypatching is crucial to test when you have things like docker build in the middle, although this can be fixed by running tests using the noop builder implementation.
                                                                                     5. Modularized the codebase: parts of the generic image process pipeline are confined to their own Python modules. Everything that’s application-dependent lives in our repo, and uses the other modules we build. We expect those Python modules to be reused in future projects.
                                                                                     6. It is not intimidating to make changes; people are confident about the impact of their changes, meaning that they feel encouraged to make changes, improving productivity**

                                                                                  Anyway, none of this could be achieved by using Bash, I’m pretty sure about it.

                                                                                  1. 13

                                                                                    It sounds to me like your image pipeline was garbage, not the tool used to build it.

                                                                                    I’ve been writing tools in bash for decades, and all of them still run just fine. Can’t say the same for all the python code, now that version 2 is officially eol.

                                                                                    1. 3

                                                                                      bash 3 broke a load of bash 2 scripts. This was long enough ago that it’s been largely forgotten.

                                                                                      1. 1

                                                                                        I agree with you, the image pipeline was garbage, and that was our responsibility of course. We can write the same garbage in Python no doubt.

                                                                                         Bash, however, definitely does not encourage proper software engineering, and it makes software impossible to maintain.

                                                                                  2. 1

                                                                                       I can confirm this. I had to replace a whole buildsystem made in bash with cmake roughly 2 years ago, and bash still contaminates many places it should not be involved in, with zero tests.

                                                                                  1. 25

                                                                                    Negatively: Drinking when I got stressed. Now I drink all the time, to the point where it’s an unreasonable portion of my outgoing expenditure and I’ll usually pour myself something to take the edge off before standup. If I could offer any advice to anyone reading; please only drink alcohol during fun, social occasions.

                                                                                    1. 10

                                                                                      When I was a cable guy, the only outlet I had was drinking. 4 out of 5 mornings I had a hangover, was still buzzed, or even drunk. My (horrible, universally hated) boss reprimanded me for it multiple times a month. The only thing that stopped me was quitting that job in June.

                                                                                         With some help (a week in the hospital and a lung injury), I’ve also quit smoking cigarettes and avoid nicotine. I now have a very nice and infinitely more affordable green tea habit.

                                                                                         I still drink, but I avoid keeping liquor around, and I’ve ceased my habit of staying drunk or getting shitfaced regularly. Stress kills, folks.

                                                                                      1. 5

                                                                                        Thanks for sharing. I think avoiding keeping liquor around is a good point I hadn’t really considered, by now it’s part of the furniture. Maybe I’ll give my liquor shelf to my parents.

                                                                                      2. 11

                                                                                        A relative taught me these rules when I was a kid:

                                                                                        • Never drink on an empty stomach.
                                                                                        • Never drink alone.
                                                                                        • Drink for the taste, not for the effect.

                                                                                        Works for me.

                                                                                        1. 6

                                                                                          I’ve heard these rules a couple of times, and, to me, they always sound patronizing. It feels on par with telling an addict to “just stop”. How can the advice work when you want to drink on an empty stomach, alone, and for the effect, and it’s out of your control?

                                                                                          1. 16

                                                                                            These aren’t guidelines for an alcoholic, they’re guidelines to prevent one from becoming an alcoholic.

                                                                                            1. 9

                                                                                              Sorry, I realized my first comment was a little intense.

                                                                                               I understand this. I just don’t think they’re very good guidelines – they’re more of a description of “patterns of people who aren’t alcoholics”. I think what makes someone an alcoholic is a very complex, and often genetic, thing. For some, these rules are essentially impossible to follow from the get-go. Additionally, someone can choose to break all these rules all the time, and still not become an alcoholic.

                                                                                              1. 2

                                                                                                I get your point, but if it’s genetic, then a list of rules won’t make a difference one way or the other.