Threads for bityard

  1. 1

    The hilarity of this (and it is hilarious) is kinda ruined by the first few frames of the GIF showing the (assumed) developer in underwear (even though it is blurry for some reason).

    1. 1

      Add pants or blur out everything from the waist down for extra safety on Zoom calls.

      ;)

    1. 11

      I ran out of patience to finish reading the article due to the exasperated tone throughout and the author’s steadfast insistence that they know better than the decades of systems programmers who came before them, but I’ll state what I think should be obvious:

      If you want to build something on a system with a legacy design, you shouldn’t be too surprised that you have to use legacy tools and interfaces to get the job done.

      The author states several times that the people who wrote C and C-based OSes made bad choices and invented bad designs, which is simply not true. Those people were not idiots, they were designing things according to existing constraints, concerns, and goals all colored by the state of the art at the time. Where the state of the art was generally, “Oh, you want that program written for this computer to run on that other computer too? Have fun re-writing the whole thing from scratch!”

      The person who wrote this article is missing entire decades of context, and without that context, it’s very easy to dismiss mistakes in design as obvious oversights or incompetence. I look forward to the day that someone looks at the author’s code a few years from now and says, “wow, what a flaming pile of yuck!”

      1. 12

        Granted, hindsight is 20/20, and we should not chastise past efforts just because the people behind them lacked our hindsight.

        But.

        Hindsight is 20/20, so how about exploiting it? While it is normal for legacy systems to suck by current standards, they still suck by current standards. Insisting that we be nice to legacy designers turns our attention away from the fact that they lacked our hindsight. Insisting that legacy systems used to be good hides the fact that they are now bad.

        If we want to have a chance of disentangling ourselves from legacy crap, we first need to recognise that it is legacy crap, and build up the emotional energy necessary to take action and try & make it better. I don’t care that past giants did the best they could. The best they could is no longer enough, and we should stand on their shoulders and do better.

        1. 4

          The best they could is no longer enough, and we should stand on their shoulders and do better.

          Nobody is saying we shouldn’t! The way I see it, the problems here are obvious to those paying attention. Complaining about the problems, whether the tone is dispassionate or angry, doesn’t actually help form a solution, and becoming angry over the problems helps even less by emotionally exhausting everyone.

          As @andyc alluded to in their sibling comment, this is a common pattern in computing. Much like the C ABI creates a form of crufty legacy glue between applications, HTTP and TCP have formed a similar bottleneck in networking. Every couple of weeks another internet loud-person comes to the realization in anger that the reason why so much stuff gets piped over HTTP is because it’s the least common denominator let through by middleboxes. And as much as I’m sympathetic when another person comes to this well-known conclusion in anger, it doesn’t change the reality: I can’t use SCTP because of middleboxes; I’m stuck on port 443 because of middleboxes; latency is really high on my video call because I’m NATed behind middleboxes. You can be angry at the middleboxes or accept/try to work with reality, it’s your choice.

          1. 9

            The way I see it, the problems here are obvious to those paying attention. Complaining about the problems, whether the tone is dispassionate or angry, doesn’t actually help form a solution

            Not everyone is paying attention. Complaining raises awareness, which is a necessary step towards forming a solution. If no one complains, few will ever know. If no one knows, no one will care. If no one cares, the problem does not get fixed.

            Important problems need to be complained about.

            Much like the C ABI creates a form of crufty legacy glue between applications, HTTP and TCP have formed a similar bottleneck in networking.

            There’s a huge difference between C and HTTP. C is basically the only way for languages to talk to each other in the same process. It’s bad and crufty and legacy, but it’s also all we have.

            HTTP on the other hand is not the only thing we have. We have IP. We have UDP. We have TCP. And those middle boxes are forcing me to use complex HTTP tunnels where I could have sent UDP packets instead. In many cases this kills performance to such an extent that some programs that would have been possible with UDP, simply cannot be done with the tunnel. And bottleneck wise, IP, TCP, and UDP are much narrower than HTTP.

            You can be angry at the middleboxes or accept/try to work with reality, it’s your choice.

            I’m not sure you realise how politically charged this statement is. What you just wrote suspiciously sounds like “There is no alternative”. Middle boxes aren’t like gravity. Humans put them there, and humans can remove them. If they’re a problem, complaining about them can raise awareness, and hopefully cause people to make better middle boxes.

            On the other hand, if everyone thinks middle boxes are “reality”, and that the only choice is to work with them, that will make it so. I can’t have that, so I’ll keep complaining whenever I have to do some base64-JSON->deflate->HTTP insanity just to copy some Protobuf from one machine to another (real story).

            1. 2

              Not everyone is paying attention. Complaining raises awareness, which is a necessary step towards forming a solution. If no one complains, few will ever know. If no one knows, no one will care. If no one cares, the problem does not get fixed.

              The people with the knowledge and ability to fix the situation, or to provide workarounds, are often the people who are aware of the problem. I firmly believe that the endless complaining in technical circles on the internet doesn’t actually help raise awareness to folks unaware of or uninterested in the issue; once you become aware you understand the issue fairly quickly. I’ve always viewed it as a form of venting rather than an honest attempt to fix things, all about healing the self and not about fixing the problem.

              Reality has a surprising amount of detail and I can guarantee you I can find a domain expert in any domain who can breathlessly fire off a list of things broken about their domain. Yet you or I who are in no position to change those things nor really have much more than a surface-level interest in them don’t need to be aware of every one of those problems. If every problem was shouted from every rooftop, I’m pretty sure humanity would go deaf.

              I’m not sure you realise how politically charged this statement is. What you just wrote suspiciously sounds like “There is no alternative”. Middle boxes aren’t like gravity. Humans put them there, and humans can remove them. If they’re a problem, complaining about them can raise awareness, and hopefully cause people to make better middle boxes.

              I don’t mean to draw any parallel to world politics, though humans being human there will always be overlap. That being said, I think understanding why it’s not trivial to remove middleboxes from the equation is a very important part of understanding the problem here and exactly why I find so many rants unhelpful. The reality is that hardware manufacturers are trying to cut costs and hire cheap, understaffed development teams who make crappy middleboxes which are then used by ISPs who will attempt to use a middlebox forever until it either breaks on them or someone threatens them with legal action, because margins are so low. This is exacerbated by the ecosystem of ISPs in an area. There’s more, it’s a complicated topic, but all of that gets lost if you’re angrily ranting. It’s helpful to understand the incentives/problems that created this broken state so we don’t inadvertently create another set of broken incentives when the time/opportunity comes to fix them. In networking that time is around the corner as QUIC/HTTP3 is increasingly being proposed as the way forward to allow the sorts of applications that the old Internet envisioned. Understanding the problem well here is key so we don’t run into yet more ossification.

              On the other hand, if everyone thinks middle boxes are “reality”, and that the only choice is to work with them, that will make it so. I can’t have that, so I’ll keep complaining whenever I have to do some base64-JSON->deflate->HTTP insanity just to copy some Protobuf from one machine to another (real story).

              Accepting that something is “reality” doesn’t stop folks from trying to improve the situation. You don’t constantly need to beat the drum of how broken something is just to fix it. Accepting reality also means empathizing with the past decisions that brought us into the current state. For example, I think IPv6 should be adopted by everyone and everywhere; NAT is a silly crutch stopping middleboxes from having to leave IPv4 addresses behind (and raising the value of existing blocks owned by certain entities, but I digress.) But writing a long, angry rant about how NAT sucks doesn’t help anyone; it doesn’t help my family overcome their NATs nor does it help me come up with a less complicated network topology. In the meantime Wireguard, Zerotier, and Yggdrasil are taking matters into their own hands and helping bring the full internet back despite middleboxes. That doesn’t mean I’ll ever stop pushing netops to support IPv6 nor will I stop pushing netops to let non-TCP and non-UDP traffic through their middleboxes. But there’s something to be said about actually solving a problem and not just complaining about it. In fact, I’d say that trying to solve a problem despite the broken state of the problem is perhaps the strongest statement on how broken things are. “Look at this thing!” I say, “it sucks so much I had to route around it”.

              Having said that, I run up against my own screed. Programmers love to complain and rant, more so than any other domain that I’m familiar with. I need to accept the reality that this hasn’t changed in the past and will not change going forward. Still, I voice my opinion about it from time to time. Overall I’m happy that this site has a rant tag because I can filter out the rants from my headspace and only view them when I want to (like now).

              1. 3

                The people with the knowledge and ability to fix the situation, or to provide workarounds, are often the people who are aware of the problem.

                Political leaders are often the only folks who can make short term decisions on foreign policy. Yet their decisions are often influenced by what they believe their people will think of their decision. If they anticipate that a given decision will be unpopular, they are more likely to not do it. Thus, discussing foreign policy in a bar or on online forums does influence foreign policy. The effect is very very diffuse, but it’s real.

                People with knowledge and ability to fix the situation, if they’re not psychopaths, are likely to empathise with whatever they believe the “normies” would feel about it, making it a similar situation to politicians.

                Accepting that something is “reality” doesn’t stop folks from trying to improve the situation.

                The choice of words is important. You didn’t just say “accept reality”, you also said “work with reality”, which generally implies not only accepting what reality is, but also accepting that you’re powerless to change it. Directed at someone else, it also tends to chastise them for being idealistic fools.

                1. 1

                  Thus, discussing foreign policy in a bar or on online forums does influence foreign policy. The effect is very very diffuse, but it’s real.

                  This is where we disagree. You think it’s real but I think it’s not. I think the world is full of people being unhappy about things, and without a concerted political front you’ll just be that person on their soapbox ranting at crowds; the silent majority ignores the soapbox ranter. Anyway, this is straying out of technology into politics so I’ll stop here.

                  Directed at someone else, it also tends to chastise them for being idealistic fools.

                  Fools no, but idealistic, yes. I know that’s anathema here on Lobsters where everyone wants to resonate with their code and have their personal values reflected in and pushed by their work, but I’m comfortable with that not being the case for myself. I’m very happy not having opinions about most things and accepting that there’s a Chesterton’s Fence to most issues in reality.

            2. 2

              Yes definitely, I agree we should try to do better but not denigrate the work of the past …

              Although in thinking about this more, I think there is a pretty important difference between networking and software. The incentives are mixed in both cases, but I’d say:

              • In networking the goal is to interoperate … so people make it happen, even the companies trying to make money.
              • In software interoperability is often an anti-goal. There is a big incentive to create walled gardens

              But yeah overall I really hope everyone writing software thinks about the system as a whole, the ecosystem, and how to interoperate. Not just the concerns of their immediate work

          2. 11

            The person who wrote this article is missing entire decades of context, and without that context, it’s very easy to dismiss mistakes in design as obvious oversights or incompetence. I look forward to the day that someone looks at the author’s code a few years from now and says, “wow, what a flaming pile of yuck!”

            I can’t express enough how much I agree with this. C was designed for a specific problem and solved it well. Now, 40 years later, people complain about its deficiencies, yet barely question the fact that we (as software developers) haven’t come up with any usable and widely accepted alternative to binary interfaces. Apparently this industry isn’t as innovative as it likes to perceive itself…

            1. 5

              Yes. Today we have byte-addressable two’s complement machines, but back when C was first designed? There were computers with addressable units from 9 to 66 bits in size and the C compilers that K&R put out were retargeted (by others) for such machines. By the time 1989 rolled around, the standards committee didn’t want to break any existing C code, so we got the standard we got. It was a different time back then.

          1. 10

            https://numbr.dev - web version of soulver. Calculator with currency rates.

            1. 1

              This is pretty neat! I may never need a spreadsheet again.

              Edit: I was going to contribute some trivial grammar fixes (in the Tips & Tricks) but it doesn’t look like the full app is on GitHub, is that true or am I just missing something? And there’s no index.js?

              1. 1

                Thanks! Right now only the core is published. The UI is still in development, so I’ll release it later.

            1. 5

              doas in its effort to be a sudo replacement without all the bloat, neglects to implement this.

              It wasn’t neglected at all. Insults are the definition of useless bloat.

              1. 23

                This is what is known as “dry humor”

                1. 2

                  Yeah, I thought that was obvious when I was writing it, but apparently not. Clarified my own opinion on it in another comment.

                  1. 0

                    Touché

                  2. 5

                    Author here. Agreed, it’s entirely useless. I threw this post together for fun one day a few years ago, and it’s meant entirely as tongue in cheek. I don’t use it myself because it’s pointless, and one extra thing I’d need to set up on a new system for no gain.

                  1. 1

                    Every time someone creates a new Markdown variant, a kitten dies :(

                    1. 3

                      You monster!

                    1. 2

                      Wacky dual-head display stuff is one of the main things that drove me to just sticking with GNOME plus some UI tweaks instead of spending hours crafting my own bespoke desktop environment. GNOME 2 from waaay back in the day had excellent multi-monitor support for the time (even better than Windows and Mac) and GNOME 3 had its issues over the past few years but is now pretty tolerable for my day-to-day stuff.

                      1. 3

                        Congrats on writing a Dockerfile.

                        A few suggestions:

                        • Specify which Debian you want. Latest will change.
                        • apt-get update && apt-get install -y nodejs npm
                          • Doing this in three steps is inefficient and can cause problems (see the sketch below).
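
                        For example, a minimal sketch combining both suggestions (the Debian release tag below is just a placeholder; pick whichever release you actually want to track):

                        # pin a specific release instead of latest, which moves over time
                        FROM debian:bullseye-slim

                        # a single RUN keeps update and install together in one step/layer
                        RUN apt-get update && apt-get install -y nodejs npm
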
                        1. 6

                          Even better, don’t write a Dockerfile at all. Use one of the existing official Node images, which let you specify both which Debian release and which Node version you want.

                          1. 1

                            I tried this but I didn’t get a shell, it would be nice to get it working.

                            1. 4

                              Those images have node set as the CMD, which means it will open the node REPL instead of a shell. You can either do docker run -it node:16-buster-slim /bin/bash to execute bash (or another shell of your choice) instead, or you can make a Dockerfile using the node image as your FROM and add an ENTRYPOINT or CMD instead to eliminate the need to invoke the shell.
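
                              A minimal sketch of that second option, assuming a hypothetical server.js as the app’s entry point (the image tag and filename are placeholders):

                              FROM node:16-buster-slim
                              WORKDIR /app
                              COPY . .
                              # run the app directly instead of dropping into the node REPL
                              CMD ["node", "server.js"]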

                              1. 3

                                Incidentally, to follow up as I remembered to write this, one reason that it’s common for images to use CMD in this way is that it makes it easier to use docker run as sort-of-drop-in replacements of uncontained CLI tools.

                                With an appropriate WORKDIR set, you can do stuff like

                                alias docker-node='docker run --rm -v $PWD:/pwd -it my-node-container node'

                                alias docker-npm='docker run --rm -v $PWD:/pwd -it my-node-container npm'

                                and you’d be able to use them just like they were node/npm commands restricted to the current directory, more or less. It wouldn’t preserve stuff like cache and config between runs, though.

                            2. 1

                              I have to agree with this. I tend toward “OS” docker images (debian and ubuntu usually) for most things because installing dependencies via apt is just too damn convenient. But for something like a node app, all of your (important) deps are coming from npm anyway so you might as well use the images that were made for this exact use case.

                            3. 3

                              what problems?

                              1. 3

                                It creates 3 layers instead of one. You can only have 127 layers in a given docker image so it’s good to combine multiple RUN statements into one where practical.

                                1. 3

                                  Also the 3 layers will take unnecessary space. You can follow the docker best practices and remove the cache files and apt lists afterwards - that will ensure your container doesn’t have to carry them at all.
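
                                  Something along these lines (same packages as the example above):

                                  # one layer: update, install, and clean up the apt lists in a single step
                                  RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*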

                                2. 2

                                  Check out the apt-get section in the best practice guide: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

                              1. 19

                                I was more into Python at the time, but I read _why’s Poignant Guide to Ruby just for the entertainment. I know quite a few people who got their successful development careers started from that guide. (Usually coming from helpdesk roles, or systems/network administration.) And I exist in a relatively small bubble, so I’m sure the number of lives he markedly improved is well into the tens or hundreds of thousands.

                                I wish there were more funny and inspirational guides not just for programming, but all technical topics.

                                1. 18

                                  You might enjoy Julia Evans’ zines! Not quite what you’re describing, but seems closer than most other reference material.

                                1. 13

                                  I wonder why the kernel community seems to have structural issues when it comes to filesystem - btrfs is a bit of a superfund site, ext4 is the best most people have, and ReiserFS’s trajectory was cut short for uh, Reasons. Everything else people would want to use (i.e. ZFS, but also XFS, JFS, AdvFS, etc.) are hand-me-downs from commercial Unix vendors.

                                  1. 13

                                    On all of the servers I deploy, I use whatever the OS defaults to for a root filesystem (generally ext4) but if I need a data partition, I reach for XFS and have yet to be disappointed with it.

                                      Ext4 is pretty darned stable now and no longer has some of the limitations that pushed me to XFS for large volumes. But XFS is hard to beat. It’s not some cast-away at all, it’s extremely well designed, perhaps as well or better than the rest. It continues to evolve and is usually one of the first filesystems to support newer features like reflinks.

                                    I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions, my best guess as to why it hasn’t is some blend of “not-invented-here” and the fact that ext4 is good enough in 99% of cases.

                                    1. 3

                                      It would be great if the recent uplift of xfs also added data+metadata checksums. It would be perfect for a lot of situations where people want zfs/btrfs currently.

                                        It’s a great replacement for ext4, but not really for other situations.

                                      1. 1

                                        Yes, I would love to see some of ZFS’ data integrity features in XFS.

                                        I’d love to tinker with ZFS more but I work in an environment where buying a big expensive box of SAN is preferable to spending time building our own storage arrays.

                                        1. 1

                                          I’m not sure if it’s what’s you meant, but XFS now has support for checksums for at-rest protection against bitrot. https://www.kernel.org/doc/html/latest/filesystems/xfs-self-describing-metadata.html

                                          1. 2

                                            This only applies to the metadata though, not to the actual data stored. (Unless I missed some newer changes?)

                                            1. 1

                                              No, you’re right. I can’t find it but I know I read somewhere in the past six months that XFS was getting this. The problem is that XFS doesn’t do block device management which means at best it can detect bitrot but it can’t do anything about it on its own because (necessarily) the RAIDing would take place in another, independent layer.

                                        2. 3

                                          I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions

                                          It is the default in RHEL 8 for what it’s worth
                                          https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_getting-started-with-xfs-managing-file-systems

                                        3. 2

                                            Yep. I’ve been using XFS for 20 years now when I need a single-drive FS, and I use ZFS when I need a multi-drive FS. The ext4 and btrfs issues did not increase my confidence.

                                        1. 11

                                          STARTTLS always struck me as a terrible idea. TLS everywhere should be the goal. Great work.

                                          1. 6

                                            Perhaps this is partially the result of a new generation of security researchers gaining prominence, but progressive insight from the infosec industry has produced a lot of U-turns. STARTTLS was obviously the way forward, until it wasn’t, and now it’s always been a stupid idea. Never roll your own crypto, use reliable implementations like OpenSSL! Oh wait, it turns out OpenSSL is a train wreck, ha ha why did people ever use this crap?

                                            As someone who is not in the infosec community but needs to take their advice seriously, it makes me a bit more wary about these kinds of edicts.

                                            Getting rid of STARTTLS will be a multi-year project for some ISPs, first fixing all clients until they push implicit TLS (and handle the case when a server doesn’t offer implicit TLS yet), then moving all the email servers forward.

                                            Introducing STARTTLS had no big up-front costs …

                                            1. 9

                                              Regarding OpenSSL I think you got some bad messaging. The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot, and you should still use a well-tested implementation and not roll your own crypto and OpenSSL is not the worst choice.

                                              Regarding STARTTLS I think what we’re seeing here is that there was a time when crypto standards valued flexibility over everything else. We also see this in TLS itself where TLS 1.2 was like “we offer the insecure option and the secure option, you choose”, while TLS 1.3 was all about “we’re gonna remove the insecure options”. The idea that has gained a lot of traction is that complexity breeds insecurity and should be avoided, but that wasn’t a popular idea 20-30 years ago when many of these standards were written.

                                              1. 2

                                                The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot, and you should still use a well-tested implementation and not roll your own crypto and OpenSSL is not the worst choice.

                                                100%

                                                I prefer libsodium over OpenSSL where possible, but some organizations can only use NIST-approved algos.

                                            2. 3

                                              Agreed. It always felt like a band-aid as opposed to a well thought out option. Good stuff @hanno.

                                              1. 3

                                                Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted. At that time, 99% of websites only used HTTPS on pages that accepted credit card numbers. (It was considered not worth the administrative and computing burden to encrypt a whole site that was open to the public to view anyway.)

                                                STARTTLS was a clever hack to allow opportunistic encryption of mail over the wire. When it was introduced, getting the various implementations and deployments of SMTP servers (either open source or commercial) even to work together in an RFC-compliant manner was an uphill battle on its own. STARTTLS allowed mail administrators to encrypt the SMTP exchange where they could while (mostly) not breaking existing clients and servers, nor requiring the coordination of large ISPs and universities around the world to upgrade their systems and open new firewall ports.

                                                Some encryption was better than no encryption, and that’s still true today.

                                                  That being said, I run my own mail server and I only allow users to send outgoing mail on port 465 (TLS). But for mail coming in from the Internet, I still have to allow plaintext SMTP (and hence STARTTLS support) on port 25 or my users and I would miss a lot of messages. I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                                                1. 2

                                                  Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted.

                                                  I largely got involved with computer security/cryptography in the late 2000’s, when we suspected a lot of the things Snowden revealed to be true, so “encrypt every packet securely” was my guiding principle. I recognize that wasn’t always a goal for the early Internet, but I was too young to be heavily involved then.

                                                  Some encryption was better than no encryption, and that’s still true today.

                                                    Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                                                  I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                                                  Hear hear!

                                                  1. 2

                                                      Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                                                    That’s not quite true, it still provides an audit trail. The goal of STARTTLS, as I understand it, is to avoid trying to connect to a TLS port, potentially having to wait for some arbitrary timeout if a firewall somewhere is set to drop packets rather than reject connections, and then retry on the unencrypted path. Instead, you connect to the port that you know will be there and then try to do the encryption. At this point, a passive attacker can’t do anything, an active attacker can strip out the server’s notification that STARTTLS is available and leave the connection in plaintext mode. This kind of injection is tamper-evident. The sender (at least for mail servers doing relaying) will typically log whether a particular message was sent with or without STARTTLS. This logging lets you detect which messages were potentially leaked / tampered with at a later date. You can also often enforce policies that say things like ‘if STARTTLS has ever been supported by this server, refuse if it isn’t this time’.

                                                    Now that TLS support is pretty-much table stakes, it is probably worth revisiting this and defaulting to connecting on the TLS port. This is especially true now that most mail servers use some kind of asynchronous programming model so trying to connect on port 465 and waiting for a timeout doesn’t tie up too many resources. It’s not clear what the failure mode should do though. If an attacker can tamper with port 25 traffic, they can also trivially drop everything destined for port 465, so trying 465 and retrying on 25 if that fails is no better than STARTTLS (actually worse - rewriting packets is harder than dropping packets, one can be done by inspecting the header the other requires deep-packet inspection). Is there a DNS record that can tell connecting mail servers to not try port 25? Just turning off port 25 doesn’t help because an attacker doing DPI can intercept packets for port 25 and forward them over a TLS connection that it establishes to 465.

                                              1. 3

                                                Basically this is a guide to setting up Mailu: https://mailu.io/1.7/

                                                Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker. But I’ll definitely give it some serious thought when it’s time to rearrange the deck chairs.

                                                I assume the author’s definition of self-hosted is “running everything off a raspberry pi hanging off residential cable internet” because any reasonable VPS provider is going to have clean IP blocks and will help you if their IPs are blacklisted anywhere. Thus negating the need to rely on a third-party outgoing relay.

                                                A good chunk of the article is spent configuring a backup SMTP server and syncing IMAP… I feel like the author didn’t know that SMTP is already a store-and-forward system… Yes, you can have multiple incoming SMTP servers for high-availability but unless you’re a big company or an ISP you probably don’t need them. If your mail server is down, any RFC-compatible system will queue and retry delivery, usually for up to 24 hours.

                                                Also fetching mail from the backup server via IMAP seems bonkers to me… again, SMTP is store-and-forward, the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.
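
                                                  For reference, multiple incoming servers are just priority-ordered MX records, something like this (hostnames are placeholders):

                                                  example.com.  IN  MX  10  mail.example.com.    ; primary, tried first
                                                  example.com.  IN  MX  20  backup.example.com.  ; backup, used when the primary is unreachable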

                                                1. 5

                                                  Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker.

                                                  What overhead? It’s just a glorified fork with chroot?

                                                  1. 4

                                                    I feel like the author didn’t know that SMTP is already a store-and-forward system

                                                    I did in fact know this 🙂 But it’s not enough for my scenario. To expand, my personal self-hosted setup runs off of a machine in my house. It’s possible that if I were outside of the country and something catastrophic happened, it could be offline for an indeterminate amount of time. Possibly weeks. MTAs will stop retrying after some time and bounce the mail. So for my scenario, having a reliable backup MX is crucial.

                                                    I had an interesting discussion on Reddit about exactly this, with some anecdotes about how common MTAs handle downed MX servers: https://reddit.com/r/selfhosted/comments/ogdheh/setting_up_reliable_deliverable_selfhosted_email/h4itjr5

                                                    the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.

                                                      An SMTP relay with an infinite retention time would be a good way to achieve this as well. Though, with Google Workspaces already set up on all my domains, I didn’t want to spend additional effort reconfiguring all my MX records to point to a separate SMTP server, let alone paying for/setting up such a service. So this bonkers IMAP setup was the way for me!

                                                    1. 1

                                                      Historically, spammers would target lower priority MX because they would often not have the anti spam measures configured. It looks like in your scenario you won’t get your own anti spam measures applied, but you will get Google’s, whether you want it or not.

                                                    2. 2

                                                      It only has 1G of memory so I can’t really afford the overhead of docker.

                                                      I’ve forgotten how much memory my VPS has but it’s not likely more than 2G and I think we are running a couple of Docker containers. Is the daemon really RAM hungry?

                                                      1. 3

                                                        I checked on one of my VPSs and it uses 50MB. I don’t think that that is too bad. Could be less, sure, but not the end of the world.

                                                      2. 1

                                                        It only has 1G of memory so I can’t really afford the overhead of docker.

                                                        I’ve run multiple docker containers on a host with 1G of memory. Other than the daemon itself, each container is just a wrapper for cgroups.

                                                      1. 1

                                                        I would do naughty things for a tool that can also show the age of dirs/files in addition to just the size. Say I find a database dump taking up 32 GB of disk on a shared server. If it’s 5 days old, I’d probably leave it alone. If it’s 5 years old, 99% chance I can delete it without hesitation. But I currently have to jump to another terminal window just to find that out. It would make my day-to-day job a ton easier if I could just d it right then and there.

                                                        1. 3

                                                          version 2.16 has the -e flag to have it read mtimes during scanning - you have to press ‘m’ once it loads to actually display / sort by them.
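
                                                          So roughly (the path is just an example):

                                                          ncdu -e /shared/dir   # then press 'm' in the UI to show/sort by mtime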

                                                        1. 2

                                                          I wrote something quite similar to this around a year ago. I wanted a personal wiki for my notes. I had been using Dokuwiki for at least a decade or so, but wanted something smaller, simpler, and with first-class Markdown support.

                                                          I had started out wanting the pages to just be files on the filesystem for rock-solid future-proofing but between page history, indexing, full-text search, and all the corner cases involved I eventually discovered that I would basically be re-implementing Dokuwiki, which clashed with my goal of smaller and simpler.

                                                          It turned out just INSERTing the documents into an sqlite3 database and indexing them with the built-in FTS5 feature was pretty trivial. The only other major thing I had to bolt on was a Markdown parser modified to understand [[wiki links]].
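
                                                          Roughly, the storage side boils down to something like this (database, table, and column names are just illustrative):

                                                          # create an FTS5 virtual table and add a page
                                                          sqlite3 wiki.db "CREATE VIRTUAL TABLE pages USING fts5(title, body);"
                                                          sqlite3 wiki.db "INSERT INTO pages (title, body) VALUES ('HomePage', 'See also [[another page]]');"
                                                          # full-text search, best matches first
                                                          sqlite3 wiki.db "SELECT title FROM pages WHERE pages MATCH 'page' ORDER BY rank;"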

                                                          I don’t have the code posted anywhere because it’s nowhere near respectable enough for public consumption.

                                                          1. 2

                                                            I’ve patched the markdown-it.py parser to understand intra links for the web GUI I wrote for bibliothecula, in case you want to check it out for inspiration.

                                                          1. 38

                                                            “The Gang Builds a Mainframe”

                                                            1. 28

                                                              Ha ha! I don’t think the mainframe is really a good analogue for what we’re doing (commodity silicon, all open source SW and open source FW, etc.) – but that nonetheless is really very funny.

                                                              1. 7

                                                                It makes you wonder what makes a mainframe a mainframe. Is it architecture? Reliability? Single-image scale-up?

                                                                1. 26

                                                                  I had always assumed it was the extreme litigiousness of the manufacturer!

                                                                  1. 3

                                                                    Channel-based IO with highly programmable controllers and an inability to understand that some lines have more than 80 characters.

                                                                    1. 1

                                                                      I think the overwhelming focus of modern z/OS on “everyone just has a recursive hierarchy of VMs” would also be a really central concept, as would the ability to cleanly enforce/support that in hardware. (I know you can technically do that on modern ARM and amd64 CPUs, but the virtualization architecture isn’t quite set up the same way, IMVHO.)

                                                                      1. 2

                                                                        I remember reading a story from back in the days when “Virtual Machine” specifically meant IBM VM. They wanted to see how deeply they could nest things, and so the system operator recursively IPL’d more and more machines and watched as the command prompt changed as it got deeper (the character used for the command prompt would indicate how deeply nested you were).

                                                                        Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                                                        1. 2

                                                                          Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                                                          This sounds like the plot of a sci-fi short story.

                                                                          1. 3

                                                                            …and overhead, without any fuss, the stars were going out.

                                                                    2. 1

                                                                      I’d go with reliability + scale-up. I’ve heard there’s support for like, fully redundant CPUs and RAM. That is very unique compared to our commodity/cloud world.

                                                                      1. 1

                                                                        If you’re interested in that sort of thing, you might like to read up on HP’s (née Tandem’s) NonStop line. Basically at least two of everything.

                                                                      2. 1

                                                                        Architecture. I’ve never actually touched a mainframe computer, so grain of salt here, but I once heard the difference described this way:

                                                                        Nearly all modern computers from the $5 Raspberry Pi Zero on up to the beefiest x86 and ARM enterprise-grade servers you can buy today are classified as microcomputers. A microcomputer is built around one or more CPUs manufactured as an integrated circuit. This CPU has a static bus that connects the CPU to all other components.

                                                                        A mainframe, however, is built around the bus. This allows not only for the hardware itself to be somewhat configurable per-job (pick your number of CPUs, amount of RAM, etc), but mainframes were built to handle batch data processing jobs and have always handily beat mini- and microcomputers in terms of raw I/O speed and storage capability. A whole lot of the things we take for granted today were born on the mainframe: virtualization, timesharing, fully redundant hardware, and so on. The bus-oriented design also means they have always scaled well.

                                                                  1. 1

                                                                    Yeah, but can it run Doom?

                                                                    1. 2

                                                                      Doom needed four floppies. Doom 2 took 5.

                                                                      1. 1

                                                                        Perhaps it could be loaded into memory from a DAT cassette.

                                                                        1. 1

                                                                          Most of the Doom data is game assets and levels though, I think? You might be able to squeeze the game engine and a small custom level in 400k.

                                                                          1. 1

                                                                            Yes it is. But the engine itself is about 700k. The smallest engine (earliest releases) was a little over 500k. You could probably build a smaller one with modern techniques and a focus on size though.

                                                                            1. 2

                                                                              Good news, ADoom for Amiga is only 428k! Bad news, Amigas only have double density FDDs so you only have 452k for the rest of the distro.

                                                                      1. 1

                                                                        I’ve been using Linux on the desktop on a daily basis for over 20 years and fully agree with most of the author’s other points. Ever since GNOME 3 was first released, it feels to me like the devs have been aiming to make GNOME a tablet-like experience on the desktop, which is not something I ever wanted. Improvements to the experience seem to be entirely experimental and without direction or cohesive vision. Pleas for options and settings to make GNOME behave similarly to conventional (read: tried and true) desktop idioms fall on deaf ears.

                                                                        I stuck with XFCE for a long time, but for the past couple of years tried seriously to make peace with GNOME. First it was Ubuntu’s take, which I found palatable with a handful of extensions. I then switched over to Pop OS but again had to plaster over the deficiencies with extensions. But having to wrangle extensions just to get a usable desktop just isn’t my idea of a good time.

                                                                        Earlier this week I decided to give KDE (or is it “Plasma” now?) another try and have so far been pretty happy with it. It seems like they have recently stripped back a lot of the fluff while being able to keep all of the customization that I require. I gave up on KDE in the past due to outright buggy behavior and crashes in common workflows but haven’t hit anything serious yet. Crossing my fingers that it stays that way for a while.

                                                                        1. 4

                                                                          A lot of the criticism I see leveled against Gnome centers around the way they often make changes that impact the UX but don’t allow long-time users to opt-out of these new changes.

                                                                          The biggest example for me: There was a change to the nautilus file manager where it used to be you could press a key with a file explorer window open, and it would jump to the 1st file in the currently open folder whose name starts with that letter or number. They changed it so it opens up a search instead when you start typing. The “select 1st file” behaviour is (was??) standard behavior in Mac OS / Windows for many many years, so it seemed a bit odd to me that they would change it. It seemed crazy to me that they would change it without making it a configurable option, and it seemed downright Dark Triad of them that they would make that change, not let users choose, and then lock / delete all the issue threads where people complained about it.

                                                                          It got to the point where people who cared, myself included, started maintaining a fork of nautilus that had the old behavior patched in, and using that instead.

                                                                          What’s stopping people who hate the new & seemingly “sadistic” features of gnome from simply forking it? Most of the “annoying” changes, at least from a non-developer desktop user’s perspective, are relatively surface level & easy to patch.

                                                                          1. 3

                                                                            Wow, I thought I was the only one who thought that behavior was crazy. Since the early 90’s, my workflow for saving files was: “find the dir I want to save the file in,” then “type in the name.” In GNOME (or GTK?) the file dialog forces me to reverse that workflow, or punishes me with extra mouse clicks to focus the correct field.

                                                                            I have never wanted to use a search field when trying to save a file.

                                                                            1. 4

                                                                              Wow, that thread is cancer.

                                                                              1. 5

                                                                                To me it just looks like 99% of all Internet threads where two or more people hold slightly differing positions and are just reading past each other in an effort to be right. At least there’s a good amount of technical discussion in there, as flame wars go, this is pretty mild.

                                                                            1. 1

                                                                              I’m not very active in this space anymore but my impression is that most people moving on from the “make an LED blink” level of expertise end up on the Arduino VSCode extension (https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-arduino) or Platform.io (https://platformio.org/)

                                                                              1. 2

                                                                                This problem, and others, can be solved by using official distribution images instead of those provided by the Raspberry Pi project. I’m using the official Fedora 33 ARM64 (aarch64) image for example, works perfectly on my Raspberry Pi 3B+ and has the exact same packages (including kernel!) as the x86_64 version of Fedora.

                                                                                See https://fedoraproject.org/wiki/Architectures/ARM/Raspberry_Pi

                                                                                1. 3

                                                                                  Do the distro-originated images come with all the same raspberry-pi configuration, hardware drivers, and firmware gubbins as Raspbian? That’s the main reason I run Raspbian, aside from it having more or less guaranteed support when things break and I need to do some googlin’ to fix it.

                                                                                  1. 2

                                                                                    Generally speaking? No.

                                                                                    Raspbian is the only distro that provides truly first class support for the pi’s hardware.

                                                                                    Graphics support is becoming more widespread at least, and there are bits and bobs of work happening in various distros.

                                                                                    But from what I’ve seen most distros are optimizing for a good desktop experience on the pi.

                                                                                    1. 1

                                                                                      At least on Fedora you get a kernel very close to upstream Linux, also for the Pi, so no crazy stuff, and everything I use works out of the box (LAN, WiFi). That is the reason why the Raspberry Pi 4, for example, still doesn’t work in Fedora; it requires more stuff to be properly upstreamed: https://github.com/lategoodbye/rpi-zero/issues/43