1. 11

    STARTTLS always struck me as a terrible idea. TLS everywhere should be the goal. Great work.

    1. 6

      Perhaps this is partially the result of a new generation of security researchers gaining prominence, but progressive insight from the infosec industry has produced a lot of U-turns. STARTTLS was obviously the way forward, until it wasn’t, and now it’s always been a stupid idea. Never roll your own crypto, use reliable implementations like OpenSSL! Oh wait, it turns out OpenSSL is a train wreck, ha ha why did people ever use this crap?

      As someone who is not in the infosec community but needs to take their advice seriously, it makes me a bit more wary about these kinds of edicts.

      Getting rid of STARTTLS will be a multi-year project for some ISPs: first fixing all clients so that they prefer implicit TLS (and handle the case where a server doesn’t offer implicit TLS yet), then moving all the email servers forward.

      Introducing STARTTLS had no big up-front costs …

      1. 9

        Regarding OpenSSL I think you got some bad messaging. The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot; you should still use a well-tested implementation rather than rolling your own crypto, and OpenSSL is not the worst choice.

        Regarding STARTTLS I think what we’re seeing here is that there was a time when crypto standards valued flexibility over everything else. We also see this in TLS itself where TLS 1.2 was like “we offer the insecure option and the secure option, you choose”, while TLS 1.3 was all about “we’re gonna remove the insecure options”. The idea that has gained a lot of traction is that complexity breeds insecurity and should be avoided, but that wasn’t a popular idea 20-30 years ago when many of these standards were written.
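
        That shift is visible in the APIs today. For instance, a minimal sketch with Python’s ssl module (3.7+), which lets you refuse the insecure options outright instead of offering them for negotiation:

        ```python
        import ssl

        # Modern stance: don't offer insecure protocol versions at all.
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below
        ```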

        1. 2

          The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot; you should still use a well-tested implementation rather than rolling your own crypto, and OpenSSL is not the worst choice.

          100%

          I prefer libsodium over OpenSSL where possible, but some organizations can only use NIST-approved algos.

      2. 3

        Agreed. It always felt like a band-aid as opposed to a well-thought-out option. Good stuff @hanno.

        1. 3

          Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted. At that time, 99% of websites only used HTTPS on pages that accepted credit card numbers. (It was considered not worth the administrative and computing burden to encrypt a whole site that was open to the public to view anyway.)

          STARTTLS was a clever hack to allow opportunistic encryption of mail over the wire. When it was introduced, getting the various implementations and deployments of SMTP servers (either open source or commercial) even to work together in an RFC-compliant manner was an uphill battle on its own. STARTTLS allowed mail administrators to encrypt the SMTP exchange where they could while (mostly) not breaking existing clients and servers, nor requiring the coordination of large ISPs and universities around the world to upgrade their systems and open new firewall ports.

          Some encryption was better than no encryption, and that’s still true today.

          That being said, I run my own mail server and I only allow users to send outgoing mail on port 465 (TLS). But for mail coming in from the Internet, I still have to allow plaintext SMTP (and hence STARTTLS support) on port 25, or my users and I would miss a lot of messages. I look forward to the day that I can shut off port 25 altogether, if it ever happens.

          1. 2

            Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted.

            I largely got involved with computer security/cryptography in the late 2000s, when we suspected a lot of the things Snowden revealed to be true, so “encrypt every packet securely” was my guiding principle. I recognize that wasn’t always a goal for the early Internet, but I was too young to be heavily involved then.

            Some encryption was better than no encryption, and that’s still true today.

            Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

            I look forward to the day that I can shut off port 25 altogether, if it ever happens.

            Hear, hear!

            1. 2

              Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

              That’s not quite true; it still provides an audit trail. The goal of STARTTLS, as I understand it, is to avoid trying to connect to a TLS port, potentially having to wait for some arbitrary timeout if a firewall somewhere is set to drop packets rather than reject connections, and then retrying on the unencrypted path. Instead, you connect to the port that you know will be there and then try to do the encryption. At this point, a passive attacker can’t do anything, but an active attacker can strip out the server’s notification that STARTTLS is available and leave the connection in plaintext mode. This kind of injection is tamper-evident. The sender (at least for mail servers doing relaying) will typically log whether a particular message was sent with or without STARTTLS. This logging lets you detect which messages were potentially leaked / tampered with at a later date. You can also often enforce policies that say things like ‘if STARTTLS has ever been supported by this server, refuse if it isn’t this time’.

              Now that TLS support is pretty much table stakes, it is probably worth revisiting this and defaulting to connecting on the TLS port. This is especially true now that most mail servers use some kind of asynchronous programming model, so trying to connect on port 465 and waiting for a timeout doesn’t tie up too many resources. It’s not clear what the failure mode should be, though. If an attacker can tamper with port 25 traffic, they can also trivially drop everything destined for port 465, so trying 465 and retrying on 25 if that fails is no better than STARTTLS (actually worse: rewriting packets is harder than dropping them; one can be done by inspecting the header, the other requires deep-packet inspection). Is there a DNS record that can tell connecting mail servers to not try port 25? Just turning off port 25 doesn’t help, because an attacker doing DPI can intercept packets for port 25 and forward them over a TLS connection that it establishes to 465.
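
              To make the trade-off concrete, here’s a rough Python sketch of “prefer implicit TLS on 465, fall back to 25 plus STARTTLS, and refuse a downgrade when the peer has offered TLS before”. The seen_tls_before flag stands in for an imagined per-host policy store, and real MTAs implement all of this far more carefully:

              ```python
              import smtplib
              import ssl

              def connect(host: str, seen_tls_before: bool):
                  """Sketch only: prefer implicit TLS, fall back to STARTTLS,
                  and treat a missing STARTTLS offer as a possible downgrade."""
                  ctx = ssl.create_default_context()
                  try:
                      # Implicit TLS: encrypted before any SMTP command is sent.
                      return smtplib.SMTP_SSL(host, 465, timeout=10, context=ctx)
                  except OSError:
                      pass  # port filtered or closed; fall back to plaintext + STARTTLS

                  server = smtplib.SMTP(host, 25, timeout=10)
                  server.ehlo()
                  if server.has_extn("starttls"):
                      server.starttls(context=ctx)
                      server.ehlo()  # re-EHLO: pre-TLS capabilities are untrusted
                      return server
                  # No STARTTLS offered: possibly stripped by an active attacker.
                  if seen_tls_before:
                      server.quit()
                      raise RuntimeError(f"{host}: possible TLS downgrade, refusing")
                  return server  # deliver anyway, but log the plaintext delivery
              ```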

        1. 3

          Basically this is a guide to setting up Mailu: https://mailu.io/1.7/

          Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker. But I’ll definitely give it some serious thought when it’s time to rearrange the deck chairs.

          I assume the author’s definition of self-hosted is “running everything off a raspberry pi hanging off residential cable internet” because any reasonable VPS provider is going to have clean IP blocks and will help you if their IPs are blacklisted anywhere. Thus negating the need to rely on a third-party outgoing relay.

          A good chunk of the article is spent configuring a backup SMTP server and syncing IMAP… I feel like the author didn’t know that SMTP is already a store-and-forward system… Yes, you can have multiple incoming SMTP servers for high availability, but unless you’re a big company or an ISP you probably don’t need them. If your mail server is down, any RFC-compliant system will queue and retry delivery, usually for several days.

          Also, fetching mail from the backup server via IMAP seems bonkers to me… again, SMTP is store-and-forward; the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.
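
          (For anyone unfamiliar with the mechanics: a backup MX is just an additional MX record with a higher preference number, e.g. 10 mail.example.com as the primary and 20 backup.example.com as the fallback, hostnames being placeholders here; sending servers try them in preference order and fall back automatically.)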

          1. 4

            Mailu looks interesting but I’m running all of the same major pieces myself on a $5 VPS. (Plus some other services.) It only has 1G of memory so I can’t really afford the overhead of docker.

            What overhead? It’s just a glorified fork with chroot?

            1. 4

              I feel like the author didn’t know that SMTP is already a store-and-forward system

              I did in fact know this 🙂 But it’s not enough for my scenario. To expand, my personal self-hosted setup runs off of a machine in my house. It’s possible that if I were outside of the country and something catastrophic happened, it could be offline for an indeterminate amount of time. Possibly weeks. MTAs will stop retrying after some time and bounce the mail. So for my scenario, having a reliable backup MX is crucial.

              I had an interesting discussion on Reddit about exactly this, with some anecdotes about how common MTAs handle downed MX servers: https://reddit.com/r/selfhosted/comments/ogdheh/setting_up_reliable_deliverable_selfhosted_email/h4itjr5

              the backup server can simply be a relay that delivers to your main server and just holds it in a queue when the main server can’t be reached.

              An SMTP relay with an infinite retention time would be a good way to achieve this as well. Though, with Google Workspaces already set up on all my domains, I didn’t want to spend additional effort reconfiguring all my MX records to point to a separate SMTP server, let alone paying for/setting up such a service. So this bonkers IMAP setup was the way for me!

              1. 1

                Historically, spammers would target lower-priority MX records because they would often not have the anti-spam measures configured. It looks like in your scenario you won’t get your own anti-spam measures applied, but you will get Google’s, whether you want it or not.

              2. 2

                It only has 1G of memory so I can’t really afford the overhead of docker.

                I’ve forgotten how much memory my VPS has but it’s not likely more than 2G and I think we are running a couple of Docker containers. Is the daemon really RAM hungry?

                1. 3

                  I checked on one of my VPSs and it uses 50MB. I don’t think that that is too bad. Could be less, sure, but not the end of the world.

                2. 1

                  It only has 1G of memory so I can’t really afford the overhead of docker.

                  I’ve run multiple docker containers on a host with 1G of memory. Other than the daemon itself, each container is just a wrapper for cgroups.

                1. 1

                  I would do naughty things for a tool that can also show the age of dirs/files in addition to just the size. Say I find a database dump taking up 32 GB of disk on a shared server. If it’s 5 days old, I’d probably leave it alone. If it’s 5 years old, there’s a 99% chance I can delete it without hesitation. But I currently have to jump to another terminal window just to find that out. It would make my day-to-day job a ton easier if I could just ‘d’ it right then and there.

                  1. 2

                    version 2.16 has the -e flag to have it read mtimes during scanning - you have to press ‘m’ once it loads to actually display / sort by them.

                  1. 2

                    I wrote something quite similar to this around a year ago. I wanted a personal wiki for my notes. I had been using Dokuwiki for at least a decade or so, but wanted something smaller, simpler, and with first-class Markdown support.

                    I had started out wanting the pages to just be files on the filesystem for rock-solid future-proofing but between page history, indexing, full-text search, and all the corner cases involved I eventually discovered that I would basically be re-implementing Dokuwiki, which clashed with my goal of smaller and simpler.

                    It turned out just INSERTing the documents into an sqlite3 database and indexing them with the built-in FTS5 feature was pretty trivial. The only other major thing I had to bolt on was a Markdown parser modified to understand [[wiki links]].

                    I don’t have the code posted anywhere because it’s nowhere near respectable enough for public consumption.
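
                    The gist, though, is just a handful of lines. A from-memory sketch, assuming Python and an FTS5-enabled SQLite build (the schema here is made up for illustration):

                    ```python
                    import sqlite3

                    conn = sqlite3.connect("wiki.db")
                    # One FTS5 virtual table holds the pages and the full-text index.
                    conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(title, body)")
                    conn.execute(
                        "INSERT INTO pages (title, body) VALUES (?, ?)",
                        ("Home", "Welcome. See also [[Projects]]."),
                    )
                    conn.commit()

                    # Ranked full-text search with highlighted snippets from the body column.
                    query = (
                        "SELECT title, snippet(pages, 1, '[', ']', '…', 8) "
                        "FROM pages WHERE pages MATCH ? ORDER BY rank"
                    )
                    for title, snip in conn.execute(query, ("welcome",)):
                        print(title, snip)
                    ```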

                    1. 2

                      I’ve patched the markdown-it.py parser to understand intra links for the web GUI I wrote for bibliothecula, in case you want to check it out for inspiration.

                    1. 38

                      “The Gang Builds a Mainframe”

                      1. 28

                        Ha ha! I don’t think the mainframe is really a good analogue for what we’re doing (commodity silicon, all open source SW and open source FW, etc.) – but that nonetheless is really very funny.

                        1. 7

                          It makes you wonder what makes a mainframe a mainframe. Is it architecture? Reliability? Single-image scale-up?

                          1. 26

                            I had always assumed it was the extreme litigiousness of the manufacturer!

                            1. 3

                              Channel-based IO with highly programmable controllers and an inability to understand that some lines have more than 80 characters.

                              1. 1

                                I think the overwhelming focus of modern z/OS on “everyone just has a recursive hierarchy of VMs” would also be a really central concept, as would the ability to cleanly enforce/support that in hardware. (I know you can technically do that on modern ARM and amd64 CPUs, but the virtualization architecture isn’t quite set up the same way, IMVHO.)

                                1. 2

                                  I remember reading a story from back in the days when “Virtual Machine” specifically meant IBM VM. They wanted to see how deeply they could nest things, and so the system operator recursively IPL’d more and more machines and watched as the command prompt changed as it got deeper (the character used for the command prompt would indicate how deeply nested you were).

                                  Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                  1. 2

                                    Then as they shut down the nested VMs, they accidentally shut down one machine too many…

                                    This sounds like the plot of a sci-fi short story.

                                    1. 3

                                      …and overhead, without any fuss, the stars were going out.

                              2. 1

                                  I’d go with reliability + scale-up. I’ve heard there’s support for like, fully redundant CPUs and RAM. That’s unique compared to our commodity/cloud world.

                                1. 1

                                  If you’re interested in that sort of thing, you might like to read up on HP’s (née Tandem’s) NonStop line. Basically at least two of everything.

                                2. 1

                                  Architecture. I’ve never actually touched a mainframe computer, so grain of salt here, but I once heard the difference described this way:

                                    Nearly all modern computers, from the $5 Raspberry Pi Zero on up to the beefiest x86 and ARM enterprise-grade servers you can buy today, are classified as microcomputers. A microcomputer is built around one or more CPUs manufactured as integrated circuits, with a fixed bus connecting the CPU to all other components.

                                    A mainframe, however, is built around the bus. This allows not only for the hardware itself to be somewhat configurable per-job (pick your number of CPUs, amount of RAM, etc), but mainframes were built to handle batch data-processing jobs and have always handily beaten mini- and microcomputers in terms of raw I/O speed and storage capability. A whole lot of the things we take for granted today were born on the mainframe: virtualization, timesharing, fully redundant hardware, and so on. The bus-oriented design also means they have always scaled well.

                            1. 1

                              Yeah, but can it run Doom?

                              1. 2

                                Doom needed four floppies; Doom 2 took five.

                                1. 1

                                  Perhaps it could be loaded into memory from a DAT cassette.

                                  1. 1

                                    Most of the Doom data is game assets and levels though, I think? You might be able to squeeze the game engine and a small custom level into 400k.

                                    1. 1

                                      Yes, it is. But the engine itself is about 700k. The smallest engine (earliest releases) was a little over 500k. You could probably build a smaller one with modern techniques and a focus on size though.

                                      1. 2

                                        Good news, ADoom for Amiga is only 428k! Bad news, Amigas only have double density FDDs so you only have 452k for the rest of the distro.

                                1. 1

                                  I’ve been using Linux on the desktop on a daily basis for over 20 years and fully agree with most of the author’s other points. Ever since GNOME 3 was first released, it feels to me like the devs have been aiming to make GNOME a tablet-like experience on the desktop, which is not something I ever wanted. Improvements to the experience seem to be entirely experimental and without direction or cohesive vision. Pleas for options and settings to make GNOME behave similarly to conventional (read: tried and true) desktop idioms fall on deaf ears.

                                    I stuck with XFCE for a long time, but for the past couple of years tried seriously to make peace with GNOME. First it was Ubuntu’s take, which I found palatable with a handful of extensions. I then switched over to Pop OS but again had to plaster over the deficiencies with extensions. But having to wrangle extensions just to get a usable desktop isn’t my idea of a good time.

                                  Earlier this week I decided to give KDE (or is it “Plasma” now?) another try and have so far been pretty happy with it. It seems like they have recently stripped back a lot of the fluff while being able to keep all of the customization that I require. I gave up on KDE in the past due to outright buggy behavior and crashes in common workflows but haven’t hit anything serious yet. Crossing my fingers that it stays that way for a while.

                                  1. 4

                                      A lot of the criticism I see leveled against Gnome centers around the way they often make changes that impact the UX but don’t allow long-time users to opt out of these new changes.

                                      The biggest example for me: there was a change to the nautilus file manager where it used to be that you could press a key with a file-explorer window open, and it would jump to the first file in the currently open folder whose name starts with that letter or number. They changed it so that typing opens up a search instead. The “select first file” behaviour is (was??) standard behavior in Mac OS / Windows for many, many years, so it seemed a bit odd to me that they would change it. It seemed crazy that they would change it without making it a configurable option, and it seemed downright Dark Triad of them to make that change, not let users choose, and then lock / delete all the issue threads where people complained about it.

                                    It got to the point where people who cared, myself included, started maintaining a fork of nautilus that had the old behavior patched in, and using that instead.

                                    What’s stopping people who hate the new & seemingly “sadistic” features of gnome from simply forking it? Most of the “annoying” changes, at least from a non-developer desktop user’s perspective, are relatively surface level & easy to patch.

                                    1. 3

                                        Wow, I thought I was the only one who thought that behavior was crazy. Since the early ’90s, my workflow for saving files was: “find the dir I want to save the file in,” then “type in the name.” In GNOME (or GTK?) the file dialog forces me to reverse that workflow, or punishes me with extra mouse clicks to focus the correct field.

                                      I have never wanted to use a search field when trying to save a file.

                                      1. 4

                                        Wow, that thread is cancer.

                                        1. 5

                                            To me it just looks like 99% of all Internet threads where two or more people hold slightly differing positions and are just reading past each other in an effort to be right. At least there’s a good amount of technical discussion in there; as flame wars go, this is pretty mild.

                                      1. 1

                                        I’m not very active in this space anymore but my impression is that most people moving on from the “make an LED blink” level of expertise end up on the Arduino VSCode extension (https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-arduino) or Platform.io (https://platformio.org/)

                                        1. 2

                                          This problem, and others, can be solved by using official distribution images instead of those provided by the Raspberry Pi project. I’m using the official Fedora 33 ARM64 (aarch64) image for example, works perfectly on my Raspberry Pi 3B+ and has the exact same packages (including kernel!) as the x86_64 version of Fedora.

                                          See https://fedoraproject.org/wiki/Architectures/ARM/Raspberry_Pi

                                          1. 3

                                            Do the distro-originated images come with all the same raspberry-pi configuration, hardware drivers, and firmware gubbins as Raspbian? That’s the main reason I run Raspbian, aside from it having more or less guaranteed support when things break and I need to do some googlin’ to fix it.

                                            1. 2

                                              Generally speaking? No.

                                              Raspbian is the only distro that provides truly first class support for the pi’s hardware.

                                              Graphics support is becoming more widespread at least, and there are bits and bobs of work happening in various distros.

                                              But from what I’ve seen most distros are optimizing for a good desktop experience on the pi.

                                              1. 1

                                                    At least on Fedora you get a kernel very close to upstream Linux, also for the Pi, so no crazy stuff, and everything I use works out of the box (LAN, WiFi). That is the reason why the Raspberry Pi 4, for example, still doesn’t work in Fedora; it requires more stuff to be properly upstreamed: https://github.com/lategoodbye/rpi-zero/issues/43

                                            1. 1

                                              I hadn’t realized there was an 8GB Pi 4 – the announcement post notes the SoC could support up to 16GB, and the limit is how much they can get in a single package. An 8GB option for the keyboard-shaped Pi 400 (for a few more bucks, of course) would be interesting too.

                                              1. 2

                                                I’ve had an 8 GB version since spring/summer at least and it’s a wonderful cheap low-power Linux box for a variety of development duties and experiments. Main downside is that it absolutely needs a decent heat sink if you’re doing anything CPU-intensive or else the CPU speed gets throttled.

                                              1. 1

                                                    Unrelated to the news, but LWN content is nearly impossible to read on a phone… very annoying.

                                                1. 4

                                                      I think it’s mostly because it’s an email. Otherwise, that’s usually OK (not great).

                                                  I wonder if they would accept help to fix that.

                                                  1. 3

                                                    I think it’d be hard trying to intelligently format a 72-column fixed plain text email into something that isn’t a VT100. It’d probably be easier if it was rich text (or at least designed to reflow) in the first place.
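
                                                        For what it’s worth, the naive reflow is only a few lines; everything that would make it intelligent is the hard part. A Python sketch of the naive version:

                                                        ```python
                                                        import re

                                                        def reflow(fixed: str) -> str:
                                                            # Treat blank lines as paragraph breaks and unwrap everything
                                                            # else. This is precisely what mangles tables, code samples,
                                                            # diagrams, and quoted text in real 72-column messages.
                                                            paragraphs = re.split(r"\n\s*\n", fixed.strip())
                                                            return "\n\n".join(" ".join(p.split()) for p in paragraphs)
                                                        ```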

                                                  2. 2

                                                        I’m using wallabag to bookmark the content and read on my phone, usually much later. I also think that LWN works OK with Firefox’s reader view.

                                                    1. 1

                                                          Thanks for the suggestion. I will give it a try, although I’m using Firefox less frequently these days.

                                                    2. 1

                                                      Not sure which phone you have but mine is able to display the original article just fine in horizontal mode. Or in either orientation with Firefox reader view.

                                                      1. 1

                                                        You can always switch to the desktop version.

                                                      1. 15

                                                            My experience with Bash is: “avoid it at any cost”. Unless you are writing very OS-specific stuff, you should always avoid writing Bash.

                                                            Bash efficiency is a fallacy; it never turns out that way. Bash is sticky: it will stay with you until it transforms into a big black hole of tech debt. It should never be used in a real software project.

                                                            After years of Bash dependency we realized that it was the biggest point of pain for old and new developers on the team. Right now Bash is not allowed, and new patches introducing new lines of Bash need to delete more than they introduce.

                                                        Never use Bash, never learn to write Bash. Keep away from it.

                                                        1. 4

                                                          What do you use instead?

                                                          1. 8

                                                            Python. Let me elaborate a little bit more.

                                                                We are a Docker/Kubernetes shop; we started building containers with the usual docker build/tag/push, plus a test in between. We had 1 image, and one shell script did the trick.

                                                                We added a new image, and the previous one gained a parameter that lived in a JSON file and was extracted using jq (the first dependency added). Now we had a loop with 2 images being built, tested, and pushed.

                                                                We added 1 stage: “release”. Docker now had build/tag/push, test, tag/push (to release). And we added another image, the previous images gained more parameters, and something was curled from the public internet with the response piped into jq. A version docker build-arg was added to all of the images; this version was some sort of git describe.

                                                                2 years later, the image building and testing process was a disaster. It was impossible to maintain, all errors were caught only after the images were released, and the logic to build the ~10 different image types was spread across multiple shell scripts, CI environment definitions, and docker build-args. The images required a very strict order of operations to build: first run script build, then run script x, then tag something… etc.

                                                                Worst of all, we had this environment almost completely replicated so we could build images both locally (when building something on your own workstation) and remotely in the CI environment.

                                                                Right before the collapse, I requested 5 weeks from management to fix this monstrosity.

                                                            1. I captured all the logic required to build the images (mostly parameters needed)
                                                                2. I built a multi-stage process that would do different kinds of tasks with images (build, tag, push)
                                                            3. I added a Dockerfile template mechanism (based on jinja2 templates)
                                                            4. Wrote definitions (a pipeline) of the process or lifecycle of an image. This would allow us to say, “for image x, build it, push it into this repo” or “for this image, in this repo, copy it into this other repo”
                                                            5. I added multiple builder implementations: the base one is Docker, but you can also use Podman and I’m planning on adding Kaniko support soon.
                                                            6. I added parallelized builds using multi-processing primitives.

                                                            I did this in Python 3.7 in just a few weeks. The most difficult part was to migrate the old tightly coupled shell-scripting based solution to the new one. Once this migration was done we had:

                                                                1. The logic to build each image was defined in an inventory file (yaml; not great, but not awful)
                                                            2. If anything needs to be changed, it can be changed on a “description file”, not in shell scripts
                                                                3. The same process can be run locally and in the CI environment; everything can be tested
                                                                4. I added plenty of unit tests to the Python codebase. Monkeypatching is crucial for testing when you have things like docker build in the middle, although this can be avoided by running tests using the noop builder implementation.
                                                                5. Modularized the codebase: parts of the generic image process pipeline are confined to their own Python modules. Everything that’s application-dependent lives in our repo and uses the other modules we built. We expect those Python modules to be reused in future projects.
                                                                6. It is not intimidating to make changes; people are confident about the impact of their changes, meaning that they feel encouraged to make changes, improving productivity

                                                            Anyway, none of this could be achieved by using Bash, I’m pretty sure about it.
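
                                                                For a flavor of the result, here’s a heavily simplified sketch of the builder abstraction; the names are illustrative, not our actual code:

                                                                ```python
                                                                import subprocess
                                                                from dataclasses import dataclass, field

                                                                @dataclass
                                                                class ImageSpec:
                                                                    name: str
                                                                    context: str  # path to the build context
                                                                    build_args: dict = field(default_factory=dict)

                                                                class DockerBuilder:
                                                                    """Builds and pushes by shelling out to the docker CLI."""
                                                                    def build(self, spec: ImageSpec, tag: str) -> None:
                                                                        cmd = ["docker", "build", "-t", tag, spec.context]
                                                                        for key, value in spec.build_args.items():
                                                                            cmd += ["--build-arg", f"{key}={value}"]
                                                                        subprocess.run(cmd, check=True)

                                                                    def push(self, tag: str) -> None:
                                                                        subprocess.run(["docker", "push", tag], check=True)

                                                                class NoopBuilder:
                                                                    """Test double: records calls instead of invoking docker."""
                                                                    def __init__(self) -> None:
                                                                        self.calls = []
                                                                    def build(self, spec: ImageSpec, tag: str) -> None:
                                                                        self.calls.append(("build", spec.name, tag))
                                                                    def push(self, tag: str) -> None:
                                                                        self.calls.append(("push", tag))

                                                                def run_pipeline(specs, builder, registry: str) -> None:
                                                                    # One lifecycle definition drives every image, locally and
                                                                    # in CI alike; only the builder implementation differs.
                                                                    for spec in specs:
                                                                        tag = f"{registry}/{spec.name}:latest"
                                                                        builder.build(spec, tag)
                                                                        builder.push(tag)
                                                                ```

                                                                Swapping in the NoopBuilder is what makes the pipeline unit-testable without a Docker daemon in the loop.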

                                                            1. 13

                                                              It sounds to me like your image pipeline was garbage, not the tool used to build it.

                                                              I’ve been writing tools in bash for decades, and all of them still run just fine. Can’t say the same for all the python code, now that version 2 is officially eol.

                                                              1. 3

                                                                bash 3 broke a load of bash 2 scripts. This was long enough ago that it’s been largely forgotten.

                                                                1. 1

                                                                  I agree with you, the image pipeline was garbage, and that was our responsibility of course. We can write the same garbage in Python no doubt.

                                                                  Bash, however, definitely does not encourage proper software engineering, and it makes software impossible to maintain.

                                                            2. 1

                                                              I can confirm this. Roughly 2 years ago I had to replace a whole build system made in bash with CMake, and bash still contaminates, with zero tests, many places it should not be involved in.

                                                            1. 25

                                                              Negatively: Drinking when I got stressed. Now I drink all the time, to the point where it’s an unreasonable portion of my outgoing expenditure and I’ll usually pour myself something to take the edge off before standup. If I could offer any advice to anyone reading; please only drink alcohol during fun, social occasions.

                                                              1. 10

                                                                When I was a cable guy, the only outlet I had was drinking. 4 out of 5 mornings I had a hangover, was still buzzed, or even drunk. My (horrible, universally hated) boss reprimanded me for it multiple times a month. The only thing that stopped me was quitting that job in June.

                                                                 With some help (a week in the hospital and a lung injury), I’ve also quit smoking cigarettes and avoid nicotine. I now have a very nice and infinitely more affordable green tea habit.

                                                                 I still drink, avoid keeping liquor around, and have ceased my habit of staying drunk or getting shitfaced regularly. Stress kills, folks.

                                                                1. 5

                                                                  Thanks for sharing. I think avoiding keeping liquor around is a good point I hadn’t really considered, by now it’s part of the furniture. Maybe I’ll give my liquor shelf to my parents.

                                                                2. 11

                                                                  A relative taught me these rules when I was a kid:

                                                                  • Never drink on an empty stomach.
                                                                  • Never drink alone.
                                                                  • Drink for the taste, not for the effect.

                                                                  Works for me.

                                                                  1. 6

                                                                    I’ve heard these rules a couple of times, and, to me, they always sound patronizing. It feels on par with telling an addict to “just stop”. How can the advice work when you want to drink on an empty stomach, alone, and for the effect, and it’s out of your control?

                                                                    1. 16

                                                                       These aren’t guidelines for an alcoholic; they’re guidelines to prevent one from becoming an alcoholic.

                                                                      1. 9

                                                                        Sorry, I realized my first comment was a little intense.

                                                                         I understand this. I just don’t think they’re very good guidelines – they’re more of a description of “patterns of people who aren’t alcoholics”. I think what makes someone an alcoholic is a very complex, and often genetic, thing. For some, these rules are essentially impossible to follow from the get-go. Additionally, someone can choose to break all these rules all the time and still not become an alcoholic.

                                                                        1. 2

                                                                          I get your point, but if it’s genetic, then a list of rules won’t make a difference one way or the other.

                                                                1. 1

                                                                  After the click-baity title I expected something slightly more interesting and more numerous. It was about a single meaning, not a myriad of meanings:

                                                                   A command named pwd (which also has a shell builtin of the same name, for performance reasons).

                                                                  I expected to learn of new things under the same name, to get useful knowledge of possible pitfalls where having a preconception of what PWD would represent would cause me trouble.

                                                                  1. 4

                                                                     I agree. I don’t like to poo-poo other people’s work in general, but I’m not sure I can think of a reason it matters whether the p in pwd means “print” or “present”, or why one would want to lobby for one or the other since they are, at a very practical level, exactly the same thing.

                                                                     If I was going to lobby for one or the other, it makes more sense to default to “print” since early Unix terminals were literally teletype machines. As a result, both C and Unix have long used the verb “print” to mean display, present, echo, show (or any other synonym you care to think of) data to the user or stdout/stderr.

                                                                    But I guess the thing that disappointed me the most was the author’s attempt to discredit the Wikipedia source by quoting the man page. The article says “there are actually zero references to pwd being short for ‘print working directory’” and yet right there in the screenshot the man page literally says, “prints the pathname of the working (current) directory”. Yes, you have to remove an “of” and a couple of “the’s” but it says so right there!

                                                                    1. 2

                                                                      (Author here) - Oh dear, that’s not good, sorry to have disappointed you. But while you aren’t sure you can think of a reason it matters enough to write a little post about it … I could think of one - my growing fascination with Unix history and how things grow, merge, and fork. And so I wrote it. Moreover, I wasn’t lobbying for one or the other, as you can see from the content of the post.

                                                                      Regarding your last point, I guess it’s down to how literally one interprets the written word. For me, if one is “looking for” evidence that it means “print working directory”, one can find it indirectly in the man page. But I was looking for something concrete and explicit (hence the quotes), and it wasn’t there.

                                                                      Anyway, I still think it’s an interesting topic, but I know that not everyone will agree, and that’s more than OK. Thanks!

                                                                    2. 2

                                                                      (Author here) - I’m sorry you considered the title “click-baity”, that wasn’t my intention at all. Perhaps the use of “myriad” was a little extreme, but I thought having at least 4 potential meanings for such a “lowly” command as pwd warranted at least some adjective, and I decided to allow myself some breadth in expression.

                                                                    1. 2

                                                                      I really want to run Fedora but 25 years of dpkg & apt are hard to get over. Maybe I’ll try it again when I next get a new laptop. But I’m just so comfortable on Debian…

                                                                      1. 2

                                                                        I just switched this year, after a few years of debian. It’s probably not the same experience, but for basic things, dnf is practically equivalent to apt. My personal intuition is that it might not be worth it, unless you’re also interested in GNOME (strictly speaking, the Fedora spins aren’t real Fedora releases, and usually aren’t as polished).

                                                                        1. 3

                                                                          the Fedora spins aren’t real Fedora releases, and usually aren’t as polished

                                                                          I concur with this sentiment. I’m pretty steeped in the Red Hat universe for some time, so I really like Fedora on the systems I have to touch most. As an experiment, I tried the KDE spin for a year. It was OK, but had lots of paper cuts that the standard workstation edition just doesn’t have. They’re generally very minor, like needing to use the command line for firmware updates instead of getting alerted to them by the system tooling. Since I was mostly in KDE for kwin-tiling and a few other things that are much less integrated than that, I switched back to the standard workstation edition once Pop Shell shipped and got easy to integrate with the standard Fedora GNOME installation.

                                                                          1. 3

                                                                            My personal intuition is that it might not be worth it, unless you’re also interested in GNOME

                                                                            To me, the most interesting subproject of Fedora, even though it may not be ready for wide use yet, is Silverblue. Having an immutable base system with atomic upgrades/rollbacks is really nice. This really sets it apart from other Linux distributions (outside NixOS/Guix). Sure, Ubuntu is trying to offer something similar by doing ZFS snapshots on APT operations, but that looks like another hack piled upon other hacks, rather than a proper re-engineering.

                                                                            1. 2

                                                                               Then again, I haven’t heard good things about trying to use Silverblue with XFCE or other WMs.

                                                                            2. 2

                                                                              I like Debian and I’ll run it on servers but for desktop use, I want things to Just Work out of the box. My experience with Debian on the desktop is that you have to know all the packages you need in order to get the same out-of-the-box experience as Ubuntu or Fedora. At least, that’s what it was like when I tried Debian with XFCE.

                                                                              You might also be interested in PopOS and Linux Mint, both of which are based on Ubuntu but strip out most of the annoyances like snapd.

                                                                            3. 1

                                                                            I’d like to add that the rpm command is similar to dpkg.

                                                                            For example, the equivalent of dpkg -l is rpm -qa.
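
                                                                            A few more common equivalents, for anyone translating muscle memory:

                                                                            • dpkg -L pkg ↔ rpm -ql pkg (list the files a package installed)
                                                                            • dpkg -S /path/to/file ↔ rpm -qf /path/to/file (find which package owns a file)
                                                                            • dpkg -s pkg ↔ rpm -qi pkg (show package details)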

                                                                              1. 1

                                                                                A couple years ago I went through a distro jumping phase. Fedora worked fine but I didn’t find any particular advantages of running it over - say - running Ubuntu. The one thing setting it apart from other distros was Wayland as default.

                                                                                I ended up on Manjaro, and it’s been a breath of fresh air: most software is a click away (thanks AUR!), things just work out of the box and in general their configuration of Plasma and Gnome feel snappier than Fedora and Ubuntu.

                                                                                1. 2

                                                                                  The one thing setting it apart from other distros was Wayland as default.

                                                                                    The one thing setting Fedora apart from other distros is often getting bleeding edge stuff as default. Most of the time it works out super.

                                                                                  1. 2

                                                                                    You are not wrong. What I meant was on the ‘experience’ front. Most of the time - if I’m lucky and the hardware obliges - I don’t bother remembering what version of the kernel, Mesa, etc. I am using, so being on the bleeding edge doesn’t introduce a lot of advantages.

                                                                                    BTW, the last time I tried Fedora was when Intel introduced their Iris driver and I wanted to see if it’d improve the sluggish performance I was experiencing on Gnome.

                                                                              1. 2

                                                                                 It occurs to me that for many of us infoslaves (I’m mostly kidding with that term), with the new normal of possibly being remote permanently, a desktop rig makes more sense right now for price/performance. Unless you’re on a Mac.

                                                                                1. 1

                                                                                  My workplace only issues laptops to employees, the main reason being that most of us don’t want to be chained to our desks all day long. Being able to bring your laptop into a meeting is a huge advantage, and until recent events, lots of us would spend at least half the day working from random places in the building.

                                                                                  I have a laptop at home that is docked most of the time but when I want to take it downstairs and work on the couch just to be in the same room as my wife, then I’m very happy to have it.

                                                                                  Desktops have always been cheaper price/performance wise. But they also take up more space, consume more power, and are generally louder. (This doesn’t hold for small form-factor boxes, but those tend to be priced similarly to laptops.)

                                                                                  unless you’re on a mac.

                                                                                  In which case we assume price was never much of a factor ;)

                                                                                  1. 1

                                                                                     Yep - that’s my experience. My desktop has more RAM and less thermal throttling than a similarly priced laptop. When I don’t need portability (and that has been the case for some months and will be so for some more), it’s a huge win.

                                                                                  1. 4

                                                                                    This seems to be desktop vs laptop? I’ve never understood why anyone buys a laptop to put on a desk, plug into the wall, and leave there 98% of the time. It’s just a more expensive and less flexible desktop at that point.

                                                                                    1. 3

                                                                                      I’ve had that setup a few times. Usually for minimalism purposes - if I don’t need the computational power of a desktop, a single laptop (+ charger) makes for less clutter. And it’s still good that 2% of the time you want to take it somewhere, you can do so easily (without owning a desktop + laptop).

                                                                                      1. 2

                                                                                        I don’t think many people with a laptop have it plugged in or docked 98% of the time. For those that do, maybe it’s extremely valuable to have a laptop for that other 2% of the time.

                                                                                        My laptop is docked at my desk all day but when I go on a trip, I just grab the thing, I don’t have to worry about whether my work is copied over to it or pushed up into the cloud. I’m not a gamer or a bitcoin miner so I don’t need a ridiculous CPU or GPU, or water cooling, or colored case lights come to think of it.

                                                                                        My last “desktop” computer currently sits unplugged under my desk. I haven’t gotten rid of it because it makes an excellent foot rest.

                                                                                        1. 1

                                                                                          I took my laptop home from the office when on call. Barring that work requirement, I’d happily live without a laptop these days.

                                                                                        1. 6

                                                                                           Ergonomics is another reason to use desktop computers: if you actually care about looking at a monitor at the correct height and typing on an input device that won’t kill your wrists, desktops make a lot more sense. The laptops I use at work are just really crappy portable desktops, at least the way I use them.

                                                                                          1. 4

                                                                                             Yeah, due to my history w/ RSI, using a laptop for any extended duration (> 2 hours or so) is really not viable. When you give up the goal of “mobile computing”, having a laptop really stops making sense. I have one that I bring with me on work trips and whatnot (granted, those won’t be happening for a while). My desktop was cheap to build, is incredibly powerful (which is great when working in compiled environments), and is upgradeable at actual consumer prices. As you mentioned, I also invested in building a desktop that is ergonomic and comfortable. The whole thing (desktop, peripherals, monitor, desk) was less than the price of a premium MacBook.

                                                                                            I think laptops are great and have an important place for a majority of users, but it’s worth raising that the alternatives are real and viable.

                                                                                            1. 2

                                                                                              Most laptops have a way to plug in an external monitor, keyboard, and mouse. Then your desktop computer and your portable computer are the same thing.

                                                                                               In fact, despite being a computer nerd, I decided years ago that I would probably not buy another desktop computer. They take up too much space, and they are loud, power-hungry space heaters that can’t be easily shoved into a backpack in one second. The only thing that would have kept me from moving in this direction is the expandability of the typical tower. But these days, practically all accessories are USB. And I’m not a gamer or bitcoin miner, so I don’t need a high-end CPU or GPU with liquid cooling.