Threads for ggpsv

    1. 15

      Herman Miller chairs are worth it. Mine is over 10 years old, and still good. Don’t waste your back’s health on some cheapo chair that will quickly fall apart anyway.

      External keyboard + mouse + monitor is a must. Laptops will destroy your neck and wrists.

      You’ll need a dock for this, and there’s a fun problem: docks using USB 3 will cause electromagnetic interference that may be super annoying if you use wireless mouse or keyboard: https://www.usb.org/sites/default/files/327216.pdf I’ve had to splurge on Thunderbolt docks to have a reliable setup.

      WFH now equals having video calls. Check that your audio is decent quality. You don’t need a pro microphone, but don’t use the laptop’s mic if it’s poor quality or too far from you. My room had echo/reverb from the windows and hard floor, which made my audio worse. I needed a carpet + acoustic padding to fix it.

      For video it’s important to have good lighting. There are lots of tutorials on how to set up the classic side + top + diffuse combination. I’ve used Nanoleaf panels, which let me turn them on and adjust the color automatically, but they’re kinda pricey and are still limited to 2.4 GHz Wi-Fi only, which makes them flaky.

      All webcams are bad. Even the ones that claim to be high-end still have relatively small sensors with tiny, usually plastic, lenses. I’ve got a DSLR + HDMI capture card ($15 ones on eBay are surprisingly good and low-latency) + mount to get professional-looking video. Many old DSLRs will work great; you just need to look up which ones have clean HDMI output, and sometimes buy a USB-to-battery adapter to run them off mains power.

      1. 4

        Alternatively, phones now have great cameras on the back, and the Camo app from Reincubate turns a phone into a highly adjustable webcam. I got a MagSafe-compatible phone holder that sticks onto the back of the monitor. I have an old iPhone that I leave attached to my USB dock, but you could also just plop a phone in the holder when needed. macOS got Continuity Camera after I set this up, which works too, but isn’t as adjustable.

        1. 2

          All webcams are bad. Even the ones that claim to be high-end still have relatively small sensors with tiny, usually plastic, lenses. I’ve got a DSLR + HDMI capture card ($15 ones on eBay are surprisingly good and low-latency) + mount to get professional-looking video. Many old DSLRs will work great; you just need to look up which ones have clean HDMI output, and sometimes buy a USB-to-battery adapter to run them off mains power.

          I take a lot of calls on my phone (with airpods in). The camera is already quite good but what makes it a nice experience is to have the phone on a small tripod. That gives a much better perspective and I don’t have to keep worrying about my phone falling over.

          I think that generalizes to webcams: it’s nice to have them on a boom or to be able to adjust the positioning in some other way.

          1. 3

            I use my phone and my video looks noticeably better than everyone else’s in meetings. If you have already bought into the apple ecosystem (mac + iphone) then “continuity camera” works great for running Zoom or whatever on your real computer while using the camera on your phone, in case you need to share windows during meetings.

            1. 1

              I use my phone and my video looks noticeably better than everyone else’s in meetings. If you have already bought into the apple ecosystem (mac + iphone) then “continuity camera” works great for running Zoom or whatever on your real computer while using the camera on your phone, in case you need to share windows during meetings.

              You can also use Camo if you have a Windows computer, to use a phone as your camera. Phone cameras are so much better than any webcam.

              1. 1

                I have this thing where my AirPods won’t connect to my work laptop, so I just accept it and dial into meetings via my phone.

                1. 3

                  There is no technology more frustrating on a daily basis for me than Bluetooth.

                  1. 1

                    The bluetooth is fine but I think it’d make me log on to my work laptop with my personal iCloud and I refuse to do that (even if IT would let me).

                    (This is seriously the best Bluetooth device I’ve ever used.)

            2. 2

              I’ll second the recommendation of a good chair. I’ve had a Herman Miller chair for seven years now; it’s still as good as new and has five more years of warranty.

              It is a night and day difference compared to low-tier office chairs. That said, I echo what others have said about incorporating walks, moving around, and doing strength training.

            3. 9

              Single host? Just use docker-compose. What makes you think it’s not a good idea? It’s extremely straightforward and works great.
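
              For a feel of how little there is to it, a minimal sketch (image name and port are made up):

              cat > docker-compose.yml <<'EOF'
              services:
                app:
                  image: ghcr.io/example/app:latest   # placeholder image
                  restart: unless-stopped
                  ports:
                    - "8000:8000"
              EOF
              docker compose up -d         # start (or restart changed) services in the background
              docker compose logs -f app   # tail the logs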

              1. 4

                It’s a fair question. Until other commenters pointed out that compose in production is in fact endorsed by Docker, I would have (wrongly) said that that was the reason.

                Beyond that, it’s some combination of “feeling scared” of the iptables problem (which I think is actually entirely solved by aggressive use of Docker networks) and just the face that people make when you say you’re using compose in production.

                1. 14

                  If you’re using ufw, there’s ufw-docker.

                  That said, the easiest way around Docker punching holes in the network is to not publish ports on all interfaces. Instead of doing -p 8000:8000 (which exposes the service on all network interfaces), bind the services explicitly to a single interface. For example, to expose it locally on the host, use -p 127.0.0.1:8000:8000.

                  You can then manage access using tunnels or proxies in a way that abides by your firewall rules. For example, if you’re trying to expose an HTTP service, you can open just 443 on your firewall, and have a web server like Caddy, Traefik, or Nginx proxy the request to the port on localhost or whichever interface you chose.
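
                  Roughly, with made-up names and ports:

                  # publish only on the loopback interface
                  docker run -d --name app -p 127.0.0.1:8000:8000 ghcr.io/example/app:latest

                  # open just 443 on the firewall and let the proxy do the routing;
                  # Caddy's one-liner mode also takes care of TLS
                  caddy reverse-proxy --from example.com --to 127.0.0.1:8000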

                  1. 2

                    That may be my favourite thing about using podman. I used docker with ufw-docker before, but for podman this isn’t necessary.

                    I do still have an Nginx reverse proxy running locally. I use it to connect directly to my home server from other devices when I’m on the home network, instead of going via Nginx on a VPS. This way I can easily use the same URL whether I’m at home or away, without adding latency or wasting VPS bandwidth when I’m at home.

                  2. 1

                    Interesting, yeah I wouldn’t worry about iptables. The use of networks doesn’t have to be aggressive; it’s like a two-liner to set up a private network.
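
                    Something like (names made up):

                    docker network create backend
                    docker run -d --name db --network backend postgres:16   # no -p, so nothing is published on the host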

                2. 4

                  I’ve used docker on single-node hosts for several years, employing different strategies. I’ve used compose, I’ve used Ansible, and I’ve used bash scripts. This has all been “boring” to me, in the sense that the only issues I’ve run into were ones I can attribute to my own misunderstanding of OCI containers and Docker specifically as I learned their ins and outs.

                  These days I’m transitioning to Podman for rootless containers and pods. As for managing the containers, I’m exploring either continuing to use Ubuntu for tighter integration with systemd using Quadlet, or Alpine Linux as a minimal host with services configured using OpenRC.
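
                  To give an idea, a Quadlet unit is just a small ini-style file that Podman turns into a systemd service; a rough sketch (name and image are made up):

                  mkdir -p ~/.config/containers/systemd
                  cat > ~/.config/containers/systemd/app.container <<'EOF'
                  [Container]
                  Image=ghcr.io/example/app:latest
                  PublishPort=127.0.0.1:8000:8000

                  [Install]
                  WantedBy=default.target
                  EOF
                  systemctl --user daemon-reload
                  systemctl --user start app.service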

                  Do note that Docker did recommend against running compose in production at one point, but that’s no longer the case. For most single node setups, compose is good enough unless you require complex deployment strategies, replication, etc.

                  There’s also kamal, but I’ve not used it personally.

                  1. 2

                    Thank you for sharing! Though this is adjacent to my original post, I’d love to see a detailed writeup where someone explains how they do rootless podman in production. My experience trying to set up something rootless (I can’t remember what it was) was a failure; I simply couldn’t get it to work.

                    1. 2

                      Fortunately, setting up rootless containers in Podman is a far cry from setting up rootless containers in Docker. Podman’s documentation is a good start. Granted, I’m still exploring it, but getting up and running was pretty straightforward: I created a dedicated user to run the containers and updated /etc/subuid and /etc/subgid for said user. Again, I have not gotten into the weeds yet but I appreciated how simple it was to get going, particularly in Alpine Linux.
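
                      The gist of it, roughly (the user name and ranges are just examples):

                      # as root: give the dedicated user a subordinate UID/GID range
                      echo "containers:100000:65536" >> /etc/subuid
                      echo "containers:100000:65536" >> /etc/subgid
                      # on systemd hosts, let its services outlive the login session
                      loginctl enable-linger containers

                      # as that user, containers then run without root:
                      podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine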

                  2. 2

                    I’m currently cleaning up and testing two old Dell Latitude laptops that I’m planning to repurpose as home servers. I’ve tested both using Alpine Linux and I’ve found no issues yet. Ultimately they’ll run either FreeBSD or Alpine, and I’ll be able to decommission the VPS that I use for the things that I host.

                    I do need to tear them down to see what they look like internally. Ideally I should replace the thermal paste and clear out any dust inside.

                    I’ll also write a blog post as I’d like to document and share this.

                    1. 13

                      I host Miniflux. Setting it up and operating it is easy, though it does depend on Postgres.

                      I do most of my reading on an iPad, so I use NetNewsWire as an additional client.

                      Overall, I enjoy using it. Not only do I use it to follow blogs, but also as a read-it-later tool and to filter firehose feeds like HN, Lobsters, Mastodon, and Lemmy based on what I’m interested in.
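
                      For anyone curious, the setup is roughly the documented Docker one; something like this (credentials are placeholders, not my actual config):

                      docker network create miniflux
                      docker run -d --name miniflux-db --network miniflux \
                        -e POSTGRES_USER=miniflux -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=miniflux \
                        -v miniflux-db:/var/lib/postgresql/data postgres:16
                      docker run -d --name miniflux --network miniflux -p 127.0.0.1:8080:8080 \
                        -e DATABASE_URL="postgres://miniflux:secret@miniflux-db/miniflux?sslmode=disable" \
                        -e RUN_MIGRATIONS=1 -e CREATE_ADMIN=1 \
                        -e ADMIN_USERNAME=admin -e ADMIN_PASSWORD=changeme \
                        miniflux/miniflux:latest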

                      1. 2

                        What are you using to get feeds from HN and Lobsters?

                        1. 2

                          I use this one from Lobsters, and this one for HN. Miniflux supports filtering based on regex, so it only picks up entries that match the keywords I’m interested in following.
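
                          If I remember the field name right, it’s the per-feed “Keep Rules” setting, which is just a regex matched against entries; something along these lines (the keywords are only an example):

                          (?i)\b(sqlite|postgres|self-host(ed|ing)?)\b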

                          1. 1

                            To share my own experience I’m following Lobsters through their per-tag RSS feeds (https://lobste.rs/about#tagging). For following on Hackernews, I’m using a combination of https://hackernewsletter.com/?ref=find-your-newsletter and ChangeDetection or https://hnrss.github.io/ (discovered via https://blog.jim-nielsen.com/2024/hacker-news-clones/ )

                        2. 3

                          This is cool, though it reminds me just how many ways there are to approach settings in Django :-/. For what it’s worth, there’s a recently created DEP for improving the default Django project template.

                          1. 1

                            Yeah, I probably shouldn’t have talked about Django settings so much; maybe it distracts from what I really wanted to push: SOPS and esbuild.

                            1. 1

                              For what it’s worth, I appreciated your take on esbuild and organizing static files across apps, as well as having a UI app where you consolidate what’s shared amongst these other apps. I’ve been using Vite and django-vite with a top-level static_src directory, and organizing the modules and splitting the processed bundles accordingly is one of the pain points that I’ve run into.
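
                              For comparison, the esbuild side of this can be driven straight from the CLI, roughly like so (paths are invented, one entry point per app):

                              npx esbuild \
                                apps/dashboard/static_src/index.js \
                                apps/billing/static_src/index.js \
                                --bundle --minify --sourcemap \
                                --outdir=static/dist --entry-names=[dir]/[name]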

                          2. 13

                            But the recent Mastodon upgrade has caused a significant amount of performance degradation, and I think the only way to really solve it is going to be to throw a lot of money into hardware.

                            I sometimes think that Mastodon is an albatross around the neck of the Fediverse’s success. I’m glad that there are now quite a number of other instance serving software packages out there, but Masto is still far and away the leader in terms of sheer numbers, and I myself had the experience of running my own instance for a few years only to have the sheer weight and complexity of the Mastodon software make me throw up my hands and stop trying.

                            I hope other lighter-weight varieties like GoToSocial start to take off so we can see fewer announcements like this and more growth in the Fedi space.

                            1. 10

                              Yeah, Mastodon had a big head start, but nowadays the additional resource load and extra sysadmin load make it really hard to recommend over alternatives if you’re going to start a new server. I expect it’s mostly still just dominating due to inertia at this point.

                              But another big factor is open registrations; I think this makes it much, much more likely for an instance to shut down. Here’s a thread from the admin of another masto instance that’s been around since 2017: https://toot.cafe/@nolan/113394607836985100 and he credits his instance’s longevity to having closed registrations. He also says of the Mastodon version upgrade process: “I get anxious during big upgrades and my hands literally shake” which is like night and day vs the experience I have with my own gotosocial server I’ve been running over the past 2 years: https://technomancy.us/201

                              1. 6

                                I’ve been running Honk for a while now and it’s pretty good. My main complaint is that I had to patch in support for avatars, because Ted is a weirdo who prefers generating gravatar-esque (but less easily distinguished) icons instead.

                                I’m still annoyed by the lack of a good way to refer to “a Fediverse-based microblogging service in the style of Mastodon but not necessarily running the Mastodon software”, though.

                                1. 5

                                  Honk is neat but being able to say “Hey I like this!” feels like table stakes to me and that’s not included in Honk’s opinionated view of the world :)

                                  His bat and ball, so he gets to build whatever he wants, but not my cup of tea :)

                                2. 5

                                  the sheer weight and complexity of the Mastodon software make me throw up my hands

                                  The Mastodon architecture decisions seem really ill-considered on many levels:

                                  • pumping terabytes of data around the internet 🤷🏼
                                  • not able to show full replies on any post even with all the data pumped around
                                  • RoR not providing any feature velocity
                                  • deployment and operations challenging even for elite sysadmins
                                  • costs quite unpredictable and unsustainable

                                  And all of this architecture is so deeply locked in that I don’t see any of it changing.

                                  1. 2

                                    The “not showing replies” thing was a deal breaker for me. This, and server rules that change at the whim of the server admin: one day I noticed that some rules had been added to the regulations that placed me in the category of people who are explicitly not welcome. So, being a good law-abiding citizen, I removed my account.

                                  2. 4

                                    On the varieties front, there’s also snac2.

                                    1. 8

                                      I’ll be That Guy and say that I’m not really interested in increasing the number of internet-exposed C programs I run 😬

                                  3. 27

                                    If I need to put Linux on a computer for a non-enthusiast to use, I always go with a KDE desktop, and those people have been happy with it; familiar enough to get into, plenty functional. GNOME has an uphill battle to become a desktop for all…

                                    1. 14

                                      I’m an enthusiast and I’m very much liking Plasma 6.2.2. Such a great desktop.

                                      1. 10

                                      See, I find the opposite: most non-technical, everyday users prefer Gnome over KDE, especially with macOS and smartphone UIs being more popular these days.

                                        KDE might be windows-like, but it is too busy, too configurable for your average user. People care about apps, and getting things done.

                                        Gnome imitates the Apple/MacOS model and it’s working for them.

                                      Even I, as a power user, prefer Gnome over KDE. It gets out of my way and lets me get stuff done. I never fight it, it just works, and it’s pleasant to look at. It’s calm and not eye-straining.

                                      We really need to get away from this KDE vs Gnome war. People prefer different things, and I hate how both sides rag on each other constantly for not using the same UI as them. Let people use what they like. They can always switch later if their needs change.

                                        1. 19

                                          Gnome imitates the Apple/MacOS model and it’s working for them

                                          From my experience with GNOME, that imitation is superficial and often misses the point. Things I value on macOS, which have been there since at least 10.2 (some are 10+ years older, but I didn’t use the system back then, and some of them were on Classic MacOS, others on OPENSTEP):

                                          • Keyboard shortcuts are the same everywhere. Even in the terminal, command-c copies and command-v pastes. Command-, always brings up preferences. Keyboard navigation between UI elements works in an expected order. Text-navigation shortcuts are the same in every text field.
                                          • Menu layouts are consistent and everything is in menus and so discoverable. I can search for things in menus by typing in the help menu and it shows me the expanded menu, pointing to the item I want, so I can find it later.
                                          • Drag and drop just works, everywhere[1]. I expect to be able to select something in one app and drag it to another. Every document-driven app has a file proxy icon in the title bar, I can drag that to the terminal to paste the full path, drag it to an email to attach that file, and so on.
                                          • File choosers are uniform. If I want to save a file in a directory I have open in the Finder, I just drop that directory in the save dialog and it works.
                                        • Almost every app supports sudden termination. I can install updates and the system just force quits every app and they all come back in the same state. This includes the Terminal, which will restore all of my windows and tabs, in the previous location, in the same directory, with the same UUID in an environment variable, which I use to make all of my ssh sessions transparently reconnect after reboot.
                                          • Closely related, autosaving is the default everywhere, respecting Raskin’s First Law. Losing data is a user action. If my machine crashes (which hasn’t happened for a while), I don’t lose any data.
                                          • The system provides a bunch of services that apps integrate with. Spell checking is uniform across anything that uses NSTextView, which is most things because it’s rich enough to build a full DTP system. Address book, calendar, password management, and so on are all services provided to all apps and so only the really stupid ones (looking at you, MS Office) roll their own, everything else that needs these integrated with them.
                                          • Every app exposes functionality to AppleScript for scripting. For my first book, I did a load of diagrams in OmniGraffle (which remains my favourite drawing program). I wrote a rule in my Makefile to invoke an AppleScript to do the .graffle to .pdf conversion and this was about four lines of AppleScript.
                                        • Text Services extend the functionality of other applications as plug-in transform services. For a long time, I used one that took the selected text, typeset it with pdflatex, and embedded the source as metadata for reversing the transform. Suddenly, every rich text view on the system could include typeset mathematics.
                                          • Beyond AppleScript, there are a bunch of other nice integrations with the command line. The pbpaste and pbcopy commands let you exchange typed data with the clipboard (so you can pipe things through ImageMagick, for example, and pop the transformed result back on the clipboard). The open command will open a file with the default app (or, optionally, an app you specify), open . will open the current directory in the Finder, if you want to do some things that are easier in the GUI. The terminal and GUI worlds are easy to move between.
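
                                        A couple of tiny examples of that last point (file names made up):

                                        pbpaste | sort | pbcopy            # round-trip the clipboard through a pipeline
                                        open .                             # open the current directory in the Finder
                                        open -a Preview report.pdf         # open a file with a specific app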

                                          It’s not perfect and I could list a lot of things that could be improved in macOS, but GNOME always feels like someone has seen OS X through a telescope and tried to copy it. They’ve captured the look but little of the underlying behaviour that makes it a platform I actually enjoy using.

                                        [1] One of the first things I noticed when I started using Windows for the first time after a decade or so was that I couldn’t drag a slide from one PowerPoint presentation to another (this is now fixed), which worked perfectly on the Mac version of PowerPoint.

                                          1. 4

                                          I notice in general an awful lot of people focus way too much on the “g” and not enough on the “ui”. (This is also one of my criticisms of Wayland!) When I see a new gui library and they’re talking about “rendering” it is almost always a pass - it might look ok but it just doesn’t work right. Making it work right takes a lot of time…. and a coherent, focused vision to make it consistent, both things not easy to find among open source projects… or even commercial projects for that matter.

                                            I’ve not used too much Mac, but when I do, I really appreciate that menu search, it is legitimately great.

                                            1. 3

                                              I will acknowledge all those points but just say that Gnome is working towards that goal, even as a bunch of people give them shit for it. And the Wayland transition hasn’t helped things.

                                              Maybe some day Gnome will be closer, but it’s still something, and I vastly prefer it over KDE myself.

                                              1. 4

                                                I’d be more willing to believe GNOME were working towards those things if they weren’t among the things that were my objections to GNOME when we started Étoilé, twenty years ago.

                                                1. 2

                                                I’m not sure how I missed your connection to Étoilé up until now, but now that you’ve mentioned it I feel compelled to say that I’ve always found that project extremely interesting, but also difficult to engage with. I know about it (I used to follow GNUstep and its progress tracker religiously, back before Mac OS X was even in beta), but have unfortunately never actually gotten it running. (I still kick myself for not investing in Apple stock instead of a Sawtooth G4 + Mac OS X Server when StepWise was quoting me ~$8 a share, but that’s a different if adjacent story).

                                                  Is the Étoilé dream still alive? Or has it dispersed into other projects that I should be looking into?

                                                  1. 3

                                                  I came to the conclusion that some of the things I wanted to do were not feasible with current hardware. For example, I wanted users to be able to share documents for collaborative editing, and those documents to be able to include code (in some end-user programming environment) that invoked native libraries. That requires a level of sandboxing that isn’t feasible with MMU-based isolation and a level of close communication that isn’t feasible with SFI (things like WebAssembly). So I worked on hardware that can do it. We’ll be shipping microcontrollers next year and hopefully we or others will get to application cores not long after.

                                                    Quentin and Eric made CoreObject usable on iOS and a few apps used it. I wish Apple had adopted it. A uniform set of model objects that you can use for distributed editing and unlimited persistent undo would have made iOS so much nicer.

                                                    1. 1

                                                      We’ll be shipping microcontrollers next year and hopefully we or others will get to application cores not long after.

                                                      Étoilé 2.0 laptops soon 🙏

                                                      Quentin and Eric made CoreObject usable on iOS and a few apps used it. I wish Apple had adopted it. A uniform set of model objects that you can use for distributed editing and unlimited persistent undo would have made iOS so much nicer.

                                                      There’s some kind of CloudKit integration with Core Data, but it’s definitely not as elegant sounding.

                                            2. 2

                                        See, I find the opposite: most non-technical, everyday users prefer Gnome over KDE, especially with macOS and smartphone UIs being more popular these days.

                                              My sample size is small, so idk, but like one of the people is also someone who bought an iphone, absolutely hated it and then bought another android. (I saw that iphone just sitting on his desk for two months straight and was like “do you wanna sell that thing?” and now it has been sitting on my desk barely used for almost a year…. but eh, it was my test hardware for something I blogged about earlier this month, so it isn’t completely unused! I actually think I might switch to it when my current phone dies, but that could be many years still.)

                                              KDE might be windows-like, but it is too busy, too configurable for your average user. People care about apps, and getting things done.

                                              Some people. Maybe even most, but the title of this article is “for All”. The people I set up kubuntu for aren’t technical at all but they like the generally familiar look and enjoy playing with the configuration options.

                                              (I said this in another comment thread recently on a different link, but I do think the config options are a support burden, since you can’t consistently be like “look at the lower left of the screen” or similar, but I’m not convinced they’re a problem for the users themselves - just hopefully there’s an “undo” button for when something disappears!)

                                        Again, my sample size is very small too, but tbh I haven’t seen any rigorous study, so if it comes down to my small personal experience vs the small personal experience of someone else on the internet, about the only thing I’ll say with confidence is that a project aiming to be “for all” has an uphill battle ahead of it.

                                        Even I, as a power user, prefer Gnome over KDE. It gets out of my way and lets me get stuff done. I never fight it, it just works, and it’s pleasant to look at. It’s calm and not eye-straining.

                                              I personally use neither; they’re both suboptimal at best.

                                              I hate how both sides rag on each other constantly for not using the same UI as them.

                                              I guess you agree “a desktop for all” is an uphill battle too :)

                                              1. 1

                                                Ok, I will give you the “for all” part. I don’t think there ever will be a for all. Not everyone likes windows, not everyone likes Mac, and they have no choice there.

                                                On Linux you get choice and because of that, there will never be one true way.

                                                But we can definitely make it easier to onboard between all the options.

                                                I think Distros need to find a way to have Gnome/KDE/other wm, to be chosen at user creation and stop having all three Spins and duplicate Distros just to change WMs.

                                                I know it’s not an easy problem to solve, but it should be something to strive for. Or Gnome/KDE need to go the Zorin route and support multiple layout presets.

                                                1. 2

                                                  I think Distros need to find a way to have Gnome/KDE/other wm, to be chosen at user creation and stop having all three Spins and duplicate Distros just to change WMs.

                                            That’s how it has worked on Slackware Linux for as long as I have used it! They used to include all of them in the default install so you could pick the one you want when you log in. Nowadays though, the default install no longer includes gnome - it has kde, xfce, and a number of the traditional smaller window managers.

                                                  One of the reasons why they changed is just that it was like an extra gigabyte of download for things their typical user wouldn’t use; they’d pick one or the other and leave the rest just sitting there.

                                                  But this is actually why I was able to easily pick neither lol, just a case of running the other already installed options.

                                              2. 2

                                                We really need to get away from this KDE vs Gnome war.

                                                Not that hard. I steer non-technical users to Cinnamon, which is a simple, stable environment modeled after ‘classic’ desktops with a taskbar with a menu button.

                                          It was developed by Linux Mint, generally regarded as one of the most user-friendly distros and also one of the most popular. So I’d say it is a well-known alternative.

                                                1. 1

                                                  I loved cinnamon, but it stagnated for a long time. Now I need it to get Wayland support to be willing to suggest it.

                                              3. 5

                                                I’m a long time KDE fan (like late 1990s) and have constantly been surprised by how much traction GNOME seems to get from the major vendors despite being a bit more radical versus the UI paradigms casual users are more accustomed to. That’s not to say there is anything wrong with GNOME, but it feels a lot different than Windows and that is a potential barrier and they’ve committed to pretty significant departures from the GNOME 2 style UI.

                                                1. 5

                                                  but it feels a lot different than Windows

                                            ~60% of the world is on mobile (according to statcounter). The number of smart TVs that not-programming-forum people don’t mind using (or can stand using) is increasing too. At some point, being Windows-like is not going to be a benefit compared to being app-ish / page-ish. (Even Microsoft knows it’s coming, hence Windows 8, though they jumped in a bit early.)

                                            KDE and Gnome both have ecosystems that target Phone, Tablet, and Desktop use (with KDE also targeting TVs).

                                                  1. 2

                                                    Sure and closer to 100% of PC users would have a smartphone, tablet, or TV with some alternative UI. In my humble opinion, those UIs are great on their respective form factors but suck for the desktop where the Windows (or to split hairs you can call it original UNIX desktop) UI works fine. And yes I agree KDE scales fine to TV or convertibles, I have used it on both.

                                              KDE incorporated a lot of minor but nice UI improvements that seem conspicuously similar to features that eventually landed in Windows. I understand a lot of people have negative connotations around Windows in general, so to be clear, I do not think KDE is a Windows work-alike, nor bound to repeat Windows’ mistakes or lethargy… but it also doesn’t feel so different just for the sake of being different, like GNOME 3+.

                                                2. 1

                                                  Yep. I don’t even prefer kde on my computers, but it’s the least surprising. I think gnome is intuitive enough to be learnable, but it’d be jarring like the windows 8 metro menu for the non-technical people in my life, I think.

                                                  So I guess that means for better or worse, SteamOS is the immutable flatpak oriented Linux desktop for the masses and that’s weird!

                                                  1. 1

                                                    What’s a good distro that ships with a KDE desktop?

                                                    1. 4

                                                      Fedora has a KDE variant, for both the workstation and atomic versions (Kinoite).

                                                      1. 2

                                                        NixOS has Plasma 5 and 6.

                                                      2. 1

                                                  I’ve been using Gnome on my laptop for quite some time, and KDE on my Steam Deck. I personally like the look and feel of Gnome much more. KDE always feels a little bit too condensed and sometimes cluttered, some would say functional. But in the end both are great and I like to use both.

                                                      3. 7

                                                  The author’s article would be stronger if it did not presuppose a particular attitude towards PHP.

                                                  Consider the expressions used, such as “We were all writing PHP to our heart’s content, … we were solving smaller problems”, “We’re not going back to PHP”, “the old PHP code we all hated”, “that crazy PHP code”.

                                                        The author appears to think that his opinion of PHP (whether it comes from experience or cargo-culting) is the general consensus, and I’m not sure that’s the case.

                                                  The article would’ve benefitted from concrete examples comparing how specific problems are solved by the server-side rendered paradigm espoused by Next.js and its kin. It would be stronger still if it accounted for any increased complexity in the stack incurred by the latter. For example, I suspect that these days a PHP application would be simpler to deploy and maintain than Next.js, using PHP-FPM and Caddy.

                                                        Edit: typo

                                                        1. 6

                                                          I think if you mentally replace “PHP” with “PHP 3” everywhere it occurs in the article, it’s easier to see what the author means.

                                                    That said, I didn’t find it compelling either. And I suspect that, like some of the shitshows we used to see marshaling data from the client to the server when people started leaning too much on client-side stuff with PHP (and others… I remember having some real fun with JSP and ASP.net), we’re going to see it with React on the server. That seems to stem from an unwillingness to retain past lessons about trusting serialized data from an untrustworthy source.

                                                          I think having react on client and server is going to exacerbate that, and we’re about to see a new round of it.

                                                          In that respect, I think React on the server may turn out to be quite a bit like PHP 3.

                                                          1. 4

                                                      Hah, this comment inspired me to look up when PHP 3 was a thing… and it turns out PHP 4 came out in May 2000! So those of us with an anti-PHP 3 bias are 24 years out of date at this point.

                                                            1. 4

                                                              What got my attention when reading this was: “Looking back, I think it might’ve been an excuse to avoid writing too much JavaScript – and honestly, who could blame us?”

                                                        That’s an interesting idea, because I absolutely do do things to avoid writing too much JavaScript, explicitly so. And questioning that is worth something to me - am I blinded by an obsolete ideology? Was it ever REALLY a good idea, or was it unfairly biased even back then?

                                                              Those are great questions and despite the author’s attitude, I tried to keep an open mind… but like I hinted at in my other comment, the arguments either lack substance entirely or seem like they’re attacking strawmen rather than what I actually do/did. (I still have the code to some of my old work projects from 2005-2010 so I can check how I used to do it without the rose colored glasses!)

                                                              Of course, I have confirmation bias toward my existing beliefs too, but if this is to be actually overcome, authors need to attack it head on, not just assume everyone agrees with them already and knock down things that weren’t reality.

                                                              1. 2

                                                          The author seems emotionally invested in the JS frameworks right now, which probably makes it hard to reach an objective comparison. Which is fine (we aren’t robots), but they still present the history as if this were some settled argument and what’s new is an improvement over what came before.

                                                                I was excited by the prospect of rendering React on the server (components are a great idea for both server and client-rendered views IMO) but then found out some of them simply don’t work in a server context currently. That’s disappointing, I thought the whole point was to own the runtime/platform so you can render them wherever.

                                                              2. 2

                                                                Going the other way, I recently discovered feeds in space. This provides an RSS to ActivityPub bridge. You can publish things on your blog and each RSS entry will become an ActivityPub status.

                                                                It seems more people have a Mastodon or similar client than an RSS reader now.

                                                                I saw that jwz has a thing to use ActivityPub for blog comments, which I’d also like (with moderation).

                                                                1. 1

                                                            I did not know about feeds in space, interesting! Looks similar to EchoFeed, which I have not used but looks convenient for POSSE (Publish on your Own Site, Syndicate Elsewhere).

                                                                2. 4

                                                            Continuing to read Absolute FreeBSD. Can’t wait to get to the chapter on Jails, which is one of the main reasons I’m drawn to FreeBSD. Looking forward to seeing how it compares to my usual approach of a bare Linux install + containers via Docker.

                                                                  Will finish some drafts that I have on my blog and slowly ease out of vacation.

                                                                  1. 1

                                                              I hadn’t heard about NextDNS before; it seems to provide DNS-level ad-blocking. Do other people have experience with it, and where can I read more about it?

                                                                    1. 4

                                                                      I’ve been using it for about 3 years on my mobile devices which can’t run browser ad blockers. Easy to install as a Private DNS provider. It works well as a DNS replacement. As an ad blocker it’s so-so, DNS ad blocking has its limitations and is slowly losing the arms race against adtech. But it’s way better than nothing.

                                                                      Of the DNS ad blockers NextDNS is the best I know of. I definitely prefer it to running my own PiHole or whatever.

                                                                      1. 2

                                                                        I’ve been a paid user for a couple of years now, and originally started using it for its ad-blocking capabilities. I’ve also used it to curb my internet usage at times when I’ve needed that.

                                                                        I’ve set it up on home routers, individual devices (like iOS for when I’m on-the-go), and on my Tailscale tailnet. The price is fair and I’ve had no issues.

                                                                        1. 2

                                                                          If you are running your own equipment you can create blocklists at the network level without having to use a specific service. I have done this a few times for unbound from ad-block lists or with this CoreDNS plugin where I also added the NXDOMAIN settings (most other resolvers can do similar things or even have some native handling).
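
                                                                    For example, with unbound the generated file is just a pile of local-zone rules; a rough sketch (the list URL is a placeholder, and the include path depends on your distro):

                                                                    { echo "server:"
                                                                      curl -s https://example.com/ad-hosts.txt |
                                                                        awk '$1 == "0.0.0.0" { printf "  local-zone: \"%s\" always_nxdomain\n", $2 }'
                                                                    } > /etc/unbound/unbound.conf.d/adblock.conf
                                                                    unbound-checkconf && systemctl restart unbound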

                                                                          1. 1

                                                                            If I recall correctly from their launch, it was a pair of ex-Netflix engineers who started it. I recently heard about a similar effort with https://controld.com/

                                                                            1. 1

                                                                              It’s very popular on r/homelab, which has some real-world examples of people’s setup. I’ve been loving it for ad-blocking, but have plans on using some of their site restriction functions when my kids are a bit older.

                                                                            2. 16

                                                                              Lately I’ve been using Go and it’s been great!

                                                                              It’s easy to get started, easy to build and deploy, and easy to come back to whenever I decide something needs tending to. Keeping dependencies to a minimum helps. gofmt helps.

                                                                              Most times I can get away with using a single JSON file for persistence. If I need something more, I’ll add SQLite.

                                                                              If I need a UI I’ll use Go’s HTML templating and vanilla CSS/JS without any build tools. If I need something more, I’ll download and keep Alpine.js and HTMX scripts in the project’s public directory.

                                                                        If a side project looks and quacks like a static site, I’ll use Hugo in the same manner as I’ll use Go - no dependencies and plain vanilla CSS/JS.

                                                                        I keep a small VPS for anything that needs to be hosted. Go, Docker, and Caddy make operations pretty straightforward, and so does Tailscale for anything private. I’ve written about my approach for the latter here.
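
                                                                        The deploy step for the Go pieces usually amounts to little more than this (paths and names are made up):

                                                                        CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app ./cmd/app
                                                                        scp app user@vps:/srv/app/
                                                                        ssh user@vps 'systemctl restart app'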

                                                                              Edit: Included how I host side projects.

                                                                              1. 2

                                                                                UI I’ll use Go’s HTML templating

                                                                                Check out https://github.com/jba/templatecheck if you haven’t already. It has saved me so much time.

                                                                              2. 26

                                                                                I note you don’t have a robots.txt file; that is the usual way of instructing bots not to scrape. Amazonbot’s support page, linked from the user agent string, says they respect it.

                                                                                1. 6

                                                                            This is true, and was pointed out to me shortly after publishing :D Forgejo doesn’t provide a robots.txt by default, so I’d have to set up an nginx config snippet to do it for me - which is now on my todo list, maybe providing a default to Forgejo.

                                                                                  1. 6

                                                                                    Worth noting is that you can put a robots.txt in the custom path and it will be picked up, assuming that part of the Gitea code hasn’t been changed.

                                                                                    I.e. $CustomPath/public/robots.txt
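
                                                                              E.g. something like this, adjusting the custom path to wherever your instance keeps it:

                                                                              mkdir -p /var/lib/forgejo/custom/public
                                                                              cat > /var/lib/forgejo/custom/public/robots.txt <<'EOF'
                                                                              User-agent: Amazonbot
                                                                              Disallow: /
                                                                              EOF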

                                                                                    1. 1

                                                                                      Forgejo issue to add a sane default: https://codeberg.org/forgejo/forgejo/issues/923 (shouldn’t be too hard to implement, if anyone is interested :)

                                                                                    2. 2

                                                                                      I too host my own Forgejo instance but run a web server (Caddy in my case) in front of it.

                                                                                      Besides adding a robots.txt, you could block certain user agents.

                                                                              For anyone not using Cloudflare’s WAF, I have an example Ansible playbook to create the robots.txt file and Caddyfile based on a list of known user agents.
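
                                                                              The Caddy side can be as small as a matcher on the User-Agent header; a sketch (host, port, and bot list are illustrative):

                                                                              cat >> /etc/caddy/Caddyfile <<'EOF'
                                                                              git.example.com {
                                                                                  @bots header_regexp User-Agent (?i)(amazonbot|gptbot)
                                                                                  respond @bots 403
                                                                                  reverse_proxy 127.0.0.1:3000
                                                                              }
                                                                              EOF
                                                                              systemctl reload caddy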

                                                                                      EDIT: Updated for clarity.

                                                                                    3. 5

                                                                                      Unfortunately some of the AI bros have already figured out the telemarketer trick of creating a new file, or new parameter for robots.txt, that you must add to opt-out of their specific scraper, creating an arms race you can’t win. It’s a nice example of why the opt-out model doesn’t work unless every actor is a saint. Only opt-in is scalable.

                                                                                    4. 17

                                                                                      Talk about tradeoffs and downsides

                                                                              To me this is often the most valuable part. I understand some don’t feel like writing their blog post as if it were an RFC or pull request, but nuance is important and accounting for it elevates the conversation a lot. There are lots of things written out there about the “how”, but rarely do they go deep on the “why” and “when” vs “when not to”.

                                                                                      1. 7

                                                                                A lot of blogs are essentially “content marketing” for some company. The information may be accurate, but they’re writing to advocate a point of view. Having some downsides and skepticism helps prove a post isn’t just that.

                                                                                        1. 5

                                                                                          The only downside to our AwesomeWidget software, from our traditional computing division, is that all your competitors will be lobbying the government to sue you for being a monopoly in your sector. But, for that we have the Lawyer2050 software from our advanced computing division. The only downside to that is the bajillion dollars it will cost you. However, for that we have the LendMeMoney program from our finance division. The only downside to that …

                                                                                      2. 7

                                                                                        A push-based strategy of git push <server> main a la heroku has been just enough for me in several projects.

                                                                                On the server, I initialize a bare git repo (e.g. git init --bare) in /srv/<repo>/git. I scp a post-receive script to /srv/<repo>/git/hooks/post-receive and make it executable. This script has just two relevant lines:

                                                                                        GIT_WORK_TREE=/srv/<repo>/app git checkout main -f
                                                                                        /srv/<repo>/deploy.sh
                                                                                        

                                                                                        I then scp the deploy.sh script which will do whatever is needed to deploy the project with whatever got checked out in /srv/<repo>/app. This could be go build, docker build, etc.

                                                                                On the client, I add this git repo as a remote via SSH and then just push!
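
                                                                                Concretely, the client side is just (remote name and host are whatever you like):

                                                                                git remote add prod ssh://deploy@myserver/srv/<repo>/git
                                                                                git push prod main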

                                                                                I learned this from Deployment from Scratch and it has served me well for simple projects.

                                                                                        1. 1

                                                                                          Finally! I can try it on my computer!

                                                                                          I’m not one to follow web trends (I just stick with nextjs and npm) but I’ve been interested to try Bun and see what all the hype (and venture funding) is about

                                                                                          1. 2

                                                                                            You’re likely set in your tools for Node.js but imagine if all those tools just came with Node.js in the first place. In my line of work just having the typescript transpiler built into the runtime is a huge win.

                                                                                            1. 3

                                                                                              Honestly a compelling argument, but I had hopes for Deno and it seems to be a constant hobby project - a joke I hear is the only production Deno software is the Deno homepage.

                                                                                              There’s also jsr.io that popped up recently too. I may experiment a bit with Bun, assuming Next.js works fine with it!

                                                                                              1. 4

                                                                                                What makes you say that? I’ve got a good handful of Deno projects in production and am a huge fan at the moment. The tooling, Deno std lib, and npm compatibility make the project pretty compelling for me.

                                                                                                1. 1

                                                                                                  Oh it’s just a general vibe, anecdata not fact, I’m not aware of any businesses using Deno in production but I’m sure there are! What sort of products have you built with Deno currently? I’m quite curious how it stacks up against the rest - particularly in hiring/longevity discussions etc. (which is why I typically tend to advise companies I work with to just stick with the most boring and widely used tools)

                                                                                                  1. 2

                                                                                                    Ah gotcha! As far as businesses using Deno, Slack is using it for their new automation platform, Netlify and Supabase for their serverless/functions offering, etc. It has some big deployments out there for sure!

                                                                                                    The main Deno application we have in production is a small service that collects analytics on internal tools’ usage and reports it up to Datadog. It’s nothing big or shiny, but it works wonderfully and the repository is one of the simplest in our collection.

                                                                                                    And I think your advice is correct, just for the record ;)

                                                                                                    1. 3

                                                                                                      The folks at val.town are also using it in production in the current iteration of their runtime.

                                                                                                      1. 1

                                                                                                        Oh snap! I didn’t realize val.town was using Deno! I’ll need to take a second look!

                                                                                                      2. 1

                                                                                                        Oh nice, that’s much better than I thought, I’m glad Deno is seeing some success in the enterprise world! Thanks 😁

                                                                                                2. 2

                                                                                                  Yeah, never again the bullshit of having to figure out node-ts/ts-node. It should just run.

                                                                                              2. 15

                                                                                                This seems like pretty bad advice. I don’t think we should be teaching people to run systems like this. Use something like Puppet or Nix or (it’s not to my taste but) Ansible, learn it properly, and you get to live in a world where not only are you “independent”, you can also change your config without running the whole install script again, keep multiple machines in sync, rebuild effortlessly if you lose one, and undo changes that didn’t work out. As well as just… knowing what your configuration is. It’s enormously valuable to me that I can just look at the code in one repo to see how my server deviates from the default install, rather than having to go and look at every individual thing that could have changed.

                                                                                                Also there’s this:

                                                                                                You honestly don’t have to do anything to maintain your server. It will just work as-is for decades!

                                                                                                Telling people to bet the farm on OpenBSD’s (and all the other software their script installs) not having a remote exploit in the next several decades is not just a weird take; it’s either maliciously reckless or just malicious.

                                                                                                1. 24

                                                                                                  I actually think this is pretty great advice for somebody starting out and who is intimidated by setup.

                                                                                                  I liken this article to “Growing stuff is easy–here, buy a succulent, put it near a window, occasionally water it, and it’ll basically never die”–by contrast, you seem to be concerned that the author isn’t telling folks to get the Farmer’s Almanac, a support contract with John Deere, and do soil sample testing.

                                                                                                  It’s okay for folks starting out to not have configuration as code and to not run a FIPS-compliant system.

                                                                                                  As well as just… knowing what your configuration is.

                                                                                                  If I’m the only person administering my server, and I haven’t been too crazy in setup, I can look at my server and figure out what the configuration is. Sure, there are tools that make this easier–I’ve run Nix for years now for exactly this purpose–but for somebody who’s just trying to back up some movies and serve a webpage about their cat, this is fine.

                                                                                                  ~

                                                                                                  Freaking engineers, I swear–for normies starting out, or experimenting, it’s more important to have easy-to-follow and hard-to-fuck-up steps than to have industry best practices.

                                                                                                  1. 4

                                                                                                    I agree, and this tracks with my personal experience when I first started running my own VPS many years ago. Eventually I got up to speed with the intricacies of the underlying system, doing automation, etc.

                                                                                                    The author of this article was on Tim Ferriss’s podcast a few months back, and they spent approximately 20 minutes talking about this subject. I imagine more than one person is going to attempt running their own server, with varying degrees of success. I fret at the thought of newcomers getting pwned, but as you point out, you don’t have to run critical infrastructure. Start with a website, leave some slack in the system, and keep learning.

                                                                                                    We need more people taking ownership of their digital lives!

                                                                                                    1. 4

                                                                                                      It’s okay for folks starting out […] to not run a FIPS-compliant system.

                                                                                                      Sure, but it’s one thing to tell people “here’s a reasonable default configuration that should be fairly low-maintenance.” It’s something else entirely to say “You honestly don’t have to do anything to maintain your server. It will just work as-is for decades!” To an experienced person, that is an obvious exaggeration. To a newbie, it’s a recipe for running a system that will slowly but surely become insecure over time.

                                                                                                      1. 9

                                                                                                        It will just work as-is for decades!

                                                                                                        I think the OpenBSD server I set up for my parents to handle on-demand dialup and file sharing on the LAN ran for a decade with basically no maintenance. When I was at grad school my father had to call me up once, and I used him as a teletype to fix some minor thing that had changed with the network configuration.

                                                                                                        It’s actually quite reasonable advice. People have just forgotten that it’s reasonable.

                                                                                                      2. 2

                                                                                                        It’s okay for folks starting out to not have configuration as code and to not run a FIPS-compliant system.

                                                                                                        Not being FIPS-compliant is a plus.

                                                                                                      3. 12

                                                                                                        the advice isn’t bad, it’s just not aimed at folks with experience. for someone just starting out, using nix/puppet is bad advice - how could a person know how to use nix or puppet when they don’t know how to configure a system in the first place?

                                                                                                        besides, managing a system by hand is just fine for personal use - it’s low overhead and understandable - i know nix & chef & puppet & all the rest, and still prefer managing my personal systems by hand simply because i find it more enjoyable.

                                                                                                        1. 5

                                                                                                          I would agree. While I tend to be skeptical of step-by-step guides that don’t talk about alternatives in the space or go in depth on how to do things in a ‘nicer’ way later, the introduction was clear that this one is meant to be about as simplified as it can be, with all of the choices taken out. I’m not the target, but maybe this style of guide is how some prefer to gain their understanding & intuition–& with the broad goal of taking back ownership of one’s data, having more styles of guides for different kinds of folks is good.

                                                                                                          1. 1

                                                                                                            Sorry for replying so much later; I went on a wild camping holiday for Easter but I wanted to respond to some of the pushback here.

                                                                                                            Basically, I think learning good habits is much easier than learning bad habits and then trying to replace them with better ones. Empirically hardly anyone does replace them. I think one of the reasons Nix has done so well is it forces people to do things right from the beginning, and once people have experienced doing things right, they like it.

                                                                                                            I can’t really argue with your enjoying managing your system by hand, but this article isn’t encouraging people to pick up system administration as a hobby—it’s encouraging them to do the bare minimum of sysadmin as a means to an end. I believe that using config management is ultimately somewhat lower-maintenance than doing everything by hand. I also think it’s much safer, in that if your server mysteriously disappears or whatever, you have a greater chance of setting up its replacement in a timely fashion. (Note the author wants us to self-host email, and missing emails can occasionally have serious consequences.)

                                                                                                            how could a person know how to use nix or puppet when they don’t know how to configure a system in the first place?

                                                                                                            I’ve configured things with Nix and Puppet that I’ve never configured without. I have a friend who self-hosts email with Nix and doesn’t even know which software they’re using to do it. I’m not sure that I’d personally recommend taking such a zoomed-out view of a computer you’re responsible for, but I think this is pretty strong evidence that using configuration management is easier than not.

                                                                                                            1. 2

                                                                                                              using nix/puppet does not automatically mean you’re “doing things right” - that’s a big assumption and depends on a ton of context.

                                                                                                              i used nix to manage my systems in 2016, and ultimately fell out of love with it because i didn’t think the level of abstraction was worth it - it’s nice when it works, but when something inevitably goes wrong it turns into hell pretty quickly.

                                                                                                              first, nix works differently than “normal” routes of managing systems, so you’ve locked yourself out of the utility of generic online resources. is that “doing things right”?

                                                                                                              i’ve worked in the config management space for years, and can say with certainty that it’s more work to code, maintain, and update a system like puppet than it is to manage systems by hand.

                                                                                                              there’s a tipping point where the complexity starts to be “worth it”, and imo it’s somewhere around 10 systems.

                                                                                                              sometimes all you need to do is apt install a single package and start a single service and update the thing once a year - no abstraction, direct management with obvious patterns when things go wrong - there’s a lot of value in doing things simply.

                                                                                                              imo, everyone should learn the fundamentals before they learn the abstraction layer, especially if they’re planning on being responsible for the uptime of the thing.

                                                                                                              your insistence that beginners learn nix because it’s unequivocally “correct” is what i take issue with. it is not always correct, and imo beginners are just gonna be frustrated by it.

                                                                                                              you shouldn’t learn javascript by starting with React, and you shouldn’t learn Linux Administration by starting with nix.

                                                                                                              1. 1

                                                                                                                using nix/puppet does not automatically mean you’re “doing things right” - that’s a big assumption and depends on a ton of context

                                                                                                                I singled Nix out because it pretty much forces you into doing things declaratively while less absolutist tools require more discipline. Obviously nothing can make you do everything right, but Nix makes a very good effort at stopping you from having configuration that is not code, which is the “do things right” I was talking about here.

                                                                                                                first, nix works differently than “normal” routes of managing systems, so you’ve locked yourself out of the utility of generic online resources. is that “doing things right”?

                                                                                                                I don’t think I would claim it as an advantage but I don’t think it’s that bad. I find it very difficult to find useful information about sysadmin stuff from generic online resources anyway, just because there’s so much SEO content farm stuff.

                                                                                                                there’s a tipping point where the complexity starts to be “worth it”, and imo it’s somewhere around 10 systems

                                                                                                                I have one server now, and I think it’s worth it. I don’t really know what to say to your assertion that it’s more work. It’s not for me. I especially like how easy updates and reinstallations are.

                                                                                                                your insistence that beginners learn nix because it’s unequivocally “correct” is what i take issue with. it is not always correct, and imo beginners are just gonna be frustrated by it

                                                                                                                I don’t think they should always use Nix, I just think they should use something reproducible.

                                                                                                                and you shouldn’t learn Linux Administration by starting with nix

                                                                                                                But, again, this isn’t a primer on How To Become A Sysadmin (definitely not a Linux one since OP wants you to use OpenBSD…). It’s aimed—I think—at people who were quite happy not doing any of this. I’m also a professional sysadmin and I believe I know the value of being familiar with the basics. But I (a) don’t think that implies one has to start with them, any more than one needs to start programming in assembly language; and (b) don’t think you have to understand everything you use to such a high standard. If you see a server as something akin to a kitchen appliance, your choices are:

                                                                                                                • Do it all yourself (you can’t, you don’t know how computers work)
                                                                                                                • Do the things this article says, install a weird OS and then run some random script you don’t understand and type a bunch of stuff into it
                                                                                                                • Follow a hypothetical config-as-code version of this: copy someone’s random code you don’t understand and type a bunch of stuff into it, run a weird program/install a differently weird OS

                                                                                                                If that’s where you get off, they look pretty similar to me. But I really think you’re on a better footing with config management if you ever want to make any changes or reinstall your system or whatever.

                                                                                                                I was kind of taken aback by how many people disagreed with me about this one. I think the main thing that surprised me was that folks think config management is more work or only appropriate for hardened professionals or huge fleets of servers. And I just don’t see it that way. I feel like managing a computer manually is akin to bonsai or maintaining a vintage car or something—it’s a respectable pursuit and I can understand why people want to opt into doing it, but to the non-enthusiast it’s just a chore.

                                                                                                          2. 4

                                                                                                            Not to mention the massive red flag in the form of

                                                                                                            SSH into root, and get my script

                                                                                                            right after making some SSH keys

                                                                                                            1. 2

                                                                                                              What exactly is a red flag in this for you ?

                                                                                                              If it’s the “SSH into root” part: with password authentication disabled, it doesn’t seem like an issue to me. Using a custom user with sudo permissions doesn’t add much in terms of security.

                                                                                                              If it’s the “and get my script” part, then OK - you shouldn’t execute a script from an untrusted source without reviewing it.

                                                                                                              1. 4

                                                                                                                From the step

                                                                                                                Windows? Start → Windows PowerShell → Windows PowerShell

                                                                                                                Mac? Applications → Utilities → Terminal

                                                                                                                It seems that your intended audience for this is nontechnical people. If so, is it reasonable to ask them to review a long bash script?

                                                                                                                1. 2

                                                                                                                  Agreed, it is not. And overall I don’t think it’s a good idea to ask non-technical people to self-host.

                                                                                                                2. 2

                                                                                                                  There is no operational difference between executing a script from an untrusted source on a VPS and having your services managed entirely by an untrusted source. Is there?

                                                                                                              2. 1

                                                                                                                Telling people to bet the farm on OpenBSD’s (and all the other software their script installs) not having a remote exploit in the next several decades is not just a weird take; it’s either maliciously reckless or just malicious.

                                                                                                                That’s hard to take seriously without an argument for why betting the farm on something else is better.

                                                                                                                1. 4

                                                                                                                  I mean, you could just… update your server? I thought we had pretty much everyone convinced on this point, to be honest.

                                                                                                                  1. 1

                                                                                                                    I see - you would prefer that they advise people to update their server rather than just saying to do it “if you like.” But a lot of people won’t want to take the time either way.

                                                                                                              3. 1
                                                                                                                • hardening my new akkoma server
                                                                                                                • setting up my own IRC bouncer so i can stop aping off my friend’s
                                                                                                                • making git-over-ssh work on my forgejo server
                                                                                                                • finally making the decision of “does this monitor i’m not using stay on my desk or not?”
                                                                                                                1. 1

                                                                                                                  update: the monitor is gone

                                                                                                                  1. 1

                                                                                                                    Did you get git-over-ssh working? I run a forgejo instance using containers and could not get SSH to work due to an issue with the uid in the Dockerfile that they provide. May check again this weekend to see if this has been fixed.