Threads for delucks

  1. 3

    This is the missing website that I’ve wanted for years while comparison shopping single board computers! Thanks for compiling this data and making a site for it, I’m going to be coming back as my projects demand new hardware.

    1. 12

      Personal inventory management.

      I want to be able to slap a QR UUID sticker on a box, then add the contents of the box to a db with:

      1. Image(s) of object
      2. Name
      3. UPC of item if possible
      4. Purchase information if available

      Lastly, add all of the images to an image recognition system so I can look up what an item is and where it goes. I’d also want to be able to query from the other direction (e.g. where is tiny tiny machine square).

      It ultimately would not be helpful for keeping organized, and potentially counterproductive, but it’s nice to dream that this would magically fix my poor organization skills.

      1. 5

        The image recognition system isn’t present, but snipe-it can do the inventory management parts of this idea: UPC and purchase info, plus QR code generation. I self-host it to track computer parts and systems through the process of repair; it’s been very useful in staying organized so far.

        1. 3

          I’m actually working on an app like this. You can take a photo, scan a barcode, and add a description. There’s also a barcode generation tool. Any feedback is welcome.

          https://stuffo.app

          1. 1

            What would you need to inventory? Like your closet of clothes? Or stuff in your storage or garage? Or for your small Shopify business?

          1. 20

            I absolutely just want a tiny, silent, cool ePaper laptop capable of running Linux/BSD in a purely text mode.

            1. 9

              I would also love a Linux compatible e-ink laptop. I look for one from time to time, but there has never been one that has been worth the price for me. There are some things on the market that come close, but they normally have a few things that I don’t like and come with a price tag too high for me to want to compromise.

              1. 7

                Exactly what I’m dreaming of. I even asked the MNT founder about it: https://mamot.fr/web/@ploum/109082688438688769

                I’ve written about my quest here: https://ploum.net/the-computer-built-to-last-50-years/

                I thought that Astrohaus was nailing it.

                Unfortunately, I’m really angry at Astrohaus over the Freewrite. Their software is a shame: it forces you to use a proprietary cloud and is full of bugs. My Freewrite, despite its weight, has no more battery life than my laptop. The Traveler has a very, very bad keyboard, to the point of being unusable for me (I had to send it back because some keys were always quadrupled; now, the space bar only works if I press it really violently). See gemini://rawtext.club/~ploum/2021-10-07.gmi

                I’m placing all my hope in the MNT Pocket, even if I would need to adapt my layout to its keyboard. I’m hoping to see an eink version soon, to use with only a terminal. Neovim, Neomutt and Offpunk are all I need 95% of the time ;-)

                  1. 4

                    I’ve written about my quest here: https://ploum.net/the-computer-built-to-last-50-years/

                    This is very interesting, thank you for sharing. One point I’m unsure about is storage… I’m not aware of any existing storage technologies that would last more than a dozen years. Mechanical drives fail because they’re just fragile, especially in a computer that can be easily moved around. SSDs/flash are less fragile, but blocks still “go bad”, though wear leveling helps a little I guess. Maybe some purpose-built SSD with a huge number of spare blocks would last 50 years?

                    1. 4

                      SSDs also require power, at least sporadically, for them to retain data. I’ve seen a recommendation to power up and read all the data on an SSD once yearly to make sure there’s no data loss.
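
                      For the read-all-the-data part, something as blunt as this works (the device name is illustrative; double-check it before running):

                        sudo dd if=/dev/sda of=/dev/null bs=4M status=progress   # forces every block to be read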

                    2. 2

                      addendum: I think your blogpost would be worth its own submission

                      1. 1

                        Thanks. It has already been submitted: https://lobste.rs/s/1b1rxk/computer_built_last_50_years

                        1. 1

                          my bad, missed that when looking for it.

                      2. 2

                        It has come up in discussions around the Reform in the past (I think on the Reform forums), and @mntmn said that it’d be an interesting option, but at least at the time there wasn’t really anything good enough available that could be easily used.

                        Would not surprise me to see someone do it as a modding project though, if they can find a usable panel in a close-enough size.

                      3. 3

                        I haven’t tried it, but the Remarkable2 is apparently running Linux; https://www.mashupsthatmatter.com/blog/USB-keyboard-reMarkable2 walks through adapting it to take a USB keyboard, but it looks like it’s still a bit of work.

                        1. 1

                          I’ve seen that it was possible to install Parabola Linux on the rM1. I haven’t tried it yet but it’s indeed a very interesting possibility.

                          1. 1

                            I have an rM2, and it has some Linux distro installed by default. I haven’t messed around with installing a totally separate OS, but I have used https://toltec-dev.org to install some homebrew apps as well as general Linux utilities

                      4. 9

                        I read this comment from my Kobo Clara HD e-reader which is running full NixOS (with Rakuten’s vendor kernel) - it’s not a laptop but it is kinda a tablet and does support OTG.

                        I’m hoping the Kobo Clara HD 2e is similarly hackable because it has Bluetooth. I’d love to be able to use a wireless keyboard and have audio, in the future.

                        1. 1

                          Since writing this, I had a quick look at the Kobo Clara 2e and it looks close enough that I’m going to gamble that my existing NixOS installation might boot. Purchased for $209AUD. Let’s see.

                          1. 1

                            Huh, nice. I have a Kobo Clara HD, but the only hacking I’ve done to it is to install KOReader. It would be pretty nice to be able to write with it, and to have a Gemini client on it.

                            1. 3

                              I tried Gemini yesterday! nix-shell -p castor:

                              https://i.imgur.com/TXGCfmq.jpeg

                              1. 1

                                Looks good!

                          1. 8

                            Once I figure out how (and do some more checking), I will try to submit a pull request. I wish I understood git better, but in spite of your help, I still don’t have a proper understanding, so this may take a while.

                            If Brian Kernighan feels this way, we can all feel better about headaches with git.

                            1. 11

                              If Brian Kernighan feels this way, we can all feel better about headaches with git.

                              But is he talking about git, or about Github? The term “pull request” as Github uses it confuses me to this day, while I generally don’t have problems with git itself.

                              1. 2

                                But is he talking about git, or about Github? The term “pull request” as Github uses it confuses me to this day, while I generally don’t have problems with git itself.

                                You may be right. The blurred edges between git and GitHub sometimes confuse me too.

                                E.g., I only recently realized that if you are in the middle of a pull request on GitHub, and you want to include further changes in the same pull request (e.g., you forgot to update docs and the README), the best thing to do is to use git commit --amend and then force push to the branch on your repo where the pull request originates. If you do that, then GitHub will automatically update the original pull request with the further changes, and everything goes smoothly. On the other hand, if you make the changes as a new commit and then push that to the branch, the new commit is not reflected in the pull request. I suppose this makes sense in one way: the pull request was made starting from a certain commit, and it only runs up to that commit. On the other hand, force pushing in the middle of a pull request initially feels dangerous and potentially destructive.
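
                                For concreteness, a minimal sketch of that amend-and-force-push flow (branch and file names here are hypothetical):

                                  git add README.md docs/                        # stage the forgotten changes
                                  git commit --amend --no-edit                   # fold them into the previous commit
                                  git push --force-with-lease origin fix-docs    # unlike plain --force, refuses to clobber commits you haven't seen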

                                1. 12

                                  if you make the changes as a new commit and then push that to the branch, the new commit is not reflected in the pull request.

                                  I use both approaches regularly and each leads to my changes being incorporated into the pull request.

                                  1. 1

                                    I use both approaches regularly and each leads to my changes being incorporated into the pull request.

                                    Thanks for the note. I would have sworn the new commit method didn’t work for me recently. But I may have done something else wrong, or maybe I misunderstood what was happening to the PR. In any case, I’ll keep that in mind for next time.

                                    1. 2

                                      I have had problems in the past where I thought I was pushing to one remote branch, but actually I was pushing somewhere else. This is a pretty central UX failure of git: the whole concept of remotes and locals is needlessly confused.

                                      If I were in charge of git, branches would have relative and absolute addresses. local/main would be the absolute address of your local main vs. origin/main. As it is, in git today there’s origin/main, but no way of saying “local/main”.

                                      Then there’s the fact that origin/main is only sort of remote. It should just be that origin/main is absolutely remote, and any time you try to interact with it, git behind the scenes tries to fetch, and only if that fails uses the cache, with a warning that it couldn’t fetch. Instead, git will sometimes talk to origin and sometimes not, and you just have to know which commands do or do not cause a fetch.

                                      Then the whole concept of branches and upstreams is needlessly baroque. There’s a lot of implicit state in git (when you commit, does the commit update a branch or just make a detached commit? where does it rebase from or push to if you don’t specify?) that comes with a lot of command-specific terminology. It should just be that there’s your current default branch and your current default upstream, and you can change either on an ad hoc basis for any command with --branch or --upstream; instead you just have to memorize how each command works. It’s an awful UX all around.
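
                                      In the meantime, a few stock git commands at least make the implicit state visible (the branch name is hypothetical):

                                        git branch -vv                          # show each local branch's upstream and divergence
                                        git remote show origin                  # show what push/pull would actually do per branch
                                        git push origin my-feature:my-feature   # spell out source and destination explicitly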

                              2. 1

                                Has there ever been an explanation of git’s interface? I feel that historical context would help a lot of people get into the right frame of mind for using it. I know how to use git well, but I would still love to read this.

                                More on topic, let me try to start a discussion: what are the most involved awk programs people have come across? I feel the language is heavily underutilized.

                                1. 5

                                  More on topic, let me try to start a discussion: what are the most involved awk programs people have come across? I feel the language is heavily underutilized.

                                  Not especially involved, but I always thought that Ward Cunningham’s expense calculator was very clever.

                                    1. 3

                                      More on topic, let me try to start a discussion: what are the most involved awk programs people have come across? I feel the language is heavily underutilized.

                                      Here’s a 3D game written in gawk.

                                      1. 1

                                        More on topic, let me try to start a discussion: what are the most involved awk programs people have come across? I feel the language is heavily underutilized.

                                        At my first software engineering job in the digital mapping world, we had a data quality check tool written by someone who only knew shell/awk. This thing was a 2000-line gawk and ksh mish-mash. It worked and it was quite well structured, but I still feel it would have been a bit easier to maintain in another language. This was in the early 2000s.

                                    1. 15

                                      The proposed solution is to just push off the complexity of accurate solar timekeeping until the next millennium, which could create a Y3K bug for anyone tracking accurate solar time who didn’t adopt the proposal. This is a civil and social problem that could be addressed with standards, treaties, and diplomacy, then accurately modeled in software once the human element is figured out. I’m no expert in timekeeping, but Facebook’s history has made it clear that you can’t solve civil and social problems with code alone.

                                      1. 3

                                        … Y3K bug …

                                        It seems likely to me that human civilization as we know it will be unrecognizable in less than 100 years. Take your pick: climate change, resource wars, genetically engineered plagues, robot uprising, uploading, or (in the best case) a smooth transition to a post-scarcity society.

                                        I vote for not worrying about solar timekeeping, and if it can be put off to Y3K, that’s fine.

                                      1. 8

                                        I’m glad the author is happy, and it’s true Firefox’s privacy focus is very nice, but I think it bears mentioning that a stunning and ever-growing number of websites I encounter just don’t work quite right with Firefox :(

                                        I think some web devs have decided that Chrome has won and don’t bother testing on anything else.

                                        1. 17

                                          Since I uBlock most shite, a lot of websites work better on firefox.

                                          Not as the authors intended…. but that is a different metric…

                                          1. 5

                                            I admire your intestinal fortitude sir :) I can barely use the modern web when it works exactly as authors intended :)

                                            1. 3

                                              Every JavaScript, CSS, UX and Graphics Designer hates my intestines. I make heavy use of, and strongly recommend https://addons.mozilla.org/en-US/firefox/addon/tranquility-1/

                                              Your super cluttered, javascript frameworks to the max with megabytes of dynamic css pride and joy…. Click. Gone. Sigh. Tranquility.

                                              1. 1

                                                I think that given the advanced state of decay the web is now in, most of the time, the web is better when you don’t allow it to function the way the authors intended.

                                                1. 0

                                                  I’ll leave expert assessments on the state of the web to more qualified folks like you, but for the unwashed like myself, what’s the ideal you’re shooting for? Just plain mark-up with no executable code at all perhaps? Or maybe some kind of super restrictive capability model thing?

                                                  I hear people express opinions like this all the time, and sure what we have is a giant ungainly mess in some regards, but it mostly works for most people’s use cases.

                                                  I’m just curious how you’d rebuild the web world in your vision of crystalline perfection :)

                                            2. 8

                                              Just wait until Europe finishes implementing its new law that requires Apple to allow alternative browser engines on iOS. I still don’t understand how anyone ever decided that was the biggest anti-competitive issue in the browser market, but here we are, and probably very soon after Chrome’s rendering engine gets onto iOS the browser market will simply cease to exist. Likely will be helped along a bit by Google doing a final “please view our site in Chrome” push across all their properties, and refusing to renew Mozilla’s funding, but hey, at least we will have stuck it to Apple.

                                              1. 1

                                                I share exactly this concern… the only thing I see that might prevent it is chrome’s battery drain relative to that platform’s default browser.

                                              2. 3

                                                Keeping an unconfigured instance of Chromium/Chrome around for these situations is a fine strategy. I use Firefox as my primary browser and keep Chromium installed when websites require it- but that only happens every couple of months for me. Google doesn’t get my login info or browsing history, Firefox gets the vast majority of my web traffic.
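
                                                One way to keep that fallback instance isolated, so it never accumulates logins or history (the profile path is arbitrary):

                                                  chromium --user-data-dir="$HOME/.cache/chromium-fallback" https://example.com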

                                                1. 2

                                                  Definitely true. I usually just don’t use those websites, but sometimes that option is undesirable enough that I do start up Chrome. I do try to make sure to kill it as soon as possible afterwards . . .

                                                1. 5

                                                  This is the message of https://boringtechnology.club/; don’t spend your limited innovation tokens unnecessarily.

                                                  1. 4

                                                    I am of the opinion that by now, Kubernetes is a very cheap innovation token. Most Kubernetes providers are stable and easy to get started with. I know I would be able to deploy much more quickly, and build a more reliable system, through Kubernetes than by any other means. Obviously, it’s always an “it depends” scenario, where managing a fleet of nodes and containers does not have the same requirements as running a Wordpress.

                                                    1. 4

                                                      In my experience, Kubernetes is a substantial time investment and source of complexity for even the simplest of services. Hand-crafting a server from scratch is less effort at small scale.

                                                      Even in situations where it is the best tool for the job, it’s still a full-time job.

                                                      1. 3

                                                        To each their experience! I’m running a cluster with a dozen nodes and two hundred containers on EKS, and in the last year I would estimate at most 2 or 3 days’ worth of work caused by Kubernetes (mostly upgrades, maintenance, and a bug around volume provisioning). I would be interested to see how someone keeps themselves busy for a full-time job running a cluster (that would not be a full-time job without Kubernetes).

                                                        1. 3

                                                          Fair enough. I am personally frustrated by this because we were previously using Heroku and it was just fine, but now there’s this initiative to get everything on Kubernetes, and it feels like I suddenly have to think about a tremendous number of things that I didn’t previously.

                                                          To me, the sign of a good abstraction is exemplified by Larry Wall: make the easy jobs easy, without making the hard jobs impossible. Given the number of technologies you have to know in order to ship hello-world on k8s, I feel like it doesn’t live up to this.

                                                          1. 1

                                                            I think that a cross-cloud migration is not the right time for learning Kubernetes. I have recently undertaken a similar migration, and it took me about two weeks to complete, working at a sedate pace and testing each of my steps incrementally. This wasn’t my first time with Kubernetes, so it was easy to work incrementally and build objects on top of each other.

                                                            For the specific case of Heroku, the order I might use is: Pod, Deployment, Service, HPA, External DNS (if desired), Ingress and TLS.
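
                                                            As a rough imperative sketch of that order (the names and image are hypothetical; Ingress and TLS are usually declared in YAML manifests instead):

                                                              kubectl create deployment myapp --image=registry.example.com/myapp:v1   # Pod, via a Deployment
                                                              kubectl expose deployment myapp --port=80 --target-port=8080            # Service
                                                              kubectl autoscale deployment myapp --min=2 --max=5 --cpu-percent=80     # HPA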

                                                      2. 3

                                                        Hosted k8s is cheap in terms of effort to get started but like many managed offerings isn’t cheap in terms of money when you suddenly need to scale up.

                                                        1. 2

                                                          I don’t think it’s particularly expensive either, especially compared to what the blog post would suggest (Heroku, fly.io, etc.) or the cost of the computing resources you will be managing with Kubernetes (EKS is 73 USD + 0.2-ish CPU requested per node?). I’m of the opinion that if someone gets to a point where they need to scale up and Kubernetes costs are an issue, maybe there’s something wrong elsewhere.

                                                    1. 2

                                                      For myself:

                                                      • Working on my personal server. I’m trying to delegate services as much as possible: maintaining your own infrastructure is fun, but it takes time. I already have moved DNS to a provider, and I am slowly migrating multiple email accounts and Dovecot/Fetchmail to a single Google Workspace account (at least it is my own domain).
                                                      • Improving my Gnus (Emacs) configuration to deal with the way Gmail handles IMAP labels.
                                                      • Probably starting a new Satisfactory base now that Update 6 is out.

                                                      For work:

                                                      • Finishing the Docker runner for my job scheduling platform (it currently only supports local execution and Kubernetes).
                                                      • Start writing some documentation.
                                                      1. 1

                                                        I love hearing about folks’ personal servers. I think I’m like you - I use it as my test bed and playground for things I want to learn. What is your favorite thing you host locally?

                                                        1. 2

                                                          Not who you’re replying to, but I’d probably list gitea as my favorite thing I host locally. It’s small, lightweight and an amazing little service.

                                                          1. 2

                                                            And if you don’t care about collaboration features in the web ui, cgit is a very light and straightforward option.

                                                            1. 1

                                                              I used to use bare repos on an ssh server, but switched to gitea because it can be configured to create a repo on push. This allows me to easily create new repos from any machine with access.

                                                              Now I use it as an authentication store for some services. Gitea can be an oauth provider and is much simpler than many of the alternatives to run.
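
                                                              If I remember the knob correctly, push-to-create looks roughly like this (host and repo name are placeholders):

                                                                # app.ini: [repository] ENABLE_PUSH_CREATE_USER = true   # the Gitea setting, if I recall right
                                                                git remote add origin git@git.example.com:me/newproject.git
                                                                git push -u origin main   # Gitea creates the repository on first push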

                                                          2. 2

                                                            I host all my private Git repositories (because I can and because all Git operations are way faster than with GitHub, which is satisfying when pushing). I also have NGINX for my website, a private IRC server (ngircd) for a few friends, a mail setup with Fetchmail/Dovecot and Influx/Grafana (mostly for the fun of it).

                                                            Everything is running on FreeBSD and managed with a deployment system written in pure POSIX sh.

                                                            While it sometimes means a couple hours spent upgrading the system or fixing some kinks, it is satisfying. I have learned a lot about software and ops that way.

                                                            Note to any developer out there: running your own server will change the way you design software. Running in production is not easy.

                                                            1. 1

                                                              Note to any developer out there: running your own server will change the way you design software. Running in production is not easy.

                                                              This is a weird statement. Change how?

                                                              1. 1

                                                                When you run a server, you have to deal with software not working properly, because it happens all the time. Thus you learn how important it is to write precise and meaningful error messages with the right context. You learn how software should behave consistently, and how this behaviour should be documented.

                                                                Having to deal with software in production is a good wakeup call for all developers.

                                                                1. 1

                                                                  I’ve been keeping a personal server for years (website, VCS, file synchronisation, file sharing, chat servers/client/bots, central place for all my note keeping, all sorts of Internet processing), but I don’t share your experience. Things go wrong very rarely, mostly during development. The worst problem I’ve had was with the laptop killing its battery circuitry and then randomly shutting down, resolved by retiring the machine.

                                                                  If anything, I’ve learnt to use #!/bin/sh -e and actually keep logfiles: cron mails error output automatically, while systemd needs to be nudged into doing that. Knowing that something’s wrong at all is what’s important.
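
                                                                  The pattern is tiny (the logfile path is arbitrary and the rsync line is just an example task):

                                                                    #!/bin/sh -e
                                                                    # -e aborts on the first failing command instead of carrying on
                                                                    exec >>"$HOME/log/backup.log" 2>&1   # keep a logfile; systemd won't mail errors the way cron does
                                                                    date
                                                                    rsync -a /srv/data/ backup:/srv/data/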

                                                            2. 2

                                                              Also not who you’re replying to, but I love hosting Snipe-It locally to track all the machines flowing in and out of my repair lab (plus my own machine and parts collection).

                                                            3. 1

                                                              I always felt like the right balance for DNS was: run BIND, but only as a “hidden master”; let some third party service AXFR from you and handle all of the public requests.

                                                              1. 1

                                                                That’s what I do for my domains. It works well.

                                                            1. 3

                                                              Interesting, but surprisingly low on detail. I’d like to hear more specifics about the drives in use at the storage tier and their relative resiliency compared to non-SMR drives, how long a single NVMe SSD lasts at the database tier, what type of CPU they employ for their compute workloads, and how they handle failures in the single top-of-rack switches that route traffic for 46 compute nodes each.

                                                              1. 1

                                                                Installing Sorbet Leopard on my main laptop, a Powerbook G4 from 2004. The promise of better battery life and performance was enough to draw me away from the OpenBSD install currently on there.

                                                                1. 7

                                                                  I don’t see a difference between this and contributions to most other projects requiring a Microsoft (GitHub) account.

                                                                  1. 5

                                                                    You don’t have to have an @outlook.com account to contribute to projects on Microsoft’s GitHub.

                                                                    1. 5

                                                                      I think the point is that since Google develops both their email service and Go, they are the first party in both situations and have total control over whether this is required. If you were contributing to microsoft/dotnet on GitHub it would be the same situation, but most developers on that code hosting site are not Microsoft and don’t also control the account requirements.

                                                                    1. 13

                                                                      I think more programmers should be using secret scanners, but there weren’t any “no-brainer” solutions I could find, so I decided to build a new one. The core of secret scanning is running regexes against a large number of files, and it turns out this is something ripgrep is excellent at. By leveraging the ripgrep library effectively, secrets is able to scan files roughly 100x faster than the other solutions I tested. This is my first Rust project, and I was impressed with how quickly I was able to put something together that is also really fast. Let me know if you have any feedback!
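
                                                                      To illustrate the core idea with plain ripgrep (this is not the tool’s actual interface; the example pattern matches AWS access key IDs):

                                                                        rg -n --no-ignore 'AKIA[0-9A-Z]{16}' .   # one regex over a whole tree; ripgrep's engine makes this fast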

                                                                      1. 7

                                                                        I appreciate that you put links to other similar projects in the README! It’s a small thing but really helps to encourage adoption of the idea, even if the implementation doesn’t meet specific requirements. That being said, this tool looks good for my use case and I’m definitely going to try it.

                                                                        1. 5

                                                                          I like the secretsignore feature. Sometimes you want things that look like secrets in your tests, and not being able to accommodate that has made me avoid similar tools in the past.

                                                                          1. 1

                                                                            There’s also the git-secrets project (from AWS, first released in 2015), which is likewise designed as a pre-commit hook.

                                                                            (I used to work for AWS and used git-secrets, but never worked on git-secrets.)
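
                                                                            Basic setup is along these lines, from memory, so double-check against its README:

                                                                              git secrets --install        # adds pre-commit/commit-msg/prepare-commit-msg hooks to the repo
                                                                              git secrets --register-aws   # registers patterns for AWS keys and credentials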

                                                                          1. 5

                                                                            I can really appreciate this, but I have a hard time imagining using hardware that old on the modern web with all the stuff that exists now. I used a W520 from 2012 to 2014 at IBM and it was a beast in all metrics. I run an AMD 5950X these days in a custom build, and switching back to my 2019 build with an AMD 2600 feels like warping back into the days of sepia comparatively.

                                                                            1. 6

                                                                              My personal machine is still a 2013 MacBook Pro, with less RAM than this beast. 10-15 years ago, a computer that was 3-4 years old felt slow. 20-25 years ago, a computer that was 1-2 years old felt slow. Now, even with big web apps, I don’t notice much difference. My 9.7” 2016 iPad Pro is starting to feel a bit limited by the 2 GiB of RAM (in particular, the Apple News app seems to spawn a huge number of background download threads, run out of RAM, and then crash) but the web browser still feels fast and it’s a slower machine than the laptop.

                                                                              Moving from any of the portables to my 10-core Xeon work desktop is very noticeable, but only really for bloated Electron apps and big compile jobs.

                                                                              1. 1

                                                                                One time in 2015-2016, I made a joke that I needed a separate computer just for all my chat apps. I had a spare computer available to me at work, and so I tried it for a week before I declared the back and forth not worth it. I did enjoy the spare resources on my main machine for that week, though!

                                                                              2. 3

                                                                                I also used a W530 (the W520 successor) from 2012 to 2016 with the new ‘island’ keyboard - both at work and at home - and after all that time I still definitely prefer the W520 keyboard - it’s just so natural.

                                                                                As for speed - I also use a ThinkPad T14 GEN1 daily (my work laptop) and I do not see any speed difference between it and the W520. Both are 4-core CPUs.

                                                                                … but if you compare an 11-year-old mobile 4-core CPU to a modern desktop/server 16-core CPU … and especially taking into account that AMD did a really great job with its ZEN* architectures … then yes, that W520 will be slower :)

                                                                                Regards.

                                                                                1. 3

                                                                                  Using a weak-ass laptop does make you filter your usual workflow down to the things that are actually worth waiting for, that is true.

                                                                                  1. 3

                                                                                    Indeed. I’ve recently become obsessed with the first and second generation EeePCs from ASUS (650MHz on the low end). Throwing Alpine on there and using only lynx (and if I really want a GUI, it runs Netsurf fine) has been helpful in keeping some modern-day distractions at bay. Thankfully, it’s easy enough to put in a new Wifi/bluetooth mini PCIe card, and it has an ethernet port, which is nice. Also, I really miss palmtops.

                                                                                    1. 2

                                                                                      Haiku OS runs well on early Asus netbooks, too - even the wifi works! The native WebPositive browser isn’t bad, either.

                                                                                      1. 3

                                                                                        Will check it out! The original wifi card worked well but I did want a bit of speed increase. I just love how easy it was to swap out relatively basic parts.

                                                                                      2. 2

                                                                                        EeePCs were amazing! They are probably not very practical these days, I guess, but the barrier for practical use is surprisingly low.

                                                                                        About two and a half years ago I did a lot of development work using an old laptop of mine, a lazy Thinkpad E120 – an i3 machine that was slow even by 2012-ish standards, so it was hardly Speedy Gonzalez in 2019. I could do productive systems and embedded development work on it, and even some basic FPGA development. That involved some contact with the modern web, too, as browsing documents and forums is sort of a given, and I could do it comfortably with Firefox.

                                                                                        Electron stuff and most of those websites, you know the ones I mean, the ones with extraneous JavaScript and “subscribe to our newsletter” and chat bot popups, were generally off the table. I mean you could use them, sort of – slowly – but not along with anything else.

                                                                                        But seven years is a long time, and the rate at which hardware decays has certainly slowed down. Even a high-end computer from 1996 would’ve been pretty much useless in 2003, outside some super special-purpose niches (keeping a particular old version of a program running, or hooking up a particular CNC machine to it, or whatever). Whereas an entry-level office laptop from 2012 was still an entry-level office laptop in 2019.

                                                                                        (I actually still have the little gizmo and it’s still usable today, when it’s like ten years old, but I haven’t really done any serious work on it so I can’t say how usable it still is in a professional setting).

                                                                                    2. 3

                                                                                      T61 checking in…

                                                                                      What “stuff”?

                                                                                      1. 4

                                                                                        Eh, browser tabs, and seemingly every other chat app I need to interact with my various communities is Electron, as are the apps to control some lighting. All RAM hungry, for sure. I noticed a massive difference in JavaScript-heavy apps and sites going from a high-end 2015 13” MBP (4c i7) as my daily driver to the beast I’ve got now. Plus stuff outside of the browser, like audio/video rendering, etc.

                                                                                        I could probably use a Raspberry Pi as my daily driver if I didn’t participate in or run many of the communities or activities I enjoy.

                                                                                        1. 3

                                                                                          This is one reason why the trend toward soldered-on, non-user-replaceable RAM in laptops and desktops is so infuriating. Machines from the past ten years or so are much more limited by their memory capacity than any other metric. Socketing the RAM trades some performance for longevity and less waste. It’s disappointing but not surprising for manufacturers to pick the option that drives their bottom line at the expense of the environment.

                                                                                          1. 1

                                                                                            On the other hand, maybe making it easy to upgrade computers would relax the existing constraints on software bloat, leading to even more energy waste…

                                                                                            1. 1

                                                                                              That’s a good point, but it’s only relevant when the use phase of the device is sufficiently long. The machine’s embodied emissions are generally the largest component of the total emissions for laptops and mobile devices [1] in the replacement culture that exists today. This blog post [2] references a 3-5 year upgrade period for laptops, which agrees with my experience with corporate technology refresh cycles. Expectations would need to be adjusted to 5-10x the current lifespan of laptops before it starts to become a major issue.

                                                                                              1 2

                                                                                              1. 1

                                                                                                Yes, the use phase of the device is the dominant factor, but that is also affected by software bloat.

                                                                                          2. 1

                                                                                            That makes sense. But I really wish people who were aware of this would run their communities in a way that accommodates people who don’t have new computers. Maybe for some it is a feature to filter out less affluent would-be participants…

                                                                                            1. 2

                                                                                              It’s challenging to balance the interests of the administrators of a community with the interests of participants with no active duty of administration. In my experience these last ~27 years of online participation — netizenry? — I’ve observed that online communities are not that much different from neighborhoods in real life. That is, how can we as a collective improve the neighborhood while establishing consensus on a shifting window of norms? Early adopters can exceed some standards of their own volition and late adopters might need some help from others to progress.

                                                                                              All of the communities I have in mind started on IRC. Some in the early 2000s, some in the mid-2010s. We decided as a community to move to other platforms (Discord, Slack, etc.) knowing that the vast majority of participants could use those platforms. One community in particular actually experienced an increase in growth because of the switch to Slack: it’s a professional community and it turns out that a lot of employers blocked outbound connections to IRC out of security concerns or prohibited employees from installing an IRC client altogether.

                                                                                              During the Slack popularity explosion, it grew because of the network effect: so many employers were switching to Slack that it was trivial for folks who wanted to participate in this community to configure their client to talk to both work Slack and our community’s Slack. Nearly every open community Slack benefitted similarly, to the point that I’ve got seven Slack workspaces configured on my work machine for various work-related and professional communities I interact with on a daily basis. I’ve got a few others on my personal workstation! The same logic applies to Discord.

                                                                                              These proprietary, heavy systems won out because the feature set was so much better for the community — both the administrators, who need to establish, enforce, and revise norms; and the participants, who want to understand and abide by the norms and see them followed — that IRC fell out of favor for communities with these particular concerns. If Matrix or Mattermost had existed at the time with feature parity, including free hosting, maybe we’d have chosen those. Switching is complex; attrition for large communities is a great threat.

                                                                                              This probably does leave some people behind. That’s crushing and there are ways to alleviate that, largely around passing second-hand equipment to a new owner and using the professional network to give people an opportunity and community to teach and support to lift up everyone.

                                                                                              There still remains the question of environmental impact and that’s a rabbit hole of thought I’ve run out of time to explore in the scope of this discussion.

                                                                                              1. 1

                                                                                                I personally have a revulsion to anything that is purposefully inefficient in order to benefit a company. Like paper towel dispensers designed to work only with rolls made by the same company. I get the same feeling with chat applications when they close their XMPP/IRC gateways, so I don’t miss having the ability to participate on those platforms. To me, lifting up everyone would mean pressuring the chat applications to start supporting standard protocols again, not teaching people to switch to applications which they have no control over. Maybe that helps to imagine why people use old hardware, as you probably don’t hear this perspective very often for obvious reasons.

                                                                                        2. 2

                                                                                          I’m still using my desktop from 2012. The only thing I’ve changed since then was adding DRAM, which now amounts to 24 GB.

                                                                                          There is only one piece of web stuff that makes me feel like I’m working on dial-up, and that is Micro$oft Team$.

                                                                                          1. 1

                                                                                            I probably could have tolerated my 2015 13” MBP longer if I had more RAM. It was 8 GB and that was limiting. On my work machine, a 2021 16” MBP w/M1 Pro, I’m idling with my normal suite of tools running at 22 GB of 32 GB available. I think my gaming rig idles around 12 GB. I don’t think I’ve used all 32 GB on either yet. Room to grow and add more Electron apps, I guess!

                                                                                        1. 1

                                                                                          The use of Arch as the base distro seems curious to me. Why?

                                                                                          1. 2

                                                                                            I agree, but reading further, it seems like the read-only rootfs with an A/B upgrade mechanism might paper over possible inconsistencies in system state that come from rolling package upgrades. My guess is that they went with Arch to take advantage of the community’s prompt packaging of new versions of game dependencies.

                                                                                            1. 1

                                                                                              Considering they have the Steam runtime for games from Steam and Flatpak for everything else, I’m not sure of that.

                                                                                              1. 1

                                                                                                Revisiting this a little more caffeinated, I think I meant to say “system level dependencies”. Steam depends on system packaged dependencies like mesa, vulkan, etc and they mention in the article that they’re also packaging KDE for a normal desktop experience.

                                                                                                1. 2

                                                                                                  I have never had a more disappointing Linux experience than SteamOS, and got rid of it pre-this.

                                                                                                  I think you’re right, in that they don’t have whatever it takes to actually maintain a distro, so they’re betting on Arch being stable enough that they can crowdsource/freeload all the hard work.

                                                                                                  Not sure if they’re snapshotting Arch versions, but if they do proper upgrades, they kinda have to. Hope they do at least some QA on it, but the debianish version fell into such neglect that I don’t trust any of this.

                                                                                                  1. 2

                                                                                                    Valve is definitely not the kind of company good at long-term unless it makes them a shitload of money.

                                                                                          1. 1

                                                                                            I wonder what’s the cheapest and least worrisome home setup for Plex. Some sort of RPi or TinyMiniMicro build with an external HD? Plug it in, configure wifi and ssh, set up a network share, install Plex with apt-get, done, and never worry about it ever again.

                                                                                            (Mostly for streaming music I’ve purchased on bandcamp. Spotify with local files is annoying as hell.)
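
                                                                                            On Debian/Raspbian the apt-get route is roughly this (repo URL from memory; verify against Plex’s docs before trusting it):

                                                                                              curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
                                                                                              echo 'deb https://downloads.plex.tv/repo/deb public main' | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
                                                                                              sudo apt-get update && sudo apt-get install plexmediaserver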

                                                                                            1. 2

                                                                                              I have an rpi 4 with 8 GB of RAM and an external hard drive. It works as expected and I haven’t run into any issues (besides the seemingly weekly Plex auth hiccup, but that’s beyond my control). I can even do some remote streaming, but then the uplink bandwidth from my home connection is the bottleneck. The rpi can even do some transcoding on the fly. It’s not as fast as a real CPU/GPU, but it’s still bearable with minimal delay.

                                                                                              Edit/Addition

                                                                                              I forgot to add that Plex is ok with music only if you are willing to pay/upgrade to Plex Pass, as it will give you access to their music player app (Plexamp). Playing music with the main app is rather primitive.

                                                                                              If you want to just stream music, there are other server setup options that might be better (like airsonic). Plex is nice because it integrates video and audio streaming in a single platform, but music streaming (in the main app) is definitely second class and audiobook streaming is third class (although Prologue is an outstanding iOS player).

                                                                                              1. 3

                                                                                                For the last use case you mention, I highly recommend Navidrome, another subsonic-compatible server that’s especially fast and low on resource utilization. The catch is that it’s missing some features of more mature projects like Airsonic, but development is very active and it’s getting there.

                                                                                                The subsonic-compatible approach really shines for mobile listening because the app ecosystem is very rich. For Android I can recommend Audinaut, it has good support for downloading and playing back offline and the interface is pretty slick. There are dozens of easy to host web clients for subsonic compatible servers too. The downside of this approach is that multi user sharing isn’t really seamless compared to something like Plex.

                                                                                                1. 1

                                                                                                  Airsonic looks great, it’s hard to beat Prism and Prologue on iOS though

                                                                                                    1. 1

                                                                                                      I’m using Navidrome and I like it quite a bit. Especially on mobile, where it’s a lot more convenient than dumping music onto the phone.

                                                                                                      That said, I’m not a big fan of the interface, but it’s a massive step up on frontend and backend over Subsonic and forks.

                                                                                                      1. 1

                                                                                                        I’ve used CherryMusic for browser-based streaming for a few years, it’s great as well! Navidrome is pretty awesome, tho.

                                                                                                  1. 23

                                                                                                    Every generation has to discover which operating systems cheat on fsync in their own way.

                                                                                                    1. 7

                                                                                                      Hello, $GENERATION here, does anyone have historical examples or stories they’d be willing to share of operating systems cheating on fsync?

                                                                                                      1. 14

                                                                                                        Linux only started doing a full sync on fsync in 2008. It’s not so much “cheating” (POSIX explicitly allows the behavior) as it is “we’ve been doing it this incomplete way for so long that switching to doing things the correct way will cripple already-shipping software that expects fast fsync”. Of course, the longer you delay changing behaviour, the more software exists depending on the perf of an incomplete sync…

                                                                                                        The real issue marcan found isn’t that the default behaviour on macOS is incomplete. It’s that performing a full sync on Apple’s hardware isn’t just slow compared to other NVMe storage; the speed is at the level of spinning disks.
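
                                                                                                        A crude way to feel the difference yourself, using GNU dd’s flush flags against a scratch file:

                                                                                                          dd if=/dev/zero of=scratch bs=4k count=10000                # write-back cache only
                                                                                                          dd if=/dev/zero of=scratch bs=4k count=10000 conv=fsync     # one fsync at the end
                                                                                                          dd if=/dev/zero of=scratch bs=4k count=10000 oflag=dsync    # flush every write; slowest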

                                                                                                        1. 2

                                                                                                          It’s funny, I have 25+ years of worrying about this problem but I don’t have a great reference on hand. This post has a bit of the flavor of it, including a reference from POSIX (the standard that’s useful because no one follows it) http://xiayubin.com/blog/2014/06/20/does-fsync-ensure-data-persistency-when-disk-cache-is-enabled/

                                                                                                          The hard part is the OS might have done all it can to flush the data but it’s much harder to make sure every bit of hardware truly committed the bits to permanent storage. And don’t get me started on networked filesystems.

                                                                                                          1. 2

Don’t forget SCSI/EIDE disks cheating on flushing their write buffers, as documented here. So even when your OS thinks it’s done an fsync, the hardware might not have. It’s one of the earliest examples I remember, but I’m sure this problem goes back to the 90s. I also remember reading about SGI equipping their drives with an extra battery so they could finish pending flushes.

                                                                                                            1. 1

I remember the ZFS developers (in the early 2000s, back in the Sun Microsystems days, maybe?) complaining about this same phenomenon when they loudly claimed “ZFS doesn’t need a fsck program”. Someone managed to break ZFS in a way that required an fsck-style program to repair, because their drives didn’t guarantee writes on power-off the way they claimed to.

                                                                                                        1. 8

Designate a place in your home, or at a friend’s or family member’s home, where important documents are stored; put them in a waterproof and fireproof safe, and include your 2FA backup codes. It’s significantly harder to lose a safe than a sheet of paper, and the safe provides resilience against fire and flood, both of which would destroy any computer storage that might otherwise hold these codes.

                                                                                                          1. 15

                                                                                                            The “waterproof AND fireproof” point is important. Fires tend to be fought with lots and lots of water, but many fireproof safes that you can buy at the hardware store are only fireproof. I’ve made that mistake myself.

                                                                                                          1. 3

                                                                                                            My new battlestation: a maxxed out Dell Precision 3650 and my trusty Vortex Vibe keyboard https://cdn.masto.host/fedi9tilde/media_attachments/files/107/428/915/557/590/187/original/9d53a1db069eccbf.jpeg

                                                                                                            1. 2

                                                                                                              That’s a nice wallpaper, do you have a link to it?

                                                                                                              1. 1

I would love to have a balcony with that much light in my home office :D How do you like Crafting Interpreters?

                                                                                                              1. 2

                                                                                                                Pop!_OS now hosts its own custom software repositories. This results in faster, more stable installations.

This point is a little confusing. If I’m reading the differences page correctly, Pop! is downstream of Ubuntu, which has a large set of repository mirrors hosted by Canonical and many volunteers. Are the Pop! developers swapping from this infrastructure to separate mirrors for the Pop! software repos? Or does this point mean that software not available in the upstream repos is now hosted in a Pop!-specific repo?

                                                                                                                1. 17

                                                                                                                  Interesting take. (context: I bought the original Ergodox kit (not EZ) in round 3 on Massdrop and eventually developed my own keyboard design based on my experiences with it.)

                                                                                                                  Not enough keys

                                                                                                                  This is especially funny to me since when I last remapped my Ergodox I found it had way more keys than necessary, and I didn’t even bother adding keycodes to the number row since I found the numpad on the fn layer to be dramatically faster and more accurate.

                                                                                                                  Learn to use layering! I think some people are suspicious of the fn key because many laptops implement it in a completely useless way, putting it way off in the far corner and putting the fn numpad in an awkward, badly placed position. Using a well-designed fn layer is like night and day compared to that.

                                                                                                                  Lack of labels

                                                                                                                  Putting labels on a layered keyboard is pretty silly IMO–the labels will only ever tell you what’s on the base layer, and that’s the one that’s easiest to learn. The part that takes longer to learn is the other layers, and you need a separate cheat sheet for that anyway!

You could theoretically produce keycaps with legends for both the base layer and the fn layer, but IMO this is a really bad idea. The point of a reprogrammable keyboard is to let you move things around at your whim; if your keycaps say the arrow keys are on fn+WASD but you want them under ESDF where your hand naturally rests, you just have to put up with labels that are wrong, which is much worse than labels that simply aren’t there to begin with.

                                                                                                                  Basically it’s just fundamentally impossible to have all three of: reprogrammable, labeled, layered.

                                                                                                                  Context shifting

This one can be a pretty big problem if you move around a lot; like if you keep your Ergodox on your desk but still want to hack on your couch (or, in the pre-pandemic days, at a coffee shop). IMO the biggest flaw of the Ergodox is that it’s a pain to take with you when you’re not at your desk, which is why, when I designed my own keyboard based on my experiences with the Ergodox, I made it small enough to fit in a large pocket and to sit on top of my laptop’s internal keyboard when I’m on the couch.

frankly, I’m not sure that a multi-week dip in productivity is going to be offset by whatever gains I might make by using it long-term.

                                                                                                                  This is a common refrain you also hear when people talk about learning improved layouts like Colemak or Dvorak instead of Qwerty.

                                                                                                                  IMO it’s quite misguided; the advantage of a better keyboard or better layout is not productivity, it’s comfort. If you were spending multiple weeks of relearning just in hopes that you’d get a bit faster in the end I’d agree for most people it’d be a waste of time, but if you’re doing it because you want to avoid potentially career-ending stress injury, that’s a completely different story.

                                                                                                                  1. 5

I’ll second the parts about layers and labels. When I started using my thumbs to shift layers, I got a lot faster and my hands moved around a lot less, which was the point of me getting an Ergodox. Layers just made using my keyboard so much more comfortable; I’m also considering removing my number keys for the same reason. On labeling: I went from a weird style that was not quite hunting-and-pecking and not quite touch typing to full touch typing in a pretty short time. I went with blank keycaps when I got my Ergodox EZ, and it forced me to learn to type without looking at my keyboard. Plus I like the tiered keys you get with an EZ if you get blank caps.

                                                                                                                    1. 3

Right, like… I think a lot of people don’t understand that they already use layers on their conventional keyboard! It just happens to be a single layer, mostly for capital letters but also some punctuation. Turns out that while having one layer-shifting key is good, having two is even better! Three is a bit extreme, but it should be an option too; everyone has different needs.

                                                                                                                    2. 3

                                                                                                                      Learn to use layering! I think some people are suspicious of the fn key because many laptops implement it in a completely useless way, putting it way off in the far corner and putting the fn numpad in an awkward, badly placed position. Using a well-designed fn layer is like night and day compared to that.

I’m using a Moonlander and I’m really struggling to get into using layers. So far I have only two effective layers: the base one and one for window manipulation. In the base one I managed to cram in as much as possible, using the thumb cluster so that each key behaves as Cmd/Ctrl/Alt when held and as Enter/Space/Backspace/an application launcher (Emacs, Terminal, Quicksilver) when tapped. But I feel I’m not reaching the right level of comfort and things could be done differently (especially since I have short fingers and reaching the corners/top row is a struggle). What are good examples of layers?

                                                                                                                      1. 3

The best is probably moving all movement-related keys (arrows, home, end, page up, page down) to a convenient layer. For me, holding a activates the motion layer, and ijkl become arrow keys. This simultaneously makes the keys you use all the time the most convenient and frees up a bunch of space in the base layer.
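If you’re on QMK-compatible hardware (the Moonlander is), that hold-a-for-motion-layer trick is the LT() macro. A minimal sketch, assuming a hypothetical 4-key LAYOUT macro since every board defines its own:

```c
#include QMK_KEYBOARD_H  // QMK firmware; pulls in LT(), KC_* codes, _______, etc.

enum layers { _BASE, _NAV };

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    // Tap a to type "a"; hold a to activate the _NAV layer.
    [_BASE] = LAYOUT(LT(_NAV, KC_A), KC_I,  KC_J,    KC_K),
    // While _NAV is active, ijk become arrows (the a position stays transparent).
    [_NAV]  = LAYOUT(_______,        KC_UP, KC_LEFT, KC_DOWN),
};
```

The main thing to tune is TAPPING_TERM, the hold threshold, since a layer key sitting on a letter you type quickly can otherwise misfire.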

                                                                                                                        1. 2

It never occurred to me to use an existing key to switch to a layer. I’ll definitely incorporate this approach for both keyboard and mouse navigation.

I have a layer with arrow keys in place of j, k, l, ; (as I didn’t want to move my hand), but I never use that layer much, since I also have arrow keys in the bottommost row. Maybe I’ll turn those off, forcing myself to use layers more…

                                                                                                                        2. 2

                                                                                                                          I use at least half a dozen layers on a daily basis. Link below to my layout, most of those layers have been in use for about a decade.

                                                                                                                          https://axelsvensson.com/layout1/

                                                                                                                          1. 1

                                                                                                                            Interesting! How do you switch layers here? Do you go with “hold key for the layer” or “tap for the layer” approach?

                                                                                                                            1. 1

Hold. The “mods” layer is really the hold modifiers rather than a layer, so pretty much every key is dual-function.
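In QMK terms (an assumption on my part; axelsvensson’s firmware may be something else entirely), “modifier on hold, letter on tap” is the mod-tap MT() macro, a sibling of the LT() from the sketch above:

```c
#include QMK_KEYBOARD_H  // QMK firmware; provides MT() and the MOD_* masks

// Hypothetical home-row fragment: each key types its letter on tap
// and acts as a held modifier otherwise.
#define HM_A MT(MOD_LGUI, KC_A)  // a on tap, GUI on hold
#define HM_S MT(MOD_LALT, KC_S)  // s on tap, Alt on hold
#define HM_D MT(MOD_LCTL, KC_D)  // d on tap, Ctrl on hold
#define HM_F MT(MOD_LSFT, KC_F)  // f on tap, Shift on hold

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    [0] = LAYOUT(HM_A, HM_S, HM_D, HM_F),  // same imaginary 4-key LAYOUT as above
};
```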

                                                                                                                          2. 2

                                                                                                                            What are good examples of layers?

                                                                                                                            I’ve been using this layering with minor tweaks since about 2014: https://atreus.technomancy.us/cheat.pdf

I use Emacs nearly exclusively, which informed the layout, with one concession to more conventional use (since this is also the default layout for the keyboards I sold): I put arrow keys on the fn layer. I would have omitted the arrow keys altogether if it were a layout just for myself.

                                                                                                                            Just another example of how everyone’s got different needs and that you should expect to do a lot of tweaking to find what’s best for you. Another example is how I use shift-insert to paste, so insert is on the fn layer; it would definitely not be there for most people.

                                                                                                                            Edit: for clarification, the final layer is not accessed with a modifier key; it’s modal and accessed by pressing and releasing fn+esc and disabled by tapping fn on its own.
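For anyone who wants to approximate that modal behaviour on QMK hardware (technomancy’s Atreus firmware is its own thing, so this is only my sketch of the nearest equivalent), TG() toggles a layer on until it’s toggled off again, unlike MO(), which only holds the layer while pressed:

```c
#include QMK_KEYBOARD_H  // QMK firmware; provides MO() and TG()

enum layers { _BASE, _FN, _L2 };

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    // Hypothetical 3-key LAYOUT: the last key is a held fn (MO).
    [_BASE] = LAYOUT(KC_A,    KC_B,    MO(_FN)),
    // fn + first key toggles the modal _L2 layer on...
    [_FN]   = LAYOUT(TG(_L2), KC_LEFT, _______),
    // ...and it stays active until the same position toggles it off.
    [_L2]   = LAYOUT(KC_1,    KC_2,    TG(_L2)),
};
```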

                                                                                                                            1. 2

I have a layer for playing games: WASD and the surrounding keys are intact, but the right hand is a numpad, and common modifier keys like space, left ctrl, and left shift are moved closer. Many games use numpad keys for secondary controls, assuming the player is on a full-sized keyboard, and rearranging the modifier keys reduces travel and discomfort. Perhaps there are workflows in the software you use that feel awkward to type? You can create layers to make that repetitive motion easier.

                                                                                                                            2. 2

                                                                                                                              Not enough keys

                                                                                                                              it had way more keys than necessary

Both :< Some of my Ergodox keys are unmapped (almost all of the bottom layer, for example), but I still don’t have enough keys. The problem is, Russian annoyingly has just enough more letters than English (33 vs 26) that it works OK without any kind of special input method on a full-sized keyboard (layout). While it feels OK to my brain that [ and { are in a separate layer, having a couple of Cyrillic letters in a layer feels very jarring.

                                                                                                                              1. 3

Russian annoyingly has just enough more letters than English (33 vs 26) that it works OK without any kind of special input method on a full-sized keyboard

I had a similar problem when I started learning Thai; my 42-key Atreus layout had been designed around having precisely the right number of keys for typing English, and Thai has 44 consonants and 15 vowels, so I had to switch back to my Ergodox for that. Nowadays the Atreus has 44 keys, which makes it a better fit for Latin-script languages that need AltGr/compose, but it’ll never be a good fit for Thai.