1. 6

    I’m not sure when I would use this but I love it when people use languages in ways the authors hadn’t intended.

    1. 16

      Regardless of the practicality of this approach, this exercise found many bugs in the new Go generics implementation (most of which are now fixed) and opened several questions into their semantics.

      1. 2

        … this exercise found many bugs in the new Go generics implementation (most of which are now fixed) and opened several questions into their semantics.

        Can you provide a link to the related discussion?

      2. 4

        “Here is an extremely convoluted way to make C kinda look sorta like C++ but without the additional type safety.”

        “What’s the advantage?”

        “It’s not C++.”

        “SOLD!”

      1. 3

        Recovering from a corrupted filesystem that ate all my files (yeah, all of them, except for some empty root directories: /media, /proc, etc) and half of my life.

        And no, my backups were not up to date.

        This time I will keep my backups up to date. This time I will keep my backups up to date. This time I will keep my backups up to date.

        No, I really will this time. I swear.

        1. 5

          I’m gonna take a nubbin out of Amazon culture that I’ve found super useful in cases like this: Rely on mechanisms. Not best intentions.

          If your backup strategy requires discipline and manual intervention, it’s a recipe for self-disappointment and failure. Make it effortless and you win big today and into the future.

          There’s something to be said here for having some kind of central file store (I use a NAS, YMMV) and backing that up to at least 2 additional places.

          My NAS backs up to a local USB disk as well as to Backblaze, and I’ve made actually using it for my regular work trivial: on the Linux side I use autofs (it’s easier than you think to set up, but in true Linux fashion you’d never know that from the docs; blog post incoming), and on the Mac/Windows side the very first thing I do is set up a persistent fileshare and do all important work there.

          It’s all work for sure, but it’s a one-time investment that’s allowed me to make continual iterative forward progress rather than having to start from scratch every few months when my laptop gets blown away :)

          1. 3

            Inspiring. My backup setup is pretty good, using borg to back up several computers onto a dedicated drive on my home server/NAS, but… Maybe what I should do with my weekend is get notifications for failed backups working properly and automate restore tests.

            1. 3

              I’ve found borgmatic to be great for automating my borg backups and sending me notifications when there are problems.

              1. 2

                You’re already well ahead of the game compared to most people :) I should look into borg.

                I’m using a Synology NAS at this point because when I bought it ~3 years ago I wasn’t confident enough in my own skills to be sure that I could manage a server of my own without losing data.

                Were I to do it again today I’d definitely look REALLY hard at using FreeNAS or UnRAID or … Something :)

              2. 2

                My NAS backs up to a local USB disk as well as to Backblaze

                Backblaze is so simple to set up and forget about that I think it should be the first step in most anyone’s backup strategy.

              3. 1

                I have a folder on my desktop. I have the same folder on my laptop. And my NAS. I run backups from my NAS only.

                Syncthing is really nice like that.

              1. 1

                I too can look up words in a dictionary:

                main: Most important; principal

                A branch has nothing to do with decisions or dialogue.

                1. 11

                  Updates causing a reboot

                  When this happens though, it’s not only a reboot. It’s then waiting forever for the updates to install. It’s not an exaggeration to say I’ve seen my partner’s laptop sit there for an hour “installing updates”.

                  1. 9

                    It looks like the complaint here is that the author doesn’t really grok async processes, so they demand sync processes via threads. FYI, Node.js has worker threads that run your code on a separate thread.

                    I’m not sure what the data structures non sequitur was about, you can write any data structures you need in JS if they aren’t already in the language.

                    This article is all about personal preference, though. The author can’t remember the Promise API, but in the context of the post, it seems to mean they can’t remember how to write client.query(/*..*/).then() instead of using await client.query. Is it that abstracted for you or did you just never really use promises to begin with?

                    I’ve been with JavaScript for a long time (since roughly 1998) and I remember what it was like when it was pretty much something you used to add a little jazz-hands action to your site via DHTML. The evolution of JS (which the author criticizes) is due to millions of eyes being on the language and adding improvements over time.

                    JS naturally came out of an async ecosystem (the browser), so Node followed the same idea. Callbacks were a hassle to deal with, so we got Promises. Promise syntax is a bit unwieldy, so we switched to async/await. If someone comes up with an easier way to do it, they will. You can still write fully blocking functions if you want. You can also avoid your “red/blue” function thing by using Promises without marking a function async. Just use the old syntax.

                    I don’t primarily develop in Node, but I see a lot of misdirected anger or hate on the language and ecosystem because people just don’t understand how to use it. It’s different than a lot of stuff for sure, but it is a nice tool to have.

                    1. 4

                      Thank you! I’ve been writing JS since 1999 so I definitely relate to the DHTML days. For the last 8-10 years I’ve been writing Ruby professionally, and switched to JS (well, mostly TypeScript) just last year when I changed jobs. Gotta say, there’s been a ton of work done around developer ergonomics and making the language a bit less unwieldy.

                      1. 4

                        I’m not sure what the data structures non sequitur was about

                        I thought it tied in quite nicely to the part about Erlang and Elixir. Erlang was designed with good data structures around concurrency but Node’s data structures have been strained since callbacks and Promises are building on top of that abstraction.

                      1. 1

                        A slightly altered database schema could handle this through a check constraint by storing the number of symptoms.

                        CREATE TABLE arrivals (
                          id INTEGER NOT NULL PRIMARY KEY,
                          has_symptoms BOOL NOT NULL DEFAULT FALSE,
                          num_symptoms INT NOT NULL DEFAULT 0,
                          pcr_test_result BOOL NOT NULL DEFAULT FALSE,
                          CHECK ((has_symptoms AND num_symptoms > 0) OR (NOT has_symptoms AND num_symptoms=0))
                        );
                        
                        INSERT INTO arrivals (id, has_symptoms, num_symptoms, pcr_test_result) VALUES (123456, true, 2, true);
                        INSERT INTO arrival_symptoms VALUES (123456, 1);
                        INSERT INTO arrivals (id, pcr_test_result, num_symptoms) VALUES (123458, false, 5);
                        -- error: new row for relation "arrivals" violates check constraint "arrivals_check"
                        INSERT INTO arrivals (id, pcr_test_result, has_symptoms) VALUES (123458, true, true);
                        -- error: new row for relation "arrivals" violates check constraint "arrivals_check"
                        
                        1. 1

                          But wouldn’t it create another potential for discrepancy between arrivals.num_symptoms and the number of rows in arrival_symptoms?

                        1. 4

                          This problem has bitten my butt so many times that I finally added an alias in my .gitconfig to track the remote branch.
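
                          For reference, a sketch of what such an alias might look like (the alias name “track” here is hypothetical, not taken from the comment):

                          ```shell
                          # Add a hypothetical "track" alias that pushes the current branch
                          # and sets its upstream on origin in one step:
                          git config --global alias.track 'push -u origin HEAD'
                          ```

                          With that in place, git track on a fresh branch behaves like the git push -u origin [branch] incantation.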

                          1. 4

                            I’m used to git checkout -b [new branch], which errors on git push and suggests the git push -u origin [new branch] mentioned in the blog post. I might be misreading, but it looks like this sidesteps the upstream issues mentioned.

                            If I were to try to update a feature branch from master/main, I’d reach for git merge first, but if there aren’t conflicts, why bother?

                            1. 2

                              I’ve never seen this issue, and I’m having a hard time following along. I might retrace the steps to understand better.

                            1. 3

                              One of the things this video explains is how to configure less as your pager. The next thing I would suggest trying is setting up delta as your Git pager. It adds syntax highlighting to your diffs, then passes the formatted diff on to your $PAGER. You’re still using less, but any diffs you’re looking at are a little more readable.

                              You may want to change Delta’s configuration. I use --color-only. I find that without that flag, Delta’s reformatted filenames blend in too much with code around them.
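
                              For anyone wanting to try it, a minimal setup sketch (this assumes delta is already installed and on your PATH):

                              ```shell
                              # Use delta as Git's pager; --color-only makes delta add
                              # highlighting only, leaving layout and paging to your usual $PAGER:
                              git config --global core.pager 'delta --color-only'

                              # Optional: also run hunks through delta in `git add -p`:
                              git config --global interactive.diffFilter 'delta --color-only'
                              ```

                              Drop the --color-only flag if you prefer delta’s full reformatting.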

                              1. 1

                                Thanks for the suggestion! That’s a handy tool.

                              1. 3

                                I find it a little frustrating that they renamed the functions too.

                                In the given example, if they had moved the function TempDir into the os module, the only change that would be needed would be deleting the io/ioutil import. But they also renamed the function so the code needs to change as well.

                                1. 9

                                  os.TempDir() already exists and is used to get the system’s temp directory (i.e. ${TMPDIR:-/tmp} on most Unix systems).

                                  ioutil.TempDir() will keep working; you don’t need to update any of your code, and I believe go fix should make all the required changes anyway.

                                1. 1

                                  I don’t really see how this would cause any problems. I’m pretty sure twitter just follows the link once to generate a preview. It wouldn’t loop because it doesn’t follow any links on the previewed page.

                                  1. 1

                                    But to generate the preview, it must render itself, which requires generating a preview.

                                    1. 1

                                      The preview is generated asynchronously. It’s going to either get a 404, or the page without a preview.

                                  1. 3

                                    Small shell scripts are your friend, something like:

                                    cat << 'EOF' > /usr/local/bin/publicise-recording
                                    #!/bin/sh
                                    cd /var/recordings
                                    mv "$@" public/
                                    cd -
                                    EOF
                                    

                                    Then, hit their hand with a ruler when they use mv, rather than the script.

                                    You may have grand plans for VOD and making recordings public by default, but those plans may not come to fruition for another year, if ever. If you spend 5 minutes putting up guards around dangerous manual processes, though, you won’t have to spend hours grepping through binary files…

                                    1. 5

                                      I think that last cd is superfluous

                                      1. 1

                                        Nah, it would move you back to the directory you started in.

                                        1. 9

                                          It won’t do either. It operates in the context of the shell process running the script, which is a subprocess of the shell you invoke it from. That shell’s CWD will be unaffected.

                                        2. 1

                                          Yeah, that was just a nice thing to put them back where they were, if they were in a different directory

                                          1. 3

                                            That would only matter if the script was sourced though.

                                      1. 5

                                        As cool as this is, kind of a misleading title:

                                        The first invocation of the script will be slower as the script is compiled

                                        1. 4
                                          • Running CAT-6 cable from the network rack in the basement up to my office.
                                          • Reading The Traitor Baru Cormorant by Seth Dickinson
                                          • Getting in a nap
                                          1. 4

                                            I don’t get why so many people tolerate or even insist on proprietary IM.

                                            1. 8

                                              Because the people you want to communicate with are there. And for “joining an existing group” there is no way to convince them to switch, you can just choose to not participate.

                                              1. 3

                                                I’ve been trying to find out what the reason is, especially with the enormous increase in Discord usage over the last few months. For the most part, it seems like the easier set-up and its “shiny” look attract people. Neither appears legitimate in my eyes, but I guess most people don’t want to bother setting up their own server, so they use pre-existing services, which have to be funded, and which require proprietary software for user exploitation. I don’t know what to say about the latter; I dislike the look-and-feel of Discord, but apparently many don’t.

                                                1. 3

                                                  Not to say that they are of outstanding quality, but there aren’t really good non-proprietary alternatives. Don’t get me wrong: XMPP offers the means, Matrix is heading in that direction, and if you are willing to self-host you have an amazing Slack alternative named Mattermost.

                                                  But XMPP can be messy, Matrix/Riot/Element is not really polished and scares people away, and most importantly, part of what is appealing about the proprietary ones is effectively centralization. If you are a gamer, you have that one app to join all the gaming communities on your phone or laptop/desktop. You don’t really care about someone collecting your gaming communications, etc.

                                                  Combine that with community things (stickers, etc.) and extremely quick onboarding, and most alternatives don’t look appealing anymore.

                                                  I think something like Matrix (or XMPP, PSYC, maybe even IRCv3) could become a contender. But I also think you need some big entities taking care of development and infrastructure. Maybe even multiple, with some concentrating on business, some on private, and some on gaming communications. Open standards and technologies would even mean that everyone could profit from enhancements in either space.

                                                  Every now and then people try that. So far none have really succeeded in gaining a critical mass, which is probably the most important factor for communication solutions. But for that you kinda need marketing. I think what Mozilla did in the early days of Firefox, also marketing-wise, would be needed here, because if all your gamer friends use Discord, you will use Discord; if all employers use Slack, you will use Slack; and if people get WhatsApp or Telegram, you’ll get those.

                                                  Don’t get me wrong, I don’t mean to say protocols cannot be decentralized; after all, email works. But you also want an entity that you know will take care of problems and make it look like the service won’t be gone tomorrow. So some mainstream server (think what Gmail is for email nowadays) is quite a necessity. If Mozilla were in better shape, or the FSF weren’t so extreme that it drives users away, I think such an institution would be able to make these things happen.

                                                  I think Matrix and Jitsi for example make good progress, but to my knowledge they are a bit small, which might be the cause for things to be a bit unpolished.

                                                  Of course you first have to make things work to a certain degree, but without marketing and making things shiny things won’t be appealing to people.

                                                  1. 3

                                                    To contrast with all of the existing answers, none of which can fully account for the remarkable popularity of services like Freenode’s IRC network, I think that we need a theory of users who don’t care about Freedom. They’re blind to it, as a matter of education and rote, and so it doesn’t bother them that Discord scores poorly on metrics of Free Software.

                                                    This helps explain why people are comfortable joining IRC on an ad-hoc basis to ask for help but leave as soon as their questions are answered: They don’t understand that there is a longstanding tradition of being a regular member who idles in chat rooms; they only understand that IRC is a place to ask quick questions and get pithy answers.

                                                    This gives us a hint for how to beat proprietary services. Systems like Discord are necessarily centralized as a matter of account management. If we could figure out an incentive structure which punishes groups like Discord for employing people to work on centralized chat systems, or merely incentivizes proper decentralization, then we likely could permanently exclude them from the Free Software ecosystem.

                                                    1. 2

                                                      Because open standards aren’t meeting the needs of the average user.

                                                      1. 3

                                                        HTTP, DNS, W3C Standards, SMTP, IMAP, etc. did a great job. I also think XMPP and Matrix are pretty solid on the standards side. XMPP was and is used in many widespread commercial services.

                                                        It’s more that a lot of implementations for various reasons don’t appeal to people. Either rough edges, hard to use, esp. with multiple devices, focusing on things you don’t care about in particular situations (like E2E Encryption while gaming), etc.

                                                        And then you have annoying things, like mobile messaging and push notifications, though that is slowly being fixed (there are XEPs and working implementations). I think a lot of ground has been covered over the past few years, but there are rough edges and a lack of marketing.

                                                    1. 2

                                                      I think it’s showing its age now. There are finally different commands for dealing with branches vs updating file contents.

                                                      git switch $branch will change your branch. git switch --create $branch will create a new one. git restore --source=$commit $file will update a file’s contents to the given commit.

                                                      Of course, git checkout is still there.
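
                                                      For anyone who hasn’t tried the new commands, a throwaway sandbox to play with (assumes git 2.28+ for init -b):

                                                      ```shell
                                                      set -e
                                                      cd "$(mktemp -d)"                      # scratch repo, safe to delete
                                                      git init -q -b main
                                                      git config user.email you@example.com  # placeholder identity for the demo
                                                      git config user.name you
                                                      echo one > file.txt
                                                      git add file.txt && git commit -qm init

                                                      git switch --create feature            # create a new branch and switch to it
                                                      echo two > file.txt                    # change the file on the branch
                                                      git restore --source=main file.txt     # restore it to main's version
                                                      git switch main                        # back to the original branch
                                                      ```

                                                      git checkout can still do all of this too; the new commands just make the intent explicit.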

                                                      1. 8

                                                        This looks like a static page but it constantly uses 100% of one CPU core and the “GPU process” in Chromium also is spinning at a high rate. What is going on here?

                                                        1. 5

                                                          There appears to be a task scheduler with hugely deep stack frames; maybe matter.js? There’s ~2 MB of JavaScript, so it’s hard to tell.

                                                          1. 4

                                                            I ran the FF profiler on it and it looks like you’re right on the money. I found references to objects from matter.js in all of the most heavily used functions. The process taking the most time is “graphics”.

                                                            Profiling the site with JS enabled, I got an average of 44fps. Profiling it with JS disabled, I got an average of 127fps. With JS disabled, the biggest difference is that none of the code snippets loaded.

                                                          2. 4

                                                            I suspect it’s the sparkles around “holy grail layout” that might be the cause.

                                                            1. 4

                                                              Don’t know if that’s it, but the page loads and then becomes blank on my low-end smartphone.

                                                            1. 4

                                                              One of the results shown takes 5s to install a 15M package. Does that include the download time? Or was the package already on the local drive? It doesn’t say, so I’m inclined to think the author is complaining more about a slow internet connection than anything else (which makes me want to force the author to use a computer from 1992).

                                                              1. 4

                                                                It is also quite important to check how many dependencies are fetched. If the base OS already contains all the dependencies, it will obviously fetch and install faster than another package manager that needs to fetch them all. This partially explains Alpine’s results, as it builds packages statically in most cases.

                                                                1. 1

                                                                  If you look in Appendix B, the author lists the output from each command, several of which include how long it takes to download the packages. It doesn’t seem like internet speeds are an issue.

                                                                  1. 5

                                                                    The author says “fetch and unpack” (emphasis mine). There is no place where download time is separated out.

                                                                    1. 2

                                                                      A few of the package managers themselves report download speeds. For example, apt reports it took 2s to fetch 151 MB when installing qemu:

                                                                      Fetched 151 MB in 2s (64.6 MB/s)
                                                                      

                                                                      dnf and pacman also report download speeds for their metadata updates.

                                                                      1. 1

                                                                        Have you read the article? Because that information is not reported.

                                                                        1. 1

                                                                          I have read the article. I copied that straight from it. Scroll down to Appendix B. Under the qemu header, click on the arrow next to “Debian’s apt takes 51 seconds to fetch and unpack 159 MB.” to expand that section and see the author’s run of the command:

                                                                          % docker run -t -i debian:sid
                                                                          root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
                                                                          Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
                                                                          Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
                                                                          Fetched 8574 kB in 1s (6716 kB/s)
                                                                          […]
                                                                          Fetched 151 MB in 2s (64.6 MB/s)
                                                                          […]
                                                                          real	0m51.583s
                                                                          user	0m15.671s
                                                                          sys	0m3.732s
                                                                          
                                                                          1. 1

                                                                            You are correct. I did not know you could expand the results.

                                                                1. 1

                                                                  To limit ourselves to active websites, we exclude all websites for which we could not resolve via Tor circuits

                                                                  Is that a good way to check for whether a website is active? It reduced their pool of PBWs by about 25%.

                                                                  1. 8

                                                                    I do something quite similar except I break my session up by path. I rely on a script called tat from the wonderful folks at ThoughtBot.

                                                                    I have this in my .zshrc file so that when I open a terminal, I’m kicked into a tmux session automatically:

                                                                    # Add this to your zshrc file
                                                                    _not_inside_tmux() { [[ -z "$TMUX" ]] }
                                                                    
                                                                    ensure_tmux_is_running() {
                                                                      if _not_inside_tmux; then
                                                                        tat
                                                                      fi
                                                                    }
                                                                    
                                                                    ensure_tmux_is_running
                                                                    

                                                                    And then when I navigate to a folder where I want a new session, I hit Ctrl-B which is defined in my .tmux.conf file to call tat again.

                                                                    bind-key C-b send-keys "tat" "C-m"
                                                                    

                                                                    This setup keeps me always in tmux but with sessions based on each project that I work on.

                                                                    Edit to add: I forgot the most useful part of this! I have a tmux shortcut of C-j that opens a split showing me the other sessions. It uses fzf so that I can fuzzy match and switch among the sessions:

                                                                    # Fuzzy matching session navigation via fzf utility
                                                                    bind C-j split-window -v "tmux list-sessions | sed -E 's/:.*$//' | grep -v \"^$(tmux display-message -p '#S')\$\" | fzf --reverse | xargs tmux switch-client -t"
                                                                    
                                                                    1. 1

                                                                      Brilliant, I’m definitely stealing this.

                                                                      1. 1

                                                                        I’ve had something similar for years on all my remote shells, automatically throw me into a tmux.

                                                                        For some reason I stopped doing that, not even sure why. I think it misfired and I had to do “ssh host mv .zshrc” one time too often, so now I start it manually again.

                                                                      1. 2

                                                                        I’ll be running a half-marathon for Beat the Blerch. I’ll also be clearing out and winterizing a garden bed that is done for the year. I still have one bed full of tomato plants that are still going strong that I’ll be leaving for a few more weeks.

                                                                        1. 1

                                                                          Nice trail you’ve got out there for a run! Also a cool T-shirt giveaway if you ask me :-) I’ll see if I can run another half over the weekend and join you virtually :-D