Threads for hobbified

    1.  

      The volunteer project I mentioned two weeks ago goes live for a 4-hour event on Sunday. We hit all of the primary goals and it seems like everything is working, which is great, but I’ll be on hand just in case.

      On the slim chance that anyone here is interested, the app is the “tracker” page for Ham Radio Workbenches on the Air, which is a fun little on-air activity in honor of the podcast’s upcoming 200th biweekly episode. The tracker will display live info about which hosts are on the air, on what frequencies and modes — and there’s a separate web interface for them to update that information, as well as an app that can use CAT to pull it live from any radio supported by hamlib.

    2.  

      A particularly nasty one to start with today!

      1.  

        A part of me wonders whether the creator went out of his way to guard against common LLM usage.

        1. 5

          Maybe a little bit, but it’s a recurring theme in AoC that you have to implement the spec as written, not the spec as you think it means on a first read.

        2.  

          I can barely read the trite stuff about Elves as it is and I habitually skim all the text. I think that might just be enough obfuscation against LLMs.

          1.  

            I think there was a rash of solutions in the early days of Dec 2022 where people were oohing and aahing over that current generation of LLMs solving the problems instantly.

            It died down quite a bit as the difficulty ramped up.

            1.  

              Oh yeah, I got one very tedious bit of slice manipulation handed to me by Copilot, but for the rest it’s been mostly saving me typing debug output and the like.

      2.  

        I apparently lucked into doing it the way that doesn’t run into any of the problems other people had, on a whim. (Spoilers below, stop if you don’t want them, but it’s day 1 so…)

        For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.

        For part 2, I kept the same structure rather than rewrite, which meant that I matched the reversed string against /\d|eno|owt|eerht|ruof|evif|xis|neves|thgie|enin/, and then re-reversed the capture before passing it through a string-to-num map.

        And it turns out that that totally sidesteps the problem of “wait, how am I supposed to get 21 out of xxtwonexx?”
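        The same trick, sketched in Python (illustrative only, not my original code; the names are mine):

```python
import re

# Digit words and their values; names here are illustrative.
WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
VALUE = {w: str(i) for i, w in enumerate(WORDS, start=1)}
VALUE.update({str(i): str(i) for i in range(1, 10)})

# The forward pattern, and the same pattern with each word reversed.
FWD = re.compile(r"\d|" + "|".join(WORDS))
REV = re.compile(r"\d|" + "|".join(w[::-1] for w in WORDS))

def calibration(line: str) -> int:
    first = FWD.search(line).group()
    # Match against the reversed line, then re-reverse the capture.
    last = REV.search(line[::-1]).group()[::-1]
    return int(VALUE[first] + VALUE[last])
```

        calibration("xxtwonexx") gives 21: the forward scan finds “two” first and the reversed scan finds “eno” (“one”) first, so the overlap never needs special handling.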

        1.  

          I just used a regular expression, with the leading group as optional. Means you always pick up the trailing “one” in “twone” first.

          1.  

            I looked for non-overlapping matches and got the right solution. Maybe my input never hit this “twone” edge case by luck!

        2.  

          For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.

          I took a similar approach. I’m using C++, so the natural solution was to use the reverse iterators .rbegin() and .rend(), which iterate through the elements of a container in reverse order. For part two, rather than use a regex—which seemed like overkill—I just had an array of digit names that I looped through, performing the appropriate search and choosing the earliest one:

               for (int i = 0; i < 10; i++) {
                  auto it = std::search(line.begin(), line.end(), digit_names[i].begin(), digit_names[i].end());
                  if (it <= first_digit_name_iter) {
                      first_digit_name_iter = it;
                  // ...
          

          And in reverse:

              for (int i = 0; i < 10; i++) {
                  auto it = std::search(line.rbegin(), line.rend(), digit_names[i].rbegin(), digit_names[i].rend());
                  if (it <= last_digit_name_iter) {
                      last_digit_name_iter = it;
                  // ...
          
          1.  

            My figuring on what is and isn’t “overkill” is: AoC is ranked by when you submit your solution, so that’s time to write the code plus time to run it. If something is really the wrong tool, the challenge will prove it by making your solution take an hour, or a terabyte of RAM, to run. But if I’m using a language where regexes are “right there” and they make my solution take 100ms instead of 10ms, I’m not bothered.

            1.  

              I like AOC because everyone can have their own goal! I’m impressed by people who can chase the leaderboard. I always personally aim for the lowest possible latency. Managed to get both parts today in under 120 microseconds including file reading and parsing.

      3.  

        SPOILER…

        The difficulty, IMO, is that the problematic lines aren’t in the sample.

        I did overlapping regexp matches. It was easy, once I caught why my first attempt didn’t work. Another solution would be to just do a search with indexOf and lastIndexOf for each expected word, but you have to be careful to sort the results.

        1.  

          Yeah, it’s pretty nasty for a day 1.

        2.  

          There’s a subtle hint, because the first sample does include a case where there’s only one digit (the “first” and “last” match are the same, or you could say they overlap completely). When you get the part 2 spec you have an opportunity to ask yourself “hmm, what changes about overlaps now that we’re matching strings of more than one character?”. Or at least it gives you a good first place to look when things go wrong.

          Apparently some people tried to solve part 2 using substitution (replace words with digits and then feed that into the solution for part 1), which also suffers from problems with overlaps, but in a way that’s harder to dig yourself out of.
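          To see why substitution bites (an illustrative Python sketch, not anyone’s actual solution):

```python
# Naive substitution: replace each digit word with its digit, then
# solve part 1 on the result. Replacement order here is one..nine.
words = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
         "six": "6", "seven": "7", "eight": "8", "nine": "9"}

line = "xxtwonexx"
for w, d in words.items():
    line = line.replace(w, d)

# "one" is replaced before "two" ever matches, consuming the shared
# letters: "xxtwonexx" becomes "xxtw1xx", so the first digit is now
# wrong, and no fixed ordering of the replacements rescues every input.
```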

        3.  

          Yes. My implementation passed all the sample tests, but it returns an incorrect value. Not easy at all.

    3. 4

      I just tried pressing escape in every app I have handy. None of them exited.

      1. 3

        I read that as “close a modal dialog”, but the text does say application. It is a rant, though.

    4. 16

      Half-Life was such a literal game changer:

      • Going from 256 to 16k colours made a huge difference to the immersion
      • Much more varied levels than Quake and Doom
      • An intro worthy of a movie
      • It ran smooth as glass
      1. 8

        Much more varied levels than Quake and Doom

        This is true, but in one respect it was a step backwards: the levels and overall progression became substantially linear.

        It ran smooth as glass

        Anecdote: when I launched it for the first time and played for half an hour or so, I thought it looked and performed merely “pretty good”, and wondered if it was overrated. Later I realised it had launched with the software renderer, but I hadn’t noticed because they’d implemented stuff like coloured lighting in the software renderer, which even the id games hadn’t done. Once I launched OpenGL my jaw was properly on the floor.

        1. 5

          It was linear, but what I remember most, besides the incredibly spooky slow arc from clean right-angled rationality to goopy organic madness, is that there was no pause when a new level loaded. There were no stopping points, so it felt like a page-turner novel that won’t let you put the book down and go to sleep.

          1. 4

            There are loading points and they’re noticeable. Sometimes they’re at the chapter transitions, sometimes they’re just at a chokepoint. They did a good job of keeping the loading pretty fast, and keeping things integrated so that the player isn’t thinking about going from one map to another, but it’s not actually seamless. There are five times in the initial train ride where the motion hitches and LOADING… prints across the middle of the screen. On a modern machine it’s maybe a tenth of a second, on contemporary hardware it’s more like a couple seconds each time.

            1. 3

              The way I remember it, the standard for other games was to blank the screen with a progress bar for 30 seconds and then hop the player to a totally different context. So technically there was a tiny loading pause, but it was so much shorter and better integrated into the gameplay that it felt like there wasn’t.

              1. 1

                Yeah, like I said. They did better than most. Better than some even today. But it isn’t “no pause” by any means.

          2. 3

            Half Life’s level transitions were much much less jarring than other games. They hid most of them in corridors. They would have identical copies of that corridor in both the “from” and “to” maps and an entity in game that represents the location of the level transfer. That entity would be in exactly the same place relative to the corridors in both the “from” and “to” levels. They’d translate the player coordinates so the player would find themselves in the same location before and after the switch.
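            The coordinate translation amounts to preserving the player’s offset from the shared entity; a hedged sketch (illustrative Python, not actual GoldSrc code):

```python
# The same "landmark" entity exists in both maps; carry the player's
# offset from it across the level change. Names are illustrative.
def carry_player_over(player_pos, landmark_old, landmark_new):
    offset = tuple(p - l for p, l in zip(player_pos, landmark_old))
    return tuple(l + o for l, o in zip(landmark_new, offset))

# A player 2 units past the landmark in the old map ends up 2 units
# past it in the new map, so the corridor looks continuous:
# carry_player_over((10, 0, 0), (8, 0, 0), (108, 50, 0)) -> (110, 50, 0)
```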

            1. 2

              I’m disappointed that most of the games industry still hasn’t progressed beyond that. Half-Life had “seamless” level transitions in 1998, and in 2001, Jak and Daxter pretty much just didn’t have level transitions at all. But today, even open world games are still putting in loading screens or at least fastish “seamless” level transitions in many places. Basically no progress in 25 years.

              1. 1

                No Man’s Sky is as open world as it gets, and has no loading screens even when e.g. descending onto a planet. The high-quality textures loading in is noticeable, though (at least on my Deck).

              2. 1

                I’ve played some recent open world games lately (Elden Ring, Far Cry 3 and 5) and while the loading times are obnoxious (FC5 compared to 3 especially), once you’re in the open world, transitions are mostly seamless.

              3. 1

                The new Legend of Zelda: Tears of the Kingdom has no loading screens (apart from when you’re teleporting). It is a pretty amazing experience to be able to skydive from the highest point in the map (where you can see from one end of Hyrule to the other) all the way down to the ground and through it further down to the underworld, all in one seamless motion.

              4. 1

                There has definitely been progress, but I don’t think some studios really get deeply involved with it.

                I found Starfield especially jarring there. I refunded the game due to how badly it was put together overall, but really the loading screens are doing a lot of lifting in that opinion. Doing simple side missions may involve going through 7-8 loading screens (4-5 for going to a location; location -> spaceship -> orbit -> other star system -> orbit -> planetary landing site -> dungeon).

                Meanwhile games like NMS, E:D, SC etc. have no loading screens between scenes. Well, no visible ones; you can sometimes catch it loading stuff, but it’s well masked. The newer God of War games hide loading screens with crawl sections.

                Masking a loading screen is work, but IMO it’s well worth it because it wastes less of the player’s time (if done well; looking at you, Callisto Protocol). But it’s way simpler and cheaper to just bring up the progress bar.

      2. 4

        “It ran smooth as glass”

        100% …why did it feel so smooth? Did it run at a better framerate than others at the time?

        1. 5

          Half-Life ran on an upgraded version of the Quake 1 engine, which was a couple of years old at that point. In the 90s, hardware was advancing so fast (particularly graphics) that two years was a very long time. Upgrades included 16-bit color, skeletal animation, beam effects, much better enemy AI, and audio processing like reverb, so the burden was greater than Quake’s. But it came several months after the first release of Unreal, which was a technical showcase in all those ways and more, and was expensive to run well. Half-Life was not as nice to look at in stills, but it ran better and had a mature feel and coherent narrative that made it the favorite.

          1. 1

            Pretty sure it was using an upgraded version of the Quake II engine. It made heavy use of features like skeletal animation, something the Quake I engine didn’t have. Also, the way lighting worked is a dead giveaway.

            1. 3

              They had access to the Q2 codebase but skeletal animation & lighting was entirely them: Half Life’s Code Basis

            2. 2

              If my memory serves, and it doesn’t necessarily, Quake 2 used baked radiosity lighting and had colored dynamic lights in GL, while Half-Life still used Quake’s baked and dynamic lighting but the baked lights had color.

              Half-Life used skeletal animation (skinned meshes, vertices with bone weights) but Quake 2 did not; instead it would interpolate vertex positions between the same kind of vertex-animated frames that Quake 1 used. That was also true of Quake 3. It still didn’t use skeletal animation but did split character models into head, torso, and legs parts to get some of the benefit.
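              The vertex-animated frames work out to a per-vertex lerp between two stored poses; an illustrative sketch (my own, not engine code):

```python
# Each animation frame stores a full set of vertex positions; the
# renderer blends two frames by a parameter t in [0, 1] instead of
# posing a skeleton.
def lerp_frames(frame_a, frame_b, t):
    return [tuple(a + (b - a) * t for a, b in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]

# Halfway between two one-vertex frames:
# lerp_frames([(0.0, 0.0, 0.0)], [(2.0, 4.0, 0.0)], 0.5)
#   -> [(1.0, 2.0, 0.0)]
```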

              Wikipedia:

              GoldSrc (pronounced “gold source”), sometimes called the Half-Life Engine, is a proprietary game engine developed by Valve. At its core, GoldSrc is a heavily modified version of id Software’s Quake engine.

              1. 2

                Wow, that makes it all the more amazing what they managed to do with that tech!

                1. 3

                  Absolutely. They didn’t just use the engine off the shelf, they used it as a starting point and got busy turning it into what their game needed. Compared to most game creators I think of Valve as vertically integrated in the same way as Apple and Nintendo—they’ll own and operate as much technology as they need to in order to satisfy a novel product vision. Although they didn’t invent all the pieces they integrate, no one can make the same end result they do.

        2. 3

          It might have been the first 3D-accelerated game you played. Quake, for example, was usually played without any 3D acceleration (was OpenGL support there at release, or did it come later? I don’t even know).

          1. 12

            OpenGL support came half a year after initial release as a separate executable, GLQuake.

            Maybe you know this but I can’t resist a history lesson since it was such an exciting time:

            Quake became established as the killer app that justified a consumer’s first purchase of a 3D accelerator card, as we called them. In that way, Quake was a major factor in OpenGL finding support at graphics card manufacturers in the first place. Carmack was communicating with manufacturers and telling them what capabilities would be of best benefit to offload to hardware in their next product.

            Along with Glide and Verite, OpenGL was one of several early 3D APIs supported by Quake, with the notable exclusion of Microsoft’s Direct3D. The Quake engines’ ultimate dedication to OpenGL was a lever intended to prevent Direct3D from becoming the de facto standard 3D graphics hardware abstraction layer—a very good thing in light of Microsoft’s domination of the software market.

            1. 1

              Thank you for answering. I couldn’t find it through googling.

              Boy do I remember wanting a graphics card for glquake. Good times!

    5. 3

      I agreed to (at least try to) do a volunteer programming project on a short deadline, while already overly busy with work and home stuff (and Thanksgiving isn’t the kind of holiday that generates copious free time). Which is stupid on its face, but it’s a project I want to do, and a person I want to do something for, so shrug.

      Oh, also: the other day I murdered the boot drive of my main desktop machine with a wayward screwdriver (I was changing a video card, and the lever for that PCIe retention latch is in a spot that’s impossible to get a finger into when the card is actually installed, so I tried to push the lever with a screwdriver while pulling the card with the other hand… it didn’t go so well). I didn’t have a backup, because I consider boot drives noncritical — everything that I would really miss is on /home. But it turns out that while nothing essential was lost, reconstituting the system is still annoying.

    6. 3

      This looks really useful for passing jobs between CI things, but I’m not sure how the ‘ephemeral’ bit works. Docker and OCI images are built of layers and each layer is a delta applied to the one below. Most container registries refcount layers and delete them once they are no longer referenced. This works really well if, for example, a thousand containers all use the same Ubuntu base[1], because they will then share a single copy of that base layer. Does this do the same thing? Does this mean that I can push a CI build there and then keep pushing new layers to get unbounded storage?

      [1] Well, kind of. In practice, almost all of these images do apt update && apt upgrade -y as the first line and so all have slightly different (large) second layers.

      1. 9

        The ephemerality is attached to the tags: a tag is deleted when its expiration time says to, and everything behind that tag is refcounted in the normal fashion. The underlying store is registry with nothing interesting in the config. I don’t see anything that would prevent the sort of abuse you’re imagining. In fact the whole thing is like two hundred lines of code.
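        A toy model of that tag-expiry-plus-refcounting behaviour (my own sketch, not the project’s actual code):

```python
import time
from collections import Counter

# tag -> (list of layer digests, expiry timestamp); illustrative only.
tags = {}

def push(tag, layers, ttl):
    tags[tag] = (layers, time.time() + ttl)

def gc(now=None):
    now = time.time() if now is None else now
    # Drop tags whose expiration time has passed...
    for t in [t for t, (_, exp) in tags.items() if exp <= now]:
        del tags[t]
    # ...then a layer survives only while some live tag references it.
    refs = Counter(l for layers, _ in tags.values() for l in layers)
    return set(refs)
```

        Nothing in a model like this stops someone from refreshing a tag forever, which matches the point above that the abuse isn’t prevented.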

    7. 5

      Instead of .... I have aliases like cdd, cddd, etc. because I have so much muscle memory to instantly type cd, and it’s easier to just add a “d” when I realize that’s what I need.

      And another tip: !$ expands to the last argument of the previous command. So if you don’t want to make a function like the one that does mkdir and cd, you can do

      mkdir -p some/long/dir
      cd !$
      

      and it’ll work the same

      also works with other things:

      • touch foo.sh followed by chmod +x !$
      • convert foo.heic foo.jpg and then open !$
      • etc etc
      1. 3

        What also often works is pressing alt+. to get the last argument (at least it works in bash and fish). This way it’s a bit more interactive and you can also change the text. After some adjustment period I like the alt+. approach more.

        1. 1

          In Fish alt+. is actually even more elaborate: it does a search for the currently entered argument (or its part) in the history, defaulting to the last one and then cycling to all the older matching ones (with no entered argument: just all older ones). It’s basically an argument-wise version of the up arrow in Fish which searches the history for a matching line not unlike ctrl+r does in Bash.

      2. 2

        Mine has been u uu uuu … for like 15 years

        I think I just made that one up and it stuck! u for “up”

      3. 1

        When I first got into Linux one of my biggest stumbling blocks was the fact that it didn’t have DOS’s (weird) behavior of allowing cd.. as a synonym for cd .., or the Windows (I think win95) extension of allowing cd... as a synonym for cd ../.., etc. Haven’t thought about that in a lot of years though.

        DOS actually let you skip the space after cd as long as the next character was punctuation, so you could also do cd\, or cd\dos, etc.
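        That shorthand is easy to approximate with aliases these days (my own sketch; the names are arbitrary):

```shell
# Emulate the DOS/Windows shorthand: cd.. goes up one directory,
# cd... goes up two, and so on. Alias names may contain dots, which
# is what makes this work.
alias cd..='cd ..'
alias cd...='cd ../..'
alias cd....='cd ../../..'
```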

    8. 3

      Thanks for posting this! It prompted me to write up what I learned working the GH merge queue at my previous job: https://boinkor.net/2023/11/neat-github-actions-patterns-for-github-merge-queues/ - in short, you can in fact have different sets of test jobs run at different times (on the queue and off). It’s annoying as hell to write all that YAML, but it is possible.

      1. 2

        It’s becoming even more clear that GHMQs could do with a few tweaks for usability’s sake!

      2. 1

        jq '. | to_entries | map([.value.result == "success", .value.result == "skipped"] | any) | all'

        How about this?

        jq 'map(.result == "success" or .result == "skipped") | all'

        [] and map operate on objects as well as arrays, producing all of the values and ignoring the keys, so there’s no need for a to_entries that only uses .value of each entry.

        For jq 1.6+ you can also do

        jq 'map(.result | IN("success", "skipped")) | all'

        or

        jq 'map(IN(.result; "success", "skipped")) | all'

        if you prefer (note that IN is a different operator from in), but I think the version with or reads best when there are only two options.

        1. 1

          Also I’m 99% sure you could combine the “transform outcomes” and “check outcomes” steps (and simplify the shell stuff) using jq --exit-status. true and false as last output values get mapped to 0 and 1 exit codes respectively.
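          For illustration (a sketch; the JSON shape here is assumed to mirror the needs context used above):

```shell
# jq -e (--exit-status) exits 0 when the last output is truthy and 1
# when it is false or null, so the check can gate the step directly.
echo '{"a": {"result": "success"}, "b": {"result": "skipped"}}' \
  | jq -e 'map(.result == "success" or .result == "skipped") | all'
# prints "true" and exits 0; a "failure" result would make it exit 1
```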

          1. 1

            You absolutely can, but splitting the steps makes it easier to see the outcome yourself in the GitHub actions ui. If they’re combined, you have to rely on printing the value somehow (eg, via set -x), but split up, the value being worked with is in the step context’s list of environment variables.

    9. 4

      mkdir -p "$1" && cd "$1" || return 1

      What does || return 1 here achieve?

      1. 4

        It normalizes all error returns to 1, but I don’t see any particular use for that. I think the author is just in the habit of using return 1 anywhere a function should fail, and didn’t pay any mind to the fact that this one doesn’t need an early-out.

      2. 3

        Ahh, I missed removing that. It was added earlier when the script had multiple lines and I did not have a global set -e for the file where these were defined.
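        For reference, with a global set -e (or just relying on the && chain’s own exit status), the function can shrink to this (mkcd is a hypothetical name, since the original isn’t shown here):

```shell
# mkdir -p creates the path (and any parents) if needed; if either
# step fails, the function's exit status is already nonzero, so no
# explicit "return 1" is required.
mkcd() {
    mkdir -p "$1" && cd "$1"
}
```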

    10. 14

      All these attempts to resurrect the “good old days” of the web smack of cargo culting. People are obsessed with the outer forms - Usenet! Blogging! - instead of the socio-economic reality. Back then, writing a blog was the most convenient and popular way of getting engagement online. Now, it’s not. The technology and, most importantly, the audience have moved on, and the methods of making money from it have too.

      1. 51

        Unless I remember wrong, making money was not a concern for the participants of webrings. Only a (by definition) small and noisy subset of people were trying to optimize “engagement”. Webrings are about discovery, not profit.

        1. 6

          Same here. Most of the websites I used in 1996 were made by people who were proud that they accomplished something that felt difficult, or created a resource they had tried to find but couldn’t and wanted to save others the time. Profit wasn’t part of it. A lot of these were hosted on space that ISPs made available as part of dial-up plans, so cost recovery wasn’t even part of it.

          I do think the world has broken that mold though. The amount of stuff that does not exist or is very difficult to find on the web has shrunk drastically. This has reduced the potential for that intrinsic motivation to share useful resources. Similarly, it has become much, much easier to publish content on the web and to create software. This has reduced the frequency of people accomplishing anything that feels difficult and feeling proud enough to warrant building a bespoke website.

        2. 1

          That was also true of blogrolls — in the early days. The for-profit blogger was a relatively late development, and one that contributed to the decline of blogs and the rise of social media as we know it.

      2. 34

        Eh, lots of people are blogging for fun and making dumb web projects like webrings for fun.

        1. 8

          Same was true in the late 90s as well of course

      3. 27

        People blogging for money have moved onto silos like Substack and Patreon. The rest of us are doing just fine, and couldn’t care less about “engagement” or chasing an upwards slope on our metrics dashboards.

        I’m not against being able to make some money off blogging, but I don’t think the best blogging comes from making it a major source of growing revenue.

        1. 6

          Significantly, Substack is marketed as a “newsletter”. The primary alert method is via email.

      4. 18

        The technology and, most importantly, the audience have moved on, and the methods of making money from it have too.

        This is a pitch, not a warning!

      5. 16

        The technology and, most importantly, the audience have moved on, and the methods of making money from it have too.

        The methods of making money have moved in. That is the problem: money-making took over everything. Everything became about money; people reach content through a handful of entry points that give the spotlight to a couple of content makers per topic, all reachable through the same top list. Anything outside that filter will be relegated to oblivion and eventually die.

        YouTube has recently removed the option of uploading videos without ads. Facebook, Instagram et al. have long stopped showing you what your connections are posting in favour of an algorithm that you can’t control or inspect, which will show you a bunch of stuff you didn’t ask for.

        The opportunity for a small website with a stable following base is gone. Everyone gets sent to the big popular content sources instead. There’s also the network effect catalysing this.

        If you search a topic on YouTube or Instagram, the results will consist of content producers whose success got to the point of them leaving whatever other job they had and becoming professional content creators. It became a competition for serious business, while before there were amazing online resources visited by millions that were run by some unknown dude in the Bolivian mountains or deep in Siberia in his spare time. Things got more suited for the masses, but the focus was taken away from the enthusiastic smaller group who did it out of passion.

        The same can be observed in sports. When a sport becomes a successful venture, the participants become business owners, complete with business management made to optimize results at the expense of sportsmanship. But if you follow any less professionalized sport, whenever there is a big international cup the whole event becomes a genuine celebration of the passion for the sport, with displays of much more honest sportsmanship.

      6. 13

        I feel like something millennials like me miss is that computers and the Internet don’t have the same emotional place in the minds of Gen Z. For us, it was the future. Something the previous generation didn’t understand, the next frontier to explore and have adventures in. Now millennials have bills to pay and children to take care of, so there’s much less room for excitement in our lives and the next generation sees computers and the Internet as a corner that’s already taken. Just another tool of the old world order used to exploit them. I wonder how much this picture contributes to the loss of innocence and beauty online.

      7. 12

        This is why I treasure lobsters and my local hacker meetup. Not quite relics, but successful echoes of what things used to be.

      8. 10

        The technology and, most importantly, the audience have moved on, and the methods of making money from it have too.

        Good. Let them move on. Then we who feel curiosity and passion for the world just for its own sake are among ourselves again. I can do without the salespeople and SEO spam.

      9. 5

        I wholeheartedly disagree. Reading blogs was nicer, more convenient and less weird, and it still is.

        For quite some time I would have agreed with you, in many different contexts. The internet is filled with articles, videos, etc. claiming “you only remember the good bits”, “you only think it was great because it was the best back then”, “you are only being nostalgic”, “if you would try it again it would suck”.

        Just today, co-workers complained about not being able to resize an input field on the website of one of the biggest companies. The other day I tried to contact my bank for the first time; since I needed to send documents I did it online, and they have a really low character limit on their input field, so I had to upload my support message as a PDF.

        Or look at how easy it used to be to just download a file into a directory. Try that on your phone when you have to.

        At some point during the heights of the pandemic, when I was being nostalgic, I decided to put things to the test. Nope, Usenet is still nice, and nicer than the alternatives; old video games are still nice; doing stuff off the cloud is not just still good, but better than it ever was; using a desktop, better than it used to be; building your own desktop, woah, even the cheapest case feels like workstation quality compared to back in the day; etc.

        Managing music on your own system, instead of on some online platform is still so so much nicer, using things like Strawberry.

        And none of those things are ever down, unlike Cloudflare, which has made a habit of turning its downtime into an ad, or S3, or Google, or Slack, etc.

        IRC is still better than Discord to get good, quick responses, and with IRCv3 it even got rid of any downsides.

        In general, smaller communities still tend to be higher quality than… well, even lobste.rs.

        Meanwhile, on big platforms, it’s all 404s, failing registrations and logins, weird white pages, lots of JavaScript errors; with a lot of famous apps and applications, people started to get used to things not working. I mean, not so long ago video conferences worked most of the time, but despite the pandemic and whatnot it’s a great big mess. Meanwhile good old Mumble just works.

        And then the big news sites. Everyone lying about caring about your privacy, everyone wanting you to pay, acting like it’s donations, pretending to somehow be grass roots. While blogs just throw stuff out there for free.

        Also I started using Bandcamp for getting my music at the worst possible time I guess.

        So in short: if you think that you are probably just nostalgic and only remembering the good bits, go and verify that thought. There is a real chance that it still works great! Some stuff is even better now, BECAUSE the technology moved on, and because the trolls and self-promoters moved on.

        I don’t know what you mean by cargo culting in that context. Could it be that you used the wrong term here? Maybe you talk about the circle jerking happening in some of those groups. That’s a thing. But sometimes you just have to dig a bit deeper. Or avoid (a big portion of) Gemini fans.

        1. 8

          I don’t know what you mean by cargo culting in that context. Could it be that you used the wrong term here?

          No, I chose it deliberately.

          I sympathize with the desire to move towards a more user-centric internet. I too believe that the current social-media landscape is a morass of user-hostile surveillance and privacy violations.

          But many (not all) of the proponents of “the small web” confuse the map for the terrain. They think the reason the early internet was less commercial was that people built stuff themselves, or only the very technical could host or even use a website. And they believe that if they go back to that technical environment, the web will become less commercial. But it (probably) won’t! Because the world has moved on. Everything is on the internet now.

          This is in analogy with the real cargo cults, who thought that building replicas of the cargo planes that brought prosperity to islands during WW2 could bring them back.

          I believe the best way to deal with internet surveillance capitalism is robust legislation protecting users’ rights to privacy. That’s going to be hard to bring about, and I don’t have the solutions for it. But I do know that if you want voters to demand that, you’ll need to reach them where they are. Shutting yourself away in exclusive networks is probably not going to work.

          1. 3

            But many (not all) of the proponents of “the small web” confuse the map for the terrain. They think the reason the early internet was less commercial was that people built stuff themselves, or only the very technical could host or even use a website. And they believe that if they go back to that technical environment, the web will become less commercial. But it (probably) won’t! Because the world has moved on. Everything is on the internet now.

            Interesting. I see it differently. First of all yes, there was a lot more self-build, non-commercial stuff. I was part of some of these things, from interest groups, to private usenet to private for fun game servers and games. People invested their own money and provided stuff they found fun, just like small private clubs.

            The other thing is that I see a bit of disillusionment. I don’t see many people who believe they can change it back. Rather, they try to separate themselves so they can do their own thing. You can still rent a server, a webspace, etc. and just run your own stuff, at least to some degree.

            I think that for some things (privacy) legislation is the right way, but the fact that companies intend to squeeze money out of you is somewhere between hard and impossible to fix. Every now and then a company is created with a goal other than making money, one that actually tries to satisfy a need, but typically at some point you end up with people who don’t care and just want to get the maximum amount of money for the least amount of work (which is fine).

            Some stuff like having your small community does work. It’s why everyone tells you to join a user group and stuff.

            I agree that making the web less commercial isn’t going to happen, and I also don’t think any form of legislation is going to change that.

      10. 4

        That, and I suspect there were a lot fewer people, and the people there were very driven to explore and express themselves with the new shiny thing at the time, and were willing to put up with the higher barrier to entry of the mediums available at the time.

        Nowadays, there’s a bigger proportion of people with not much to say, and what they do want to say, they can say on a medium they don’t have to maintain (modern social media), and they like it this way.

      11. 3

        It’s a deliberate distancing from “the way things are now”. Sure, you’re missing out on 99.9% of the audience. Lobsters is small (16k users, and far fewer active) because it decided to set itself apart from the norm, in a rather retro way. Yet people find it worth their while to post and comment here. It’s not because we’re driving those millions of ad impressions.

    11. 28

      Also consider the blogroll: After webrings and hit counters died out, blogs and webcomics would often list in a sidebar several links their readers might like. It was just as low-tech and could develop mutually-boosting relationships between site owners, but it didn’t rely on multiple sites coordinating to get started.

      1. 6

        I love blogrolls. In my experience of the early internet and blog scene, being able to bounce from one website to another via their blogrolls made the experience feel more like exploring, or digital spelunking.

        I “recently” created the Hyperlink Club “webring” for personal websites with a blogroll https://github.com/photogabble/hyperlink-cafe as my small contribution to keeping blogrolls alive.

      2. 5

        It’s funny, this is just a list of likes, or “friends” in the FB context, or “follows” in a Twitter context, etc.

        And yet they feel different.

        I think we just hate that middleman, that rent seeker, that company mediating these age-old social patterns whereby we expand our networks (of people or ideas or anything else) using the networks we already have.

        1. 2

          It’s different because “liking” someone doesn’t “promote” them to anyone’s “feed”. It’s just a passive recommendation: “hey, try this”. It’s part of a system of reading (I cringe to say “content consumption”) that’s reader-driven rather than algorithm-driven.

      3. 3

        agreed. i didn’t care much for webrings even the first time around; the topology isn’t really conducive to how i want to surf the web. but i loved blogrolls, and the websites that were little more than a large, vaguely categorised soup of the author’s favourite links, and similar “if you like my site go check out some of the sites i like” content.

        the closest thing i can think of these days is awesomelists over on github (actually the “see also” culture is alive and well in open source, lots of projects have links to similar projects in their readme)

      4. 2

        I don’t remember any blogs in computing that have blogrolls, but, of the three mathematicians at whose blogs I remember having looked, two, Terence Tao and Timothy Gowers, had and still have quite large blogrolls, although the third, Andrej Bauer, doesn’t appear to have one. (Of those three, Bauer is the most into computing; I wonder whether that matters somehow.)

    12. 1

      When doing HLS or DASH over HTTP/3, you can put the toothpaste back in the tube. If a client cancels a request, it sends one little packet to shutdown the stream, and the server should stop sending any more data on it immediately.

      “Immediately” means after 1 RTT, because we can’t violate causality and do anything about packets the server sent before it found out that the client is no longer interested, so you’re still somewhat committed, but not to the extent that you would be with TCP HOLB. (And this is a physics problem, so it wouldn’t be significantly different with MoQ or anything else).

      RTT (or more accurately TTFB) comes into play another way if you think that it’s a good idea to change renditions mid-segment: you have to send a request for the new lower quality and wait for it to start coming back, and while you wait you’re not filling your buffer at all.

      It’s possible to do the math on the client to try to make the optimal choice, but sticking with the current rendition until the next segment is going to be the winning choice more often than not, and higher-RTT users are going to need a bigger client-side buffer (and so higher video latency) to avoid disruption, regardless of protocol.
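      That math can be sketched with a back-of-the-envelope model (all numbers below are hypothetical, not tied to any real player): the buffer has to cover one RTT of dead air plus the time the new segment takes to download while playback keeps draining.

      ```python
      def min_buffer_s(rtt_s, segment_s, bitrate_kbps, link_kbps):
          """Seconds of buffered video needed to survive requesting a fresh
          segment: one RTT before the first byte arrives, then the segment
          downloads at link rate while playback keeps consuming the buffer."""
          download_s = segment_s * bitrate_kbps / link_kbps
          return rtt_s + download_s

      # Same 4-second, 3 Mbit/s segment on a 10 Mbit/s link, two different RTTs.
      nearby  = min_buffer_s(rtt_s=0.03, segment_s=4, bitrate_kbps=3000, link_kbps=10000)
      distant = min_buffer_s(rtt_s=0.30, segment_s=4, bitrate_kbps=3000, link_kbps=10000)
      # The high-RTT viewer needs roughly a quarter-second more buffer
      # (and thus more video latency) for the exact same switch.
      ```

      The RTT term is fixed by physics, which is why the higher-latency viewer needs the bigger buffer regardless of protocol.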

      I’m not hostile towards MoQ but I’d say that the reality is that an evolution of HLS or DASH over HTTP/3 is much more likely to win wide adoption. That path plays nice with existing CDN infrastructure, and is probably 90% as good.

    13. 13

      I’ve never had a project that involved building the kind of app that Django wants to be used to make.

      1. 10

        I have, and built it in Django. It felt like the right tool for the job and it worked out fine.

        I think Django is overall pretty well engineered and good at being what it is (a rails-like big framework).

    14. 5

      TL;DR: the guy runs his dev env at home, then remotes into it with VSCode. My question is about this:

      I’m sure that the latency from, say, Australia will not be great, but editing in VS Code means you’re far less latency-sensitive than using something like VIM over plain SSH - all the text editing is still happening locally, and just the file saving, formatting, and terminal interaction is forwarded to the remote server.

      Is there a neovim plugin of some sort that could do this? Replicate the files locally and then do the synchronisation under the hood?

      1. 9

        Original vim ships with netrw, which enables stuff like :e sftp://example.com/file/path.c.

        I doubt that works with LSP or the like, however.

        1. 2

          Yes, I’m aware, but I thought that reads and writes directly over the network. What the author of the post is saying is that VSCode makes a local copy of the file, so all your operations are fast, and it silently takes care of the synchronisation for you. So if you were to :w the file over netrw, you might see some noticeable latency, while in VSCode you would not see the latency: you just save the local file and go on working while Code does the sync.

          1. 4

            I don’t think that this is what the author is saying. They seem to be saying that with vim over ssh, your keystrokes are sent over the network, so every letter you type gets round trip latency; when you edit with vscode’s remote support, the keystrokes stay local, and only saving and formatting goes remote.

            1. 1

              Yes, exactly, I was a bit imprecise but this is the essence of my question.

      2. 8

        It should be clarified that VSCode doesn’t do “file synchronization”. It does much more than that: all of the language support (indexing, completion, etc.) and many of the extensions you install run remotely too. I’m saying this because I often see it compared to Emacs’ tramp, and I do not think tramp does any of this… or at least I haven’t gotten it to…

        1. 5

          I’m saying this because I often see it compared to Emacs’ tramp, and I do not think tramp does any of this… or at least I haven’t gotten it to…

          tramp does execute most, if not all, commands remotely for remote buffers, so things like grepping or LSP tend to work correctly via tramp if the tools are installed on the remote machine.

          1. 1

            Does it? My most recent experience seemed to imply that things like the spell checker were running on my client machine, not the remote… And I’m not sure I ever saw it running rust-analyzer on the remote machine in the past. Is there any magic to configure?

        2. 4

          This has some downsides too. It means that your remote machine has to be capable of running all of the developer tools. This is great for the Azure use case: your remote machine is a big cloud server, your local machine is a cheap laptop. It’s far more annoying for the embedded case: your local machine is a dev workstation, your remote machine is a tiny device with limited storage, or possibly a smallish server with a load of boards connected to it.

          1. 1

            Agreed, I was trying to use it to connect to a production server, not for main development but for quick tweaks. It installed so much stuff on the remote server that it slowed it way down. Scared me, didn’t try again.

      3. 5

        I use tramp in Emacs to do this; some brief searchengineering doesn’t find a vim version 🤷

      4. 1

        I don’t use it, but vim-airsync has some stars and looks simplistic but perfectly plausible.

      5. 1

        Does it need to be a text editor feature? I haven’t used it for codebases of any great size but SyncThing is up to the job as far as I know; someone gave a lightning talk at PGCon 2019 about their workflow keeping an entire homedir mapped between two computers.

        1. 1

          Yes, there’s also a use case for this. I was curious about neovim specifically in this case though.

      6. 1

        I use the VSCode remote stuff all day every day, but previously I used vim over ssh all day every day, so whatever.

        Also, I sometimes use vi between the US and Australia and it’s really not that bad. I’d rather use something like vim that’s just a fancy front-end to ed/ex. Trans-pacific latency’s got nothing on a teletype…

        1. 2

          Mosh has helped me deal with latency issues for over a decade.

        2. 1

          Yes, I know and I do that occasionally. But I don’t think it would work if I had to do it non-stop, as my primary activity. The latency is barely noticeable but it’s there. I remember that from my operations days.

    15. 1

      That’s a really great article.

      But can someone help me understand why a switch needs a 64-bit multicore processor, 8 gigs of RAM, and Linux (though this is not unique to the SN2700, just a general observation)? I was under the impression that switches (both L2 and L3) do all performance-sensitive work in hardware.

      1. 11

        There are some exceptional cases that need to be offloaded to a real CPU, plus you want to be able to support at least a bit of monitoring and statistics. When you’ve got 32 ports and an aggregate switching capacity of 5 billion packets per second, you don’t want that CPU to be too poky, and on a $25,000 device you can probably afford to spend $100 on the CPU instead of $10 if it opens up some flexibility for your customers. And reading the part of the article about switchdev (and knowing a bit about Mellanox’s history with Linux), flexibility was definitely their intent.

      2. 3

        Switches do a bunch of control-plane stuff: things like STP, LLDP, VXLAN, and so on. Dunno how much of that is in the data plane on this device :-)

        Switches also need some kind of CLI for configuration, and it makes sense to use Linux for that. It can also act as the front-end processor for the data plane, e.g. feeding it firmware at boot time.

      3. 2

        The last 50G firewall I ordered is also a bunch of Mellanox cards and an EPYC processor. If your traffic hits the CPU for whatever reason (things you can’t offload to the network cards), then you’d better have enough compute for that.

        You can do way more than VLANs on such a thing, like NAT, VPN, VRF and other routing stuff. For firewalling you might also hit the CPU, depending on what you want to filter (and what your hardware offloading can do).

      4. 2

        It really depends what “all performance-sensitive work” means. Sure, the packet-flinging is done in hardware, and is obviously very performance-sensitive. But running routing protocols, collecting and reporting statistics, deciding what to do with packets that the hardware plane cannot handle… all of these very suddenly become “performance-sensitive” as soon as your management CPU turns out to be too slow to do them all and gets overwhelmed, because suddenly the device does not behave as expected anymore.

        1. 1

          Interesting. So switches actually perform a non-trivial amount of “management work”, as well as acting as a fallback for special cases (like things that cannot be offloaded to the NIC). Good to know.

    16. 3

      That’s really amazing. I wish such switch support was much more common and included options that have 1G copper ports.

      1. 1

        Switches with rtl83xx/rtl93xx chipsets might offer what you’re looking for. [1] has a lot of detail on them, and [2] lists OpenWRT support. The Ubiquiti ER-X[3] is also an option if you’re looking for something smaller.

        I’m using a Zyxel GS1900-24E and two Ubiquiti ER-Xs at home and am pretty pleased with them. They’re in a very different league from the Mellanox SN2700 (both in terms of switching capability and DSA support) but they’ve worked very well for my basic use so far.

        1. https://svanheule.net/switches/
        2. https://openwrt.org/docs/techref/targets/realtek
        3. https://openwrt.org/toh/ubiquiti/edgerouter_x_er-x_ka
        1. 1

          One note: the ER-X is both a switch and a router. Like the Mellanox, but on a much smaller scale, it can have its ports all together on the silicon switch, or peel them off and have them show up as separate interfaces to Linux, or mix and match the two. It can easily switch 1Gbit/s, but it doesn’t have enough CPU to route 1Gbit/s.

          The next thing up from the ER-X in Ubiquiti’s product line is the ER-4, which has enough CPU to route 1Gbit/s (as long as you don’t get too fancy with DPI or QoS), but doesn’t have any switch fabric at all, so you end up wanting to put a switch on the LAN side of it.

    17. 5

      Why not just write it as a multiplication by 257, which is the usual approach and easily derivable mathematically for other depth transforms ((2^16-1)/(2^8-1))?

      256+1=257, so we can see the bitshift and added original value easily. This is not magic.
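      The equivalence is easy to verify exhaustively (Python here just for illustration; the identity itself is language-independent):

      ```python
      # Expanding 8 bits to 16: shift-and-or versus multiply-by-257.
      # 257 = 0x101, so v * 257 copies the byte into both halves of the word.
      for v in range(256):
          assert (v << 8) | v == v * 257
          # ...and 257 is exactly (2**16 - 1) // (2**8 - 1), the depth transform.
          assert v * 257 == v * (2**16 - 1) // (2**8 - 1)
      ```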

      1. 3

        Because bit shifts can be computed faster than multiplication. This is especially important in computer graphics contexts.

        1. 2

          Benchmark it. The compiler will probably turn the multiplication into bitshift+or anyway, or leave it.

          1. 5

            A quick check on quick-bench shows that it compiles to the same assembly with O3 on latest clang and GCC.

            1. 3

              You’re not wrong, but your test is broken. Both versions are just storing a constant into memory n times, because the value of small is known a priori, so the computation of big is optimized out entirely. The DoNotOptimize enforces that the value of big is considered “used” (otherwise the loop would have no observable effects and could be removed entirely), but movw $0x2727, 0xe(%rsp) is enough to satisfy that. It doesn’t force the computation of big to be executed.

              1. 1

                Ah, you are right. I redid the code, making small a random number that changes on each pass of the loop. It still comes out to the same assembly with either implementation, but now the all-important shl $0x8,%eax is there.

                https://quick-bench.com/q/VS7of8NLsjf60uH3XF_M010wFwY

      2. 3

        That’s a cool way to think about it, thank you for bringing it up. I think both direct bit copying and multiplication need an explanation anyway if you aren’t familiar with the problem and its solution, so it’s not clearly a win clarity-wise.

        When it comes to performance, well, bit operations are always fast, so at least you get peace of mind when opting for those, even if it doesn’t matter in the end.

      3. 2

        I played around with this, and it seems like you can’t do it with multiplication if the high bit depth isn’t divisible by the low one. For example, RGB565 pixel formats are common, and you need to expand 5-bit channels to 8-bit ones to display them on screen.

        I don’t think you can do that with integer multiplication because you need to “fill” 3 low bits and you only have integer factors of a 5-bit number at hand. I added a mention to the article.
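      For that 5-to-8-bit case, shift-and-or still works by folding the channel’s own top bits back into the empty low bits; a small sketch (Python for illustration):

      ```python
      def expand5(v):
          # (v << 3) leaves the low 3 bits zero; (v >> 2) fills them with the
          # channel's own top 3 bits, so 0 -> 0 and 31 -> 255, monotonically.
          return (v << 3) | (v >> 2)

      assert expand5(0) == 0
      assert expand5(31) == 255
      # Strictly monotonic across the whole 5-bit range.
      assert all(expand5(v) < expand5(v + 1) for v in range(31))
      ```

      No single integer factor can do this, since 255/31 is not an integer.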

      4. 1

        Multiplying by 257 looks like magic (although less so if you write it as 0x101 or 0b100000001). Shift-and-or tells you exactly what you really need to know: 00 becomes 0000, FF becomes FFFF, everything in between is monotonic.

    18. 13

      On the one hand: I agree with the headline, and the middle of the article. It’s a good idea. It’s about how long you can hold complex ideas in your brain, and swapping parts of ideas out to more-permanent storage so that you can keep working with and evolving other parts. When you can do that quickly, you can work with bigger ideas comfortably, and even make fewer mistakes than you would by “moving slower”. When I don’t have a real keyboard available I feel stupid. My brain works less efficiently.

      So typing is definitely a skill that’s worth improving, in my book. It might not be easy to recognize the dividends it pays, but they’re there.

      On the other hand: I have a pretty dubious feeling about the beginning and the end, and what it seems to be implying. 80wpm is about as good as most people ever achieve — it’s something like a 90th-percentile speed among the general population, and is (or used to be) the cutoff for a “professional typist”. I can hit 125 or 130 in a typing test, which puts me right up in the top fraction of 1%, but that’s a sprint. It requires psyching myself up and pushing hard. My speed when I’m doing something ordinary like typing this message hovers around 90, and that’s fine.

      So, if you can’t make 80, definitely practice and see if you can get there. You’ll feel the benefits for sure. If you can do 80, try for 85. If you can do 85, try for 90. It never hurts to practice and to have goals. But setting your sights on 120+ is probably silly, and by no means necessary to be a good developer.

      1. 5

        I agree. There are other cutoffs that happen earlier. I started programming before I was a confident typist, and one of the (many) reasons my code was terrible was that typing longer words was noticeably more effort. As a result, I used very short names for variables and functions, which made my code difficult to read. I also avoided writing comments because they took a lot of time.

        I just did a quick speed test using the first typing test I found on the Internet and hit 87 WPM. It would probably have been a bit higher, but the test involved copying from the line above, and I didn’t realise I needed to hit enter in one place and paused for a while. I’m not sure how representative reading and typing at the same time is of my overall typing speed; I definitely feel like I’m typing faster now than in that test, but I’m also aware that perceived speed of interaction with computers often doesn’t match actual speed.

        After typing daily for several years, I reached the point where writing out a long comment was easier than keeping the whole idea in my head. I think that was a big transition in terms of code quality.

        I rarely find typing a bottleneck: I’m often pausing to think in the middle of writing code. In particular, the typing is fairly asynchronous. My fingers (well, presumably my CNS) provide a load of buffering. I queue up some ideas and then think about something else while they’re flushed into the terminal. I can’t fill the buffer as fast as my fingers can drain it. That might just mean that I’m getting old, but I don’t think I could when I was younger either.

      2. 3

        But setting your sights on 120+ is probably silly, and by no means necessary to be a good developer.

        Agreed. At that point, I would suggest spending some time practicing with your favorite text editor. At least with Vim / Neovim, there are significant rewards to such practice. I find it beneficial to be able to re-arrange program code quickly. After decades of use, moving pieces of code around is nearly as natural as walking and talking.

      3. 2

        It’s about how long you can hold complex ideas in your brain, and swapping parts of ideas out to more-permanent storage so that you can keep working with and evolving other parts

        This is kinda why a white-board beats typing speed anyway.

      4. 1

        Largely agreed; though I also think about Dan Luu’s 95%-ile isn’t that good all the time here. The shape of the curve means that hitting that top 1–5% in buckets like this gives you considerable advantages in any area where these things matter (I chalk a very great deal of my own professional success up to the fact that I can write very quickly, for example, and therefore can get positive feedback cycles from writing!).

        1. 1

          Yeah, you’re not wrong there. But I’ll agree with what ansible-rs said: there are probably other aspects of your programming life that would give you a better ROI to train.

    19. 1

      Does this make the following setup possible? Single vdev: start a raidz3 setup with 4 disks, and thus a usable space of 25%, then incrementally raidz-expand to 12-15 disks as demand grows.

      1. 9

        Yes! I hadn’t tested it, but your idea seemed interesting, so I went and did just that on my devbox: https://gist.github.com/Davis-A/9c8f0287355dda236dad6267e9d57494

      2. 1

        It does, but it’s a very strange reliability profile. At the start you can handle 3/4 of the drives failing; at the end, 1/5.

        1. 2

          Well sure, but clearly you’ve chosen z3 from the beginning because you intend to get big and you want to still have a 20% failure tolerance when you hit 15 disks. So you shouldn’t look at it as placing any particular value on the 75% failure tolerance at the beginning, just as being willing to tolerate really bad space efficiency at the beginning.
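            The arithmetic behind that trade-off is simple to tabulate (no ZFS specifics, just parity counting):

            ```python
            # raidz3 survives any 3 failed drives regardless of vdev width;
            # only the *fractions* shift as the vdev grows.
            parity = 3
            profile = {
                disks: (parity / disks, (disks - parity) / disks)
                for disks in (4, 8, 12, 15)
            }   # disks -> (failure tolerance, usable-space fraction)
            # 4 disks:  tolerate 75% failed, 25% usable
            # 15 disks: tolerate 20% failed, 80% usable
            ```

            One caveat, as I understand raidz expansion: data written before an expansion keeps its original data-to-parity ratio, so real usable space lags these idealized figures until old blocks are rewritten.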

    20. 1

      Interesting to read that applications were not supposed to change the palette. I wonder how commonplace changing it was. (Fairly sure most DOS games did.)

      1. 3

        That statement doesn’t mean what you might think at first glance. There are two different sets of palette registers on the VGA: the “internal palette” and the “external palette”.

        The “internal palette” is EGA-compatible, has 16 entries, and functions as a lookup table into the external palette. The “external palette” resides on the RAMDAC and has 256 entries. The external palette is the one that that Q&A is about.

        What IBM means when they say “the internal palette of the video subsystem is not used to select colors. It is set by BIOS and should not be changed” is that if you’re in the 256-color mode, the 16-color internal palette doesn’t do anything, and you shouldn’t play with it because at best you’ll do nothing and at worst you’ll leave the user with messed-up colors when they go back to text mode. Modifying the appropriate palette for whatever mode you’re in is documented and allowed.