Threads for Boojum

    1. 8

      Undo/redo feature [P0]

    2. 15

      I don’t use an ad blocker at all, but I do use anti-tracking tools. I don’t mind if you want to show me ads, I do mind if you want to spy on me. If your ads rely on violating my privacy then that’s your problem. If you detect that as an ad blocker and tell me to turn it off to access your site, I’ll happily go elsewhere.

      1. 3

        I take a similar approach; I mainly just use NoScript. If you can show me a basic ad without third party JavaScript then I’m fine with that. It’s not so much the ads themselves that I mind so much as the megabytes of scripts running on my machine, using my bandwidth, and phoning home.

      2. 7

        While I respect your personal opinion on this matter, just by posting it publicly you advocate for the economic exploitation of other people’s attention.

        Please give it a read.

        1. 38

          If you think that him saying he’s okay with ads is “advocating for economic exploitation”, then you don’t actually respect his opinion.

          1. 18

            It’s a form of rhetoric that is becoming popular. Instead of dealing with nuances and respect, things are phrased in opposites. “If you don’t 100% agree with my opinion, you are the enemy.”

            Each time I see such statements, I have this feeling that a small brick of society has just fallen away. Or maybe I’m just becoming old and unnecessarily dramatic…

          2. 1

            I have been going over my older threads and this one sure blew up in my face. Anyway.

            • I believe that David is an intelligent person and as usual on this site I assume he is 50% more intelligent than me.
            • I truly do respect David’s personal opinion on this matter. I have personally seen many people who are able to develop banner blindness and can work in a distracting environment. I believe him that he does not mind. It is OK not to be bothered by ads.
            • I don’t mind him objecting to tracking. I dislike tracking as well, but since I suffer from executive dysfunction and colorful or moving objects grab my attention very strongly, I would love to see them gone forever.
            • I fail to see why David would feel the need to step up to say “we should ignore people who suffer from ads and concentrate on tracking”, except maybe because he has not yet received enough information to conclude that grabbing other people’s attention by underhanded means, and not to their benefit, is actually a very bad thing and that people should be able to opt out. Thus the link.
            • I don’t believe that he is of the opinion that ads are unconditionally OK and people who block them are bad. He would probably have argued as such if he believed that.

            I feel that, in general, raising your voice to say “I don’t mind” turns a discussion about “these people are being harmed, can we prevent it?” into a discussion about “what percentage of people feel that it’s bearable?”. I believe that this shift is highly unfair and that people who feel harmed should be heard and their case considered independently of what others consider business as usual.

            So in my opinion, stepping up to say “I don’t mind the status quo” is – in fact – advocating for the status quo. Specifically by marginalizing the issue. And I feel that the status quo truly is a systemic economic exploitation of other people’s attention. Not by “selling their data”, but by “manipulating them to think thoughts that are economically advantageous to somebody else”.

            1. 1

              I fail to see why David would step up to say “we should ignore people who suffer from ads and concentrate on tracking”

              David never said that. Peter asked what adblock people used and David said he didn’t use any, but did use anti-tracking tools. This post wasn’t about “these people are being harmed” and he never said that we should ignore the harm. That’s an important discussion to have, but this wasn’t the place for it.

              1. 1
                • “How did you negotiate higher wage?”
                • “I don’t mind the poor wage, but I think we should do something about safety.”

                • “How do I get away from a husband who beats me?”
                • “I don’t mind the beating, it’s mostly my fault anyway, but I would welcome some tips on how to get him to stop drinking.”

                • “So I can’t legally pick up our son from kindergarten, because gays cannot marry and our little one is on paper only his. Any ideas?”
                • “That never happened to us. But have you seen yesterday’s news? They are increasing taxes on us again!”

                I don’t care anymore. I probably exhausted my ability to care. We all came with different expectations and the Internet is no place to sort them out. In the end we just talk over each other’s heads. It’s especially hard for us non-US people without compatible connotations.

        2. 13

          just by posting it publicly you advocate for the economic exploitation of other people’s attention

          Really? He said “I don’t mind if you want to show me ads” not “People should not mind if advertisers want to show them ads.” He seems to be stating a personal preference rather than advocating for anything.

        3. 5

          I have been using ad blockers for years, but lately I’m starting to think that I should turn them off. Rather than using ad blockers I should avoid visiting websites that are moldy. Ad blockers are like adding a bunch of salt to spoiled food. They don’t actually make commercial websites fresh. I remember the web before the ads took over. 90% of content was put up by people out of love or interest, and things were better that way. It isn’t better because it doesn’t have ads; it doesn’t have ads because it is better.

          1. 4

            I had the same thought but I am not sure I lasted even an hour. Everything I’d like to read which is NOT a specialized interest – so various news and centralized places for info and entertainment – is blasted not just with ads but popups, pop-unders, pop-overs, floating elements that leave you about 40% of your phone’s screen space, various “reminders”… Basically an hour after I turned off my PiHole I was like “what the hell was I thinking”, re-enabled it and never looked back since.

            As a techie I have zero sympathy for these organizations. I have known people who hosted an entire forum (~2500 users, ~500_000 posts/comments, 6-7 years of history) on a $300 spare computer for years, with a power bill of $30 - $40 a month. Come on now, hosting news and almost anything (except social media) is in reality at least 1000x less expensive than what we’re led to believe, and that number is not an exaggeration.

            F.ex. something like Twitter’s or 9GAG’s or Tumblr’s hosting bill is in fact 98% image / video hosting and their API backend I can replicate in 4-8 weeks of work with much better latency and reliability compared to theirs. And let’s not even mention the news outlets like BBC, NYT, WaPo and many others – they are an order of magnitude easier. I’ve been in many orgs and I can’t imagine how 5 really good sysadmins can’t make their monthly hosting bill in the 4 digits space.

            If they truly “care about the internet and people being informed” then they can take one for the team and swallow $2000 a month. Furthermore, there are websites proving that much less intrusive ads (say, a single line of text with a distinct color between an article’s title and contents) actually work: people don’t block them, there are impressions and buys, so even that $2000 / month expense can be nullified pretty easily.

            It’s just them being insanely greedy and myopic, that’s all there is to it, and I am not gonna pretend otherwise. From where I am standing, they are the ones who facilitated creating the PiHole and uBlock Origin. Had they been more subtle and didn’t try to get ALL the money all the time, we wouldn’t be in this mess in the first place.

            The current state of things is not our fault. It’s theirs. If they want a change of the status quo then let them show true signs of being more reasonable. Until that happens, the forever cat-and-mouse game and an arms race will continue.

            Or they will enforce DNS-over-HTTPS everywhere and then things will get truly ugly. We’ll see in the next few years, I suppose.

        4. 2

          This is factually incorrect, emotionally manipulative, anti-democratic, and content that is unacceptable in many places (most notably Hacker News, which usually has lower standards for content than Lobsters), hopefully including Lobsters.

          1. 5

            I found it to be fairly reasonably argued. And it has been posted on Hacker News twice.

            1. 2

              My comment was a response to their comment. Not to the linked post, which while it has its own set of problems and fallacies, is at least somewhat civil.

      3. 2

        Which ones?

        1. 2

          I’m not david, but I just have the default Tracking Protection enabled in Firefox and the experience is pretty much as david describes it.

          I do keep on standby for the worst offenders, as a last resort option.

          1. 10

            I started using ad blockers while I was working for a company involved in serving ads, not so much because I found the ads annoying or was worried about them tracking me, but more about performance and security - I realised just how easy it is for someone to get their shonky or malicious javascript onto high profile sites. Nobody audits the js that gets delivered alongside ads - at that scale nobody could. Aside from the wasted traffic and cpu, I still can’t think of a better way to get an exploit in front of millions of potential victims. I tried only allowing js on certain sites, but found too much of the internet broke, so disabling ads seemed the only way forward.

            I actually feel ethically I shouldn’t be blocking ads - it doesn’t seem fair that paid creators don’t get money from me consuming their content. I don’t even mind the idea of targeted ads, because if the ads are relevant to me then I might find them more useful and everyone wins. The problem is the industry hasn’t done enough to protect me from scams and malware, but has gone too far trying to trick me into clicking. So, ad blockers it is.

            1. 8

              Nobody audits the js that gets delivered alongside ads - at that scale nobody could.

              I find this completely insane as someone in the industry. Ads at large sites should be sold the way ads have always been sold: someone from the ad department looks at it and gives it the thumbs up or thumbs down. The idea that you can just let ads show up with zero vetting is brand poison, but everyone does it anyway.

            2. 5

              I actually feel ethically I shouldn’t be blocking ads

              I struggle with this too. I’m willing to pay for content. And some ads aren’t obtrusive and are potentially relevant to me. But I continue to block both ads and javascript because time, performance, and bandwidth are real constraints. Web ads are a commercial transaction where I wasn’t given voice. At least when I drive down the highway there is a standard billboard size and I can choose to ignore it. It doesn’t make driving any more difficult. Web ads are like taking those billboards and sticking them in the middle of the road and having me swerve to avoid hitting them.

        2. 2

          As @Tadzik says, I mostly rely on the built-in protections now. I used Disconnect and, before that, Ghostery. It’s more a philosophical stance though. If sites are ad supported, blocking ads feels unethical, but tracking me without my consent also feels unethical and so the balance for me is to allow them to show me ads but block attempts at tracking. I use Consent-O-Matic on some machines to automatically navigate through some dark-pattern GDPR notices that try to make me toggle 100 checkboxes to opt out of tracking. I stopped reading Questionable Content (which was one of my favourite web comics and one where I’ve bought all of the print editions that they’ve published) because they introduced something like this before Consent-O-Matic came out and, while I want to fund them (and have, by buying aforementioned dead trees), I don’t want them to share my personal data with 100 different companies.

          I do have a User CSS file, but I mostly use that for adding [pdf] and [doc] superscripts to links that go to PDFs and .doc[x] files and a big red ‘[brain damage warning]’ to any link that goes to a Facebook domain so that I don’t accidentally click on these (the PDF one predates Safari having good built-in PDF support - this file hasn’t changed much in 15 years).
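
          The rules described above might look roughly like this in a user stylesheet (the selectors are illustrative sketches, not the actual file):

          ```css
          /* Superscript tags on links by destination type */
          a[href$=".pdf"]::after  { content: " [pdf]"; font-size: smaller; vertical-align: super; }
          a[href$=".doc"]::after,
          a[href$=".docx"]::after { content: " [doc]"; font-size: smaller; vertical-align: super; }

          /* Loud warning on links into Facebook domains */
          a[href*="facebook.com"]::after { content: " [brain damage warning]"; color: red; }
          ```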

          1. 2

            I wouldn’t block ads if the ads were merely banners and not trackers/malware. I don’t mind seeing ads, but I feel zero guilt stopping a site from trying to track me or worse.

      4. 1

        This neatly characterizes my attitude. I just block trackers, not ads, and if you detect that I’m blocking those trackers and hide your site from me as a result, I’m happy not to see your site. Good luck.

      5. 1

        This is the only ethical approach. If you access content where the creator has declared that the value exchange is “you view ads in exchange for consuming the content” then your options are to either consume the ads or not consume the content - to block ads is theft-adjacent, where you are taking content from the creator without compensating them for it.

    3. 4

      Anyone have invites? This is intriguing to me, I’m hopeful about the broader move away from VC funding, at least for things that aren’t really capital intensive.

      1. 4

        Sure, I have 10. I’ll DM one to you. If anyone else wants one, please DM me here.

        1. 2

          Update: I’ve given them all away.

        2. 2

          I also have invites if pushcx runs out

          1. 3

            I apparently have an account with invites too. DM me for one.

            1. 2

              I also have invites if someone is interested

              1. 2

                I also have invites.

                …also, anyone we invite has invites.

                1. 2

                  I’m interested if anyone still has any left, thanks!

                  1. 1

                    If you got one could you send one to me?

                    1. 1

                      hey caleb, you may have already gotten this or moved on but in case you haven’t :)

                      1. 1

                        thanks I will use it!

                2. 1

                  I’d be interested if anyone still has, thanks!

                3. 1

                  Can I have one?

                  1. 1

                    Can you send me an invite if you got yours?

              2. 1

                If you (or anyone else) have one remaining, I would be interested

            2. 1

              Update, I have now distributed 10 invites to users here. Have fun!

      2. 2

        There’s a limited-invite thing on their subreddit about every week or so, and you can also email them (I don’t know the success rate with that one).

        1. 2

          I can attest that I have received an invite after a couple of days.

      3. 1

        I had an email invite like 10 minutes after posting this, thanks y’all!

      4. 1

        I’ve got 5 if anyone wants one

    4. 2

      For higher precision on the smaller numbers:

      • generate 52 bits of entropy and put them in the mantissa
      • set the exponent to -1 (a stored exponent field of 1022), so that you’ve got a number in the range [0.5, 1.0)
      • while(coinflip() && n > 0.0) { n = n / 2; }
      • return n
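
      The steps above can be sketched in C like this (`rand64` here is a placeholder xorshift generator standing in for a real entropy source):

      ```c
      #include <stdint.h>
      #include <string.h>

      /* Placeholder 64-bit entropy source (xorshift64); swap in a real RNG. */
      static uint64_t rng_state = 0x9e3779b97f4a7c15ULL;
      static uint64_t rand64(void) {
          rng_state ^= rng_state << 13;
          rng_state ^= rng_state >> 7;
          rng_state ^= rng_state << 17;
          return rng_state;
      }

      static double uniform_double(void) {
          /* 52 random mantissa bits; stored exponent 1022 gives [0.5, 1.0). */
          uint64_t bits = ((uint64_t)1022 << 52) | (rand64() & ((1ULL << 52) - 1));
          double n;
          memcpy(&n, &bits, sizeof n);
          while ((rand64() & 1) && n > 0.0)   /* halve on each "heads" flip */
              n /= 2;
          return n;
      }
      ```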
      1. 4

        You can use __builtin_clzll() to make that loop run 64 coin flips per iteration :-)

        1. 1

          Yep. [Handwaving:] Instead of dividing by two on each iteration, just decrement the stored exponent by that count in one step. Then pair that with the mantissa.
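
          That handwaved version might look like this sketch: treat the leading zero bits of a random word as a run of “heads” and subtract the run length from the stored exponent directly (`rand64` is an illustrative placeholder entropy source):

          ```c
          #include <stdint.h>
          #include <string.h>

          /* Placeholder entropy source (xorshift64). */
          static uint64_t rng_state = 0x106689d45497fdb5ULL;
          static uint64_t rand64(void) {
              rng_state ^= rng_state << 13;
              rng_state ^= rng_state >> 7;
              rng_state ^= rng_state << 17;
              return rng_state;
          }

          static double uniform_double_clz(void) {
              int exp = 1022;                     /* stored exponent for [0.5, 1.0) */
              for (;;) {
                  uint64_t flips = rand64();
                  if (flips != 0) {
                      /* One leading zero == one "heads" == one halving. */
                      exp -= __builtin_clzll(flips);
                      break;
                  }
                  exp -= 64;                      /* 64 heads in a row; flip again */
                  if (exp < 1)
                      break;
              }
              if (exp < 1)
                  return 0.0;                     /* underflowed past normal range */
              uint64_t bits = ((uint64_t)exp << 52) | (rand64() & ((1ULL << 52) - 1));
              double n;
              memcpy(&n, &bits, sizeof n);
              return n;
          }
          ```

          (`__builtin_clzll(0)` is undefined, hence the explicit zero check.)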

    5. 9

      Read the constraints at the top of the post carefully if you want to use this algorithm! I saw a ‘bug’ in the Set method, which a commenter noticed too:

      Maybe I’m missing the obvious here, but what if a thread queries a value while another one is in the process of setting the corresponding key but was preempted between the two atomic calls? The key is set in that case but not the value

      It will load the initial value of 0, and return 0, which is equivalent to the key not being found. This is the “(key, 0)” case I described in the previous post.

      A slightly less simple hash table that lacks this limitation (i.e. 0 is a valid value distinguishable from “missing”) is the “Folklore” table described in “Concurrent Hash Tables: Fast and General(?)!” [Meyer, Sanders, Dementiev, 2019]. They named it that because they’ve seen the same algorithm reinvented many times. It uses double-word atomic operations (64-bit in this case) to keep the key and value in sync. I’ve implemented this with 64-bit keys and values since the CPUs I cared about all support 128-bit atomics.

      That paper goes on to implement more capable tables: “Our starting point for better performing data structures is a fast and simple lock-free concurrent hash table based on linear probing that is, however, limited to word-sized key-value types and does not support dynamic size adaptation. We explain how to lift these limitations in a provably scalable way and demonstrate that dynamic growing has a performance overhead comparable to the same generalization in sequential hash tables.”
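
      The double-word slot idea can be sketched in C11 like this, assuming the platform (possibly via libatomic) provides lock-free 16-byte atomics; the names are illustrative, not taken from the paper:

      ```c
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* One "Folklore"-style slot: key and value live in a single 128-bit
       * atomic cell, so a reader can never observe a set key paired with a
       * stale value — the (key, 0) race discussed above cannot occur. */
      typedef struct { uint64_t key, val; } slot_data;
      typedef _Atomic slot_data slot;

      enum { EMPTY_KEY = 0 };   /* key 0 reserved to mean "empty slot" */

      /* Publish (key, val) only if the slot is still empty: one CAS. */
      static bool slot_insert(slot *s, uint64_t key, uint64_t val) {
          slot_data expected = { EMPTY_KEY, 0 };
          slot_data desired  = { key, val };
          return atomic_compare_exchange_strong(s, &expected, desired);
      }

      /* Read key and value together in one atomic load. */
      static bool slot_get(slot *s, uint64_t key, uint64_t *val_out) {
          slot_data cur = atomic_load(s);
          if (cur.key != key)
              return false;
          *val_out = cur.val;
          return true;
      }
      ```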

      1. 2

        I noticed that and was worried by that too. I remember once helping a junior engineer troubleshoot a nasty bug which involved the misuse of a concurrent hashmap leading to a race condition double-constructing a large, expensive object that was only supposed to be constructed once per key. The second instance overwrote the first in the map, but not before other objects had already been built with references to the first while the second instance was being constructed.

        I’m wary of such things now. Just give me a lightly-contended lock if possible and have done (at least until profiling shows that’s a bottleneck).

    6. 8

      I really don’t like the model where editing immediately modifies the file.

      I have three separate backups (to two types of media), not because of hardware or power failure, but because the biggest hazard to my data is myself. I accidentally trash/edit/delete files far more often than my disk gets corrupted or my power is lost. Because of that, I really appreciate the mental step of “save the file once I’m happy with what I’ve done.”

      When I think about opening a disk file in an editor, I imagine the editor copying the file into memory so I can make changes to the memory image. When I’m satisfied with my changes I can save them back into the disk file, replacing the contents. If I try to close the window without saving, I’m prompted “do you want to save?”. If I’m surprised by a “do you want to save?” prompt, then I somehow modified the editor’s copy without realizing it, and can figure out what to do. It all works.

      On the other hand, if the OS/editor treats “editing” as “directly changing the file”, then I’ve lost that protection. Sure, it’s nice to preserve data in the case of power loss or a crash, but that can be done in other ways without breaking the model I find so helpful.

      1. 2

        Agreed! To me, my mental model is that save really means commit. I really want to be able to discard and roll back my changes if I don’t like where they are going. (I hate it when apps don’t give me a revert option.)

        The auto-saves that really drive me nuts, though, are the ones where it’s to a shared resource on a server or a cloud drive. Then I get paranoid that a stray keypress might have accidentally introduced a typo or deleted a chunk of text in a shared document that others will be viewing.

        1. 2

          I have the opposite problem, auto-save is not enough for me if it’s purely time based and I hate remembering to explicitly save. I have a kakoune hook that writes all buffers onto disk when window loses focus. Thus I don’t have to worry about having files in inconsistent state when I open gitui, run git, run formatter etc. And for the typo/revert paranoia (that I also have) I just use git, I don’t really edit files without version control. And for config files I use sudoedit.
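
          A hook like that might look something like this in a kakrc (sketch, assuming a Kakoune build with focus hooks):

          ```
          # Write all modified buffers whenever the client loses focus
          hook global FocusOut .* %{ write-all }
          ```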

      2. 2

        For Etoile, we aimed for both. We gave you unlimited persistent undo, but the ‘save’ analogues were ‘name’ and ‘export’. You could name a specific revision of the document by attaching metadata to it that would let you find it later. Exporting was intended for interoperability and gave a flattened version of the file in an interchange format (e.g. PNG, HTML, PDF, ODF, whatever).

    7. 4

      I love the way you call them “alien artefacts”.

      In my experience so far, their fate is invariably the same: they get rewritten as soon as they become a nuisance to the business, and you cannot evolve them past a given point.

      In a minority of cases, I’ve seen them isolated, and their functionality augmented with modern code, in a Frankensteinian turn of events.

      1. 6

        I’d argue that anything that can be rewritten easily isn’t an “alien artifact” … the interesting cases are the ones that last decades because they are too useful to give up (produce too much revenue etc.), and too hard / risky to rewrite, or the organization lacks the expertise to rewrite them.

        1. 2

          I agree. I didn’t put it explicitly in the post, but one of the things I was alluding to with the name “alien artefact” is that they are rare. In my 20 years in this business I’ve only encountered a handful I would classify as such, and only at two of the six jobs I’ve held in that time.

          1. 3

            I’ve seen quite a few in the VFX industry, usually an arcane bit of Perl that’s been code-golfed to hell and is doing something inscrutable to begin with… if they didn’t have a commit history — always a single event, showing up fully formed from out of the ether — attached to the name of a long-retired but known human grey beard, I’d think they were from Ceti Alpha V.

            1. 1

              Funny you should mention that. I used to work at a feature animation studio with a coworker who had previously worked at a competing studio. He once told me how they’d basically lost the institutional knowledge needed to make any significant changes to their humanoid rigging system, which led to a period when many of their shows had a very specific look to their humanoid animations. That was the first thing I thought of on seeing this.

              (To be fair, I think they must have either figured it out or replaced it, judging by more recent shows.)

    8. 7

      Apart from different ways of seeing non-technical problems, people who fit Type 2 will easily fall back to Type 1 when motivation and/or energy starts to dry out, so mentality isn’t the only factor here.

      Type 2 requires a bigger sense of responsibility, exploring a little bit further, not caring about something taking more work if that means the problem is truly solved.

      IMHO we should aspire to be the Type 2, the one that doesn’t allow technical debt to pile up and doesn’t leave weird exceptions to the rules unattended.

      1. 2

        I’ve also had cases of delivering a Type 1 solution as a matter of expedience while on the way to Type 2.

        E.g., a customer needs a new feature from our software to meet their own deadlines. I develop it and it works for their needs, but maybe poorly interacts with some other features (which maybe we know they’re not using, or not needing for this context.) We provide them with a pre-release and just document around the rough edges. Then I continue to do the hard work to put up proper guard rails or make it work correctly with the other features. That then gets rolled out as the general release.

        Sometimes a Type 1 solution is just a not-yet-finished Type 2 solution. And sometimes customers are okay with a “just do X” solution if it helps them meet a deadline.

    9. 8

      I really wish that we could move past the “how dare people train AI on code that we gave away for free” thing.

      Imagine, if you would, that Microsoft used cheap labor in Elbonia (some fictional nation, cribbed from the days when Dilbert was something to have on your cube wall) to implement Copilot. In this thought experiment, these people spend all day reading code, getting some facility with it (arguable), and then mechanical-turk the questions submitted, using knowledge gained from their reading.

      If that’s okay, or okayish, we need to find some other reason to be upset about our current reality where we have replaced the Elbonians with AI, and I think that there is probably a good argument, but dear Lord I haven’t seen it presented yet.

      1. 49

        I really wish that we could move past the “how dare people train AI on code that we gave away for free” thing.

        Only public domain / CC0 code is given away for free. When I give away my code under the MIT license, it is on the condition that you acknowledge me when you create a derived work or build anything from my code. If you create anything that is a derived work of my code then you have a legal obligation to credit me.

        Imagine, if you would, that Microsoft used cheap labor in Elbonia (some fictional nation, cribbed from the days when Dilbert was something to have on your cube wall) to implement Copilot. In this thought experiment, these people spend all day reading code, getting some facility with it (arguable), and then mechanical-turk the questions submitted, using knowledge gained from their reading.

        If you pay a team of Elbonians to create derived works of my code without attribution then I would have standing to sue you. If they read my code but then produced things that are independent creative works that are not derived works then I would not have standing to sue you.

        The question for the courts is which of these the current systems are more like.

      2. 17

        “code that we gave away for free as long as you follow these rules” - that’s where a lot of the concerns and unhappiness is coming from

      3. 12

        Besides @david_chisnall’s excellent answer (which to me is the main one), I feel like there’s also a fundamental difference in scale. Similar to arguments about “it’s just metadata” and mass surveillance, I believe that there’s a phase change that occurs somewhere along the way once things get big enough. A few Elbonians reading some code here and there to learn to program might be one thing. Millions of Elbonians memorizing every line of code ever published on GitHub might be something else.

        There’s also something that feels kind of skeevy about taking my free code and then selling it back to me as a service. I put my code out there because I want to see people make cool stuff with it and only ask that they give me credit for my part in their success. If someone wants to use my code as a component in an app (where the thing the app does is much more than just my code) with attribution, that’s one thing. A for-pay service that’s built around just selling that code back to me or to someone else without attribution is another thing entirely.

      4. 7

        I think people can be upset with things regardless of what things they’re not upset at.

        It’s fine to posit different scenarios, but as soon as you make that scenario a discrediting argument you are using a bad-faith tactic known as whataboutism. (It’s also a logical fallacy.)

        When I see argument tactics my brain short circuits and I get a bit sad. I want a community where we can hear others and express our opinions without these tactics. Can you consider reframing your thought experiment as a personal one rather than one that is discrediting OP?

      5. 6

        I agree with your experiment as posed, but let me make a small twist: “In this thought experiment, these people spend all day reading code, getting some facility with it (arguable), and then cut-and-paste excerpts of the code to answer questions submitted using knowledge gained by their reading.” To me that’s not OK.

      6. 6

        What if the Elbonians read the code out verbatim? That’s the main problem here.

        The second problem is that the AI model is a derivative work of the code it was fed; at least some of that code is AGPL and the model has not been open-sourced under the AGPL.

      7. 2

        Is it ok? I think that we have pretty heavyweight procedures for clean room engineering because it’s not ok.

      8. 2

        I really wish that we could move past the “how dare people train AI on code that we gave away for free” thing.

        Okay, here’s my presentation of it: I think that a machine learning model trained on my code is a derivative of that work. Most of my code is released under licenses that require reciprocation, so training proprietary models on that code is a violation of its license.

        Microsoft / GitHub have fundamentally betrayed the trust of the community that created, used, and promoted GitHub in the open source / free software world.

        I don’t think we should move past it, and in fact, I think that wherever possible, we should withdraw our support. I only maintain a GitHub account to contribute to open source projects that remain there; otherwise, I’m on Sourcehut now, like the author.

    10. 17

      I think it was Audacity that added telemetry… in the form of Sentry bug collecting. People really got super pissed off and I was honestly a bit flummoxed. Surely bug reports are reasonable at some level?

      It does feel like the best kind of telemetry is the opt-in kind: “Tell us about this bug?” Steam, for example, has user surveys that are opt-in. It’s annoying to get a pop-up, but it’s at least respectful of people’s privacy. I have huge reservations about the “opt in to telemetry” checkbox that we see in a lot of installers nowadays, but am very comfortable with “do you want to send this specific set of info to the developers” after the fact.

      1. 25

        IIRC, Steam also shows you the data that it has collected for upload and gets your confirmation before sending it.

        I also appreciate that they reciprocate by sharing the aggregated results of the survey. It feels much more like a two-way sharing, which I think really improves the psychological dynamic.

      2. 8

        Unfortunately, bug reports are just a single facet of product improvement that seems to get glossed over. If you can collect telemetry and see that a feature is never used, then you have signals that it could be removed in the future, or that it lacks user education. And automatic crash reporting can indicate that a rollout has gone wrong and remediation can happen quicker. Finally, bug reports require users to put in the effort, which itself can be off-putting, resulting in lost useful data points.

        1. 2

          If you can collect telemetry and see that a feature is never used, then you have signals that it could be removed in the future, or that it lacks user education.

          But it can be very tricky to deduce why users use or do not use a feature. Usually it cannot be deduced by guessing from the data. That’s why I think surveys with free-form answers, or just having some channel like a forum, tend to be better for that.

          A problem with both opt-in and opt-out is that your data will have biases. Whether a feature is used by the people who opted in is not the same question as whether people (all of them, or the ones who pay you) make use of it. And you still won’t know why.

          There tends to be a huge shock when people make all sorts of assumptions, try them, watch them still fail, and only then start talking to users, where they are hugely surprised by things they never thought of.

          Even with multiple-choice surveys it’s actually not that easy. I am sure people who participate in technology surveys know how it feels when the data is presented with wrong assumptions baked into its interpretation.

          It’s not so easy, and this is not meant to be anti-survey, but to say that surveys aren’t necessarily the solution either. It makes sense (as with all sorts of metrics) to compare them with actual (non-abstract/generic) questions, lest you end up implementing a feature, investing time and money, only to completely misinterpret the results.

          And always back things up by also talking to users, enough of them to actually matter.

          1. 7

            But it can be very tricky to deduct why users use or do not use a feature. Usually it can not be deduced by guessing from the data. That’s why I think surveys with free-form or just having some form of channel like a forum tends to be better for that.

            Asking users why they do/don’t use every feature is extremely time consuming. If you have metrics on how often some feature is getting used, and it is used less than you expect, you can prepare better survey questions which are easier for users to answer. Telemetry isn’t meant to be everything that you know about user interactions, but instead a kick-off point for further investigations.

            1. 1

              I agree. However, that means you need both, and that you cannot deduce a lot of things simply by running some telemetry system.

              Also, I am thinking more of a situation where, when you make a survey, you add (optional) text fields to provide context. That means you will see things that you didn’t know or think about, which is the whole point of having a survey in the first place.

      3. 1

        That’s something I’m not so sure about either though. I don’t really have a problem with anonymous usage statistics like how often I click certain buttons or use certain features. But if a bug report includes context with PII I’m less keen on that. Stack variables or global configuration data make sense to include with a bug report, but could easily have PII unless it’s carefully scrubbed.

    11. 4

      I’m a big fan of code formatters, but I haven’t been able to get them to pull off custom alignment for things like groups of vertices, e.g.

      static GLfloat cube_verts[] = {
          -0.5f,  0.5f,  0.5f,
          -0.5f, -0.5f,  0.5f,
           0.5f,  0.5f,  0.5f,
           0.5f, -0.5f,  0.5f,
           0.5f,  0.5f, -0.5f,
           0.5f, -0.5f, -0.5f,
          -0.5f,  0.5f, -0.5f,
          -0.5f, -0.5f, -0.5f,
      };

      This is something I’d love to be able to teach a code formatter to recognize and do automatically. Right now I format them manually, and either mark the regions with code annotations that tell the formatter to ignore them (which feels a little gross) or just avoid using a formatter altogether.

      If this is something Topiary could handle easily I would be very interested!

      P.S. If anyone has an elisp snippet that can achieve that formatting, I’d love to see it :)

      1. 5

        My reaction against formatters comes from exactly this sort of code. Maybe graphics code is just more antithetical to sledgehammer formatting?

        The idea that I’ve long had for formatters to detect this kind of thing is to compare character-by-character on each pair of lines and see if there’s an indentation for the second line of the pair that makes more than a certain percentage of the characters on the lines match exactly. If so, just set the indentation of the second line to that and do nothing else. And if that requires less indentation than the lines above it in the block, go back and add that extra bit of indentation so that the left-most statement has the correct indentation.

        So in your example, between these lines:

        -0.5f, -0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,

        it would see that by indenting the second line by one space more, it could match each 0 for the 0 above it, each . for the . above it, etc. In this case 22 out of 24 characters (~92%), including whitespace, would match the character above them with that offset.

        Sadly, I don’t have any elisp for this.
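        A rough sketch of that pairwise heuristic, in C rather than elisp (the 90% threshold and the function names are made up for illustration, and the lines are assumed to already have their leading indentation stripped):

        ```c
        #include <assert.h>
        #include <stddef.h>
        #include <string.h>

        /* Count how many characters of `line` match the character directly
         * above them in `prev` when `line` is shifted right by `shift` columns. */
        size_t matches_at(const char *prev, const char *line, size_t shift)
        {
            size_t hits = 0;
            size_t plen = strlen(prev);
            size_t llen = strlen(line);
            for (size_t i = 0; i < llen && i + shift < plen; i++)
                if (line[i] == prev[i + shift])
                    hits++;
            return hits;
        }

        /* Return the shift in [0, max_shift] that maximizes column-wise matches,
         * or -1 if even the best shift aligns fewer than ~90% of the characters
         * (the threshold is arbitrary; a real formatter would tune it). */
        int best_offset(const char *prev, const char *line, size_t max_shift)
        {
            size_t best_hits = 0;
            int best = -1;
            size_t llen = strlen(line);
            for (size_t s = 0; s <= max_shift; s++) {
                size_t h = matches_at(prev, line, s);
                if (h > best_hits) {
                    best_hits = h;
                    best = (int)s;
                }
            }
            if (llen == 0 || best_hits * 100 < llen * 90)
                return -1;
            return best;
        }
        ```

        On the two vertex lines from the example, `best_offset` reports a shift of one column, with 18 of the second line’s 19 characters matching the character above them.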

    12. 4

      Faithful to the author’s intent: if code has been written such that it spans multiple lines, that decision is preserved.

      Yes, yes! Thank you! I really wish that more formatters got this! I often like to break longer expressions over multiple lines in a way that highlights the structure of the parallel clauses. For example, if I wanted a range check to see if a point is inside a 3D box, I might write something like:

      if (box.min.x <= point.x && point.x <= box.max.x &&
          box.min.y <= point.y && point.y <= box.max.y &&
          box.min.z <= point.z && point.z <= box.max.z)
          // do something

      Often this results in a larger number of shorter lines. So many formatters seem to think that since there’s still plenty of space on each line, they should remove all my line breaks and cram everything onto as few lines as possible. Sure, I can maybe add some extra parentheses, add empty trailing comments, or do other little tricks to try to cajole the formatter into leaving it the way I want. But fighting just to appease the formatter always feels like an aggravating waste of time when it really wants to make the code less readable.

      (Note that I have no problems with formatters tidying up indentation, removing trailing spaces, normalizing spaces between tokens, etc. But if the lines aren’t too long, then leave my darn line breaks alone!)

      1. 3

        IMO this is the price to pay for consistently formatted code everywhere. In this specific case you think it would be better to group conditions in groups of 2 (and I tend to agree with you), but it’s very subjective and other devs might prefer it otherwise.

        The point of formatters is to remove any subjective decisions, and each degree of freedom it gives goes against this goal. I’d rather give up my freedom on specific examples such as this one than give everyone the freedom to insert line breaks where I would find them very jarring.

        1. 2

          I buy that to an extent. But so much of what we do when we write code involves making subjective decisions. Starting with “What do I name this variable?” and going all the way up to “How should I design and architect this system?” Other devs might disagree with my decisions there. And some of the decisions they make might not be to my taste either, but de gustibus… It just seems a little silly to want to completely remove judgement and taste on this point.

          I suppose that maybe it’s a matter of domain. I work in graphics and have always enjoyed being close to the metal and having the control that comes from that. (And these days, I’m now designing pieces of the metal.)

          1. 1

            To be honest, if there ever existed a tool forcing good variable names I would 100% use it!

            But you make very good points, and I whole-heartedly agree.

            I guess personal preferences come from experience. Mine has been to use the black Python formatter, which gives very minimal freedom to devs, and even though I strongly disagree with some of the formatting choices it makes (formatting a language where indentation is significant is no easy task), letting go has been liberating!

      2. 2

        I tend to feel that when I am “arguing” with a formatter, it’s an indication that the code should be reformatted.

        In your example, if I was reviewing your code, I would be asking you to refactor it regardless of your formatting or the formatter’s formatting:

        withinX := box.min.x <= point.x && point.x <= box.max.x
        withinY := box.min.y <= point.y && point.y <= box.max.y
        withinZ := box.min.z <= point.z && point.z <= box.max.z
        if (withinX && withinY && withinZ)
            // do something

        Which is something a formatter can’t suggest (yet).

        1. 1

          Sure, that was just an example off the top of my head. And I will sometimes do transformations like that to appease the formatter. But the downsides to something like that are:

          1. It now requires me to spend extra time coming up with good names for the new identifier. I’m pretty picky about names and like to get them just right, so that can be a surprising amount of work sometimes.
          2. Now those identifiers live outside the scope of the if-statement instead of being inlined expressions, so on reading the code I’d have to wonder if they might be reused later on. If something is inlined, I know right away that it has minimal scope and can’t leak anywhere.
          3. Adding new identifiers like that makes it more verbose and increases the chance of typos. There’s now a chance that I might accidentally typo if (withinX && withinX && withinZ) or similar. (Granted, some compilers are pretty good now about warning on redundant clauses in boolean expressions.)
          1. 1
            1. It now requires me to spend extra time coming up with good names for the new identifier. I’m pretty picky about names and like to get them just right, so that can be a surprising amount of work sometimes.

            I don’t necessarily think that is a bad thing! I also am pedantic about names - after all, they do matter so much to readability. But the alternative (inlined conditions) means no name at all, and that seems worse to me.

            1. Now those identifiers are outside the scope of the if-statement instead of inlined expressions, and now on reading that I’d have to wonder if they might be reused later on. If something is inlined, I know right away that it has minimal scope and can’t leak anywhere.

            True enough! Hitting “find references” or the equivalent is pretty easy when I need to know though.

            1. Adding new identifiers like that makes it more verbose and increases the chance of typos. There’s now a chance that I might accidentally typo if (withinX && withinX && withinZ) or similar. (Granted, some compilers are pretty good now about warning on redundant clauses in boolean expressions.)

            Agreed, that is a risk. I think in this example, it is more likely to happen due to only a single letter changing (and it took me a couple of reads of your sentence to see the mistake!). In some cases, the editor (LSP/compiler/etc.) will tell you that a variable is unused (in Go’s case, it will fail to compile), but that doesn’t happen if it’s also used later in the function. Perhaps automated tests would cover this?

            1. 1

              Naming is hard (it’s one of the two hardest things in computer science, along with cache invalidation and off-by-one errors), so why introduce a name when it’s not needed? The first example reads fine for me.

        2. 1

          I wonder if a compiler will convert that logic to the initial version internally, to take advantage of short-circuiting to avoid executing the later booleans. Probably not a big difference in this specific case, but I often use similar code to avoid expensive calls.

          1. 1

            It seems like it would be something trivial for a compiler to inline, but as to whether they do or not (or rather, which compilers do, and which don’t)…no idea

            1. 1

              Good question! Let’s Godbolt it!

              Here’s x86-64 Clang 15.0 with -Ofast

              Here’s x86-64 GCC 12.2 with -Ofast

              In this particular case, it looks like Clang generates exactly the same thing both ways, but GCC generates slightly more verbose code in the version with the extra variables.
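              Here is roughly the pair of variants one might paste into a compiler explorer to reproduce the comparison (the `Vec3`/`Box` layout is invented for illustration):

              ```c
              #include <assert.h>
              #include <stdbool.h>

              typedef struct { float x, y, z; } Vec3;
              typedef struct { Vec3 min, max; } Box;

              /* Variant 1: a single inlined, short-circuiting expression. */
              bool contains_inline(const Box *box, Vec3 p)
              {
                  return box->min.x <= p.x && p.x <= box->max.x &&
                         box->min.y <= p.y && p.y <= box->max.y &&
                         box->min.z <= p.z && p.z <= box->max.z;
              }

              /* Variant 2: named sub-results, as in the suggested refactor. */
              bool contains_named(const Box *box, Vec3 p)
              {
                  bool withinX = box->min.x <= p.x && p.x <= box->max.x;
                  bool withinY = box->min.y <= p.y && p.y <= box->max.y;
                  bool withinZ = box->min.z <= p.z && p.z <= box->max.z;
                  return withinX && withinY && withinZ;
              }
              ```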

      3. 2

        Faithful to the author’s intent: if code has been written such that it spans multiple lines, that decision is preserved.

        That’s a challenging constraint!

        • On the one hand: for tabular code like you wrote, Boojum, wrapping the lines is about the worst thing a formatter can do. Formatters must leave such code alone.

        • On the other hand: for non-tabular long expressions, wondering how to improve readability by adding enough-but-not-too-many linebreaks … has been a bit of a productivity trap for me. All the more so whenever I’ve had colleagues whose philosophy was “place linebreaks wherever, I know what I mean when I write it”, thus providing a steady supply of productivity traps. Here, having an autoformatter is a godsend, because it provides an obvious choice even when there is no obviously optimal choice.

        • On the gripping hand, distinguishing intended linebreaks from thoughtless linebreaks doesn’t have to be the formatter’s job. I currently mark tabular code as ‘don’t touch this’ like below, and I’m perfectly happy with that.

          # fmt: off
          # fmt: on
    13. 9

      A little off-tangent, but I really find the Haiku desktop and GUI theme lovely whenever I see screenshots. It’s not flat! It has shading! Icons have color!

      1. 5

        Couldn’t agree more. I can’t stand “flat design”, soulless grey, white space and monochrome icons everywhere. Haiku is a refuge from all that, and is just plain fun to use.

      2. 4

        I agree - it looks amazing and has a bit of an 8-bit aesthetic, without relying on pixelation to achieve that feeling.

    14. 5

      I can’t speak to the Rust side of this post, but as far as Emacs goes two really useful key combos to know are:

      • C-x 8 RET (insert-char)
      • C-u C-x = (prefixed what-cursor-position)

      The former lets you insert various Unicode characters by either name or hex code. When you hit it, it will prompt you for which character, and you can type something like SMI and hit tab to see a list of matching names (smiley faces and the like).

      The latter will pop up a help buffer showing you tons of information about the character that the point is on: the Unicode codepoint, name, and categories, the encoding, the font that the displayed glyph was actually pulled from, any properties like syntax highlighting, etc. If you do it without the C-u prefix and hit only C-x =, then you just get a short one-line summary in the modeline.

      Another thing I use surprisingly often is a convenience function that runs M-x occur RET [[:nonascii:]] RET, to quickly check for and locate things like stray curly quotes that have been inadvertently pasted into a file.

    15. 3

      Wow. This is the first I’d heard of this game, but as a font nerd married to a history (esp. early 16th century) nerd, this looks awesome. Definitely has that labor-of-love look to it!

    16. 1

      As a meta-question, how do you select what to read? I tried following the fire hose that is the Arxiv CS web feed for a while, but the titles were all either hyper-specialized to the point where I knew none of the nouns, or so general that they sounded more like a manifesto than research.

      1. 3

        I used to read “the morning paper” which had a CS paper a day, but that unfortunately stopped. The archives are still up though:

      2. 2


        People post papers here sometimes.

      3. 2

        Semantic Scholar has a recommendation engine that will give you suggestions for new papers related to your interests. Aside from that, I skim conference proceedings for titles that seem interesting.

      4. 2

        Honestly, I just relentlessly google phrases that come to mind, until I find projects / papers / books that I elevate to my personal canon. Once you find major cornerstones like that, you can get pretty far by reading everything related to them and everything that they reference.

        For example, the two biggest cornerstones I’ve found in the last couple of years are TLA+ and the seL4 project. Luckily, both of these have dozens of papers related to them, and each paper references their previous work as well as related work in the field.

        Seriously - try putting into words what you’re looking for, even if it’s very high level. I was googling really vague things like “software correctness,” and even that will get you going. The trick is figuring out exactly what it is you’re interested in, and putting that into words.

        1. 1

          Sounds like something ChatGPT might do even better now. Worth a shot anyway.

          1. 1

            No, I don’t think it would. By finding things “the hard way,” I’ve learned many things and connected many brain pathways that would never be reproduced by just getting a list of answers to your questions right away.

          2. 1

            A couple of weeks ago I experimented with asking ChatGPT for fiction book recommendations. When I asked for more books like one of them, it told me that book was the first in a trilogy. The trilogy proved to be non-existent. The titles were real, but by other authors.

            I’ve seen mention before about it confidently making up plausible sounding references to fictitious papers. So do be careful.

      5. 2

        This is how I do it: start with a paper, then find:

        1. cited papers
        2. papers which cite the same papers
        3. other papers by the same author
      6. 1
        • Find journals and conferences on topics you’re interested in. Go to their pages, see the latest editions, and read titles/abstracts. Expand from there.
          Here’s an example from ACM with a list of all conferences:
        • Find papers you like and see where they were published and expand from there.
        • Find #papers-we-love on GitHub, Twitter, YouTube, etc.

        Note, sometimes the PDFs are behind paywalls (unless you’re at a uni that pays for licenses). There are ways to get the paper, like from the author’s website (or libgen!).

      7. 1

        To add to the other suggestions by the sibling replies, try seeing if there are any State-of-the-Art Report (STAR) papers on your topic, or other good literature reviews. Those can often give you a nice starting place with an overview of the topic and a ton of links to other worthwhile papers to read.

        E.g.: for graphics.

    17. 10

      Quite good article, as usual from this author, but could probably use a slightly less aggressive title and tone. As it is it’s not quite a rant, but it probably won’t convince anyone whose mind isn’t already made up. But also whenever you keep hearing the same arguments about something it saves a lot of time to be able to put down all the responses in one place and just refer people to specific parts. Rehashing the same arguments all the time is tiring.

      It also makes it extremely hard to integrate Go with anything else, whether it’s upstream (calling C from Go) or downstream (calling Go from Ruby). Both these scenarios involve cgo, or, if you’re unreasonably brave, a terrifying hack.

      While indeed a terrifying hack… the linked article is also a pretty basic walk-through of what it takes to make different calling conventions match up with each other. What do you think your compiler and linker do every day of the week? The only difference is that they know what they’re doing beforehand. The terror mostly comes from breaking Go’s calling conventions for its own good.

      1. 6

        it probably won’t convince anyone whose mind isn’t already made up

        Few words ever do. The real target should be the undecided, and I felt the article did okay on that front.

        1. 1

          Uh, I am not sure whether I disagree with that or not. I can see young, naive people reading this and saying, “omg I better not choose Go”, but if they ran into a pro-Go article first, they would do the opposite. So yeah, if the inexperienced are the target, it might work.

          To me (not working with Go, but being experienced in other languages), it looks like a lot of cherry-picked straw-man examples. Not all of them, but many of these things are either one-sided or not relevant at all.

          E.g. from the list at the bottom of the article:

          Others use it, so it must be good for us too

          Everyone who has concerns about it is an elitist jerk

          What kind of argument is that? Who, apart from the aforementioned juniors, picks languages and technologies based on such arguments?

          Its attractive async runtime and GC make up for everything else

          Again, a lot of people picking Go say “I like the async runtime”, but I don’t think people tell themselves “the GC will make up for things for me”. Basically, all the points mix some good examples and explanations with premises and assumptions, and the “we tell ourselves these lies so we can keep using Go” framing is simply not proven to me; the lies are just made up to fit the narrative of bashing Go.

          And this has in turn made me think that the author is clearly smart and good, but they are not writing anything for me, but to promote some agenda that they have.

          So again, I’m not sure if the article can or cannot convince readers of anything, and what is it that it is convincing.

          1. 1

            I don’t think the article is self-sufficient; you need other sources to make up your mind. Personally, my mind was made up when I learned Go didn’t have generics, which in my opinion is unacceptable for any garbage-collected language conceived after Java added them. And sure enough, a couple of other serious flaws came up that showed that indeed, they did ignore the last few decades of PLT research.

            As far as I can tell, Go’s legitimate niche is narrow enough that I wouldn’t use it for anything substantial. Same as Python, actually: I use Python often, but never for anything big. Either it has a library that does what I need and my program can stay very short, or I’m using its bignum arithmetic to prototype elliptic curves stuff.

            Others use it, so it must be good for us too

            Everyone who has concerns about it is an elitist jerk

            What kind of argument is that?

            It sneers at arguments by popularity. In fact, I suspect that Go would never have caught on if people couldn’t say “Google uses it”. Thing is, even if it’s good for Google, Google is an extremely singular company. Programmers are extremely unlikely to work in a similar enough setting that looking at what Google uses is a good heuristic.

            Who, apart the aforementioned juniors, picks languages and technologies based on such arguments?

            A good chunk of the great many people that use Go and came to regret it? As far as I know juniors don’t pick languages in the workplace, that decision is given to more seasoned devs. Senior folks still make mistakes, of course, but here I feel like many of the flaws that ended up biting them could have been anticipated:

            • No generics? We need to ascertain that maps & slices will be enough.
            • Peculiar error handling? We need to see how it works, what the pros are, and what could go wrong.
            • Poor C interoperability? We need to look at the FFI, and if we can’t reasonably talk to C we need to investigate alternatives (pipes, sockets…).
            • Few platforms are properly supported? We need to try all platforms we are likely to use.

            I suspect corners were cut instead. Prototypes were written, and it went well enough that they found their way to production before the language could be properly evaluated (devs were stuck in honeymoon phase, management wanted to ship, deadlines were tight…).

            i don’t think people tell themselves, “GC will make up for things for me”

            Go’s GC has an unusual performance profile. If I recall correctly, it optimises latency at the expense of pretty much everything else. I guess that means it has more overhead than other GCs, but the low latency remains very useful for real-time networking… which is precisely Go’s niche. So not only could people tell themselves the GC makes up for things, if they use Go for its intended purpose they could even be right.

      2. 2

        Quite good article, as usual from this author, but could probably use a slightly less aggressive title and tone. As it is it’s not quite a rant, but it probably won’t convince anyone whose mind isn’t already made up.

        I would like to problematize the idea that an aggressive tone would prevent people from being convinced. Are our brains really that weak, that conclusions that we should derive from facts are actually a product of the tone with which the facts are presented?

        1. 27

          Are our brains really that weak, that conclusions that we should derive from facts are actually a product of the tone with which the facts are presented?

          Yes. People get defensive when they feel attacked. Unfortunately this is better documented in books about romantic relationships than any material directly relevant to software engineering.

          Some people thrive in environments open to hostility. OpenBSD and the culture Theo de Raadt cultivates around it proves that. But most people aren’t like that.

          1. 5

            This is tone policing, though. A less aggressive tone is usually ignored; it’s common knowledge in PLT/PLD circles that Go is unacceptable for typical tasks, but because it’s said politely, it is not taken seriously as a rationale for discarding Go.

            1. 12

              Personally I think the original article is fine. My problem is with this nonsense notion that people who don’t respond well to aggression are weak-minded. In the software community that attitude is often used as a bad faith justification for being an asshole.

              1. 11

                I’ve also seen people using the tone of the speaker/writer as a justification to ignore their core points, which is lazy at best. I’ve seen people dismissing well reasoned, level headed, professionally delivered arguments just because the same author used a more aggressive tone than they are comfortable with elsewhere. And sometimes, the aggressive tone is actually justified. Not as an argument unto itself, but as a way to stress the importance of the issue.

                My problem is with this nonsense notion that people who don’t respond well to aggression are weak-minded.

                Sure, people who are personally attacked can’t be expected to respond well. I know exactly how that feels. But I’ve also seen people responding poorly to other things. One striking example from a year ago went as follows:

                1. Mr Aggressive gives constructive feedback to Team, suggests what ought to be a fairly easy fix. I perceived no aggression at this point.
                2. Team dismisses the feedback, say the fix would be way too hard, and Mr Aggressive should tone it down.
                3. Mr Aggressive gives up on Team, leaves the conversation.
                4. Some time later, Mr Aggressive takes 4 days of their own time to write a proof of concept that quite thoroughly demonstrates that what they said was easy, is. Then he presents his work on YouTube. This time he’s not being kind to Team. He gave up on convincing them, now he’s using them as an example not to follow.
                5. Quite a few bystanders respond poorly to that. Some say he’s too aggressive and use that as an excuse not to listen. Others rehash the same arguments Mr Aggressive explicitly addressed, and use those as an excuse to ignore Mr Aggressive’s counter-arguments.
                6. Mr Aggressive delivers a level-headed, calm lecture explaining in detail how to implement the fix, why it works, and what’s the broadly applicable meta-reasoning that led to the fix.
                7. Some people respond poorly to that, because of the aggressive tone back in (4).

                Sometimes, merely pointing out that there is no God will be perceived as an aggression by some religious people. It’s really not. It’s just an attack on their mistaken belief. Yet I’m pretty sure many religious readers would have flinched a little at my exact choice of words right now. Maybe perceived it as too combative, perhaps even a little aggressive. But that perception should not be used as an excuse to respond poorly. That would be weak minded.

                1. 2

                  Terminals, eh?

                  1. 1

                    I hesitate to confirm or deny anything here, because it would shift the discussion to the particulars of whether this tone or those words were appropriate for these circumstances, and the technical discussion about whether you believe Team or Mr Aggressive.

                    I did forget point 8 though: about a year after the events (a couple months ago I think), Team ended up implementing the fix, or something similar enough that it achieved similar effects.

              2. 2

                You did say “yes,” seemingly agreeing that being swayed by tone rather than facts is a weakness (at least on questions of engineering). And you seem to imply here that this is “not responding well to aggression,” which is about the same. Sure this observation can be used for ill, but I don’t see why anything said here warrants an aggressive response.

            2. 7

              It is possible to make one’s statements aggressive, passionate and convincing without being a douchebag. Hurting people because it’s a convenient shortcut for getting attention is pure laziness.

          2. 3

            So a secret Google agent could easily psyop you into liking Go by writing an overly aggressive rant against it.

            1. 3

              No, that’s absurd.

              1. 4

                It’s a hypothetical.

                1. 9

                  No, it was a feeble and extremely transparent attempt to mock my position. It doesn’t even follow from anything I said, you’re just saying goofy things to paint me as “weak minded.” It’s absurd and childish.

                  1. 4

                    I’m sorry you read it that way. I wasn’t the one who made personal insults though.

                    1. 2

                      Neither did peter.

                      1. 4

                        It’s right there…

        2. 13

          Are our brains really that weak, that conclusions that we should derive from facts are actually a product of the tone with which the facts are presented?

          Yes, without question.

        3. 6

          (Edited out a glib intro)

          If your objective is to try and convince fairly neutral people that something is “bad”, you can just share what info you have. At some point, implying people are stupid will indeed make the people whose stupidity you are implying feel annoyed at you!

          Would you give the time of day to somebody calling you an idiot? This is obviously not as direct, but it’s on the scale.

    18. 3

      Yep, half-open intervals and zero-based indexes are the way. When I was learning programming with BASIC, I had so many off-by-one errors due to the way that for-loops there are inclusive. When I later learned C and then C++, embracing the half-open interval was helpful. Then, learning the STL-derived part of the standard library really hammered the lesson home. I honestly can’t remember the last time now that I had an off-by-one error when using the half-open interval.

      Also, I work in graphics. Dealing with linearized storage of multi-dimensional arrays at a low level would be so much more of a hassle with closed intervals or one-based indexing.
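      A minimal sketch of why in C (the dimensions are arbitrary): with zero-based indices and half-open bounds, the flattening formula and every loop bound fall out with no +1/-1 adjustments anywhere:

      ```c
      #include <assert.h>

      /* The flat offset of element (z, y, x) in a D x H x W volume is
       * (z*H + y)*W + x when indices are zero-based. */
      enum { W = 4, H = 3, D = 2 };

      int flat_index(int z, int y, int x)
      {
          return (z * H + y) * W + x;
      }

      /* Iterating [0,D) x [0,H) x [0,W) visits offsets 0 .. W*H*D-1 in
       * order, each exactly once; every loop bound is just the extent. */
      int visits_every_offset_once(void)
      {
          int expected = 0;
          for (int z = 0; z < D; z++)
              for (int y = 0; y < H; y++)
                  for (int x = 0; x < W; x++)
                      if (flat_index(z, y, x) != expected++)
                          return 0;
          return expected == W * H * D;
      }
      ```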

    19. 2

      Does the AGPL prevent corporations from using mold as a normal command-line program without sharing their own source code? I don’t think it does, and if not, I don’t understand what motivation the majority of companies would have to purchase a license. As long as the code of mold itself is not linked into their own product (the most likely use case), they are under no obligation to do anything.

      I think Rui has a few practical options here:

      1. Sell “support contracts”. This would allow companies to make special requests of Rui and have special features added.
      2. Sell “special features”. Keep advanced features that customers are likely to need out of the open-source version and license their usage only to paying customers.
      3. Relicense all of mold under a more restrictive license, i.e. only for hobbyist/non-commercial usage (not GPL), and sell licenses for commercial use.

      Personally, I would go with 2). This is the route that SQLite went. Keep the mold core light and sufficient for hobbyist application development, and keep a treasure trove of advanced extensions for customers who want to pay. For instance, only support x86-64-{linux,mac,windows} and maybe ARM64 in the open-source version (desktop platforms), yet make embedded platform support paid (ARM32, RISCV32, hex output, etc.).

      My $0.02. I wish Rui all the best.

      1. 2

        Cppcheck is another software development tool that went with option 2 last year, with the launch of Cppcheck Premium.

    20. 1

      Ultimately, the M1 is incredibly fast. By being so much wider than comparable x86 CPUs, it has a remarkable ability to avoid being throughput-bound, even with all the extra instructions Rosetta 2 generates.

      Anyone know what “wider” means in this case? Dispatches and handles more instructions at once?

      1. 2

        Yes, basically. I don’t know of an exact agreed-upon definition of ‘width’—save perhaps simply ‘capacity for parallel computation’—but the combination of a great number of execution units, great memory bandwidth (and a decoder that can keep up!), and great reorder buffer serves to ameliorate or completely erase the effect of superfluous junk instructions.

        1. 2

          The M1 is especially known for its wide decoder. It can decode 8 instructions at once, which is unheard of in other mainstream microarchitectures. AMD’s Zen 3 & 4 top out at 4 instructions; Intel managed to push Golden Cove (perf cores in 12th gen CPUs) to 6 instructions.

          This is one of those things where RISCy ISAs provide a tangible advantage: aarch64 instructions are generally 32 bits wide (I seem to remember there being some extension which includes instructions encoded as a pair of 32-bit words?) and start on 4-byte boundaries. With x86(-64), on the other hand, instructions are 1-15 bytes long and can start at any byte, which makes parallel decoding challenging.
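
          A toy sketch of why fixed-width decoding parallelizes so easily (the instruction lengths below are made up for illustration): with a fixed 4-byte encoding, decoder lane *i* knows its start offset is just `4 * i`; with variable-length instructions, lane *i*’s offset depends on the lengths of every earlier instruction, a serial dependency the hardware must speculate around.

          ```cpp
          #include <cassert>
          #include <cstddef>
          #include <vector>

          // Fixed-width (aarch64-style): each decoder lane computes its
          // fetch offset independently of all other instructions.
          std::size_t fixed_start(std::size_t i) { return 4 * i; }

          // Variable-length (x86-style, 1-15 bytes): lane i's offset is
          // the running sum of all earlier instruction lengths, which
          // cannot be known until those instructions are (at least
          // partially) decoded.
          std::size_t variable_start(const std::vector<std::size_t>& lengths,
                                     std::size_t i) {
              std::size_t off = 0;
              for (std::size_t k = 0; k < i; ++k) off += lengths[k];
              return off;
          }

          int main() {
              assert(fixed_start(7) == 28);  // lane 7 fetches at byte 28, no lookups
              std::vector<std::size_t> lens{1, 3, 15, 2, 7};
              assert(variable_start(lens, 4) == 21);  // 1 + 3 + 15 + 2
              return 0;
          }
          ```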

        2. 2

          Having twice the number of architectural registers (as noted in the article) doesn’t hurt either.