Threads for hannu

  1. 8

    Cute benchmark pic, but:

    • No decompression runtime / memory.
    • No lz4 or zstd comparison points.
    1. 12

      I agree, the benchmark is weird. Besides the fact that some important competitors are missing, I understand that compression implementations make trade-offs between time/memory and compression efficiency, so it’s hard to compare them fairly by testing just one of their configurations. Instead you should plot, over varying configurations, the time/memory consumption against the compression ratio. (If the curve drawn by one algorithm is always “above” another, then one can say that it is always better, but often they will cross in some places, telling us interesting facts about the strengths and weaknesses of both.)
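
      A rough sweep along those lines might look like this (just a sketch: it assumes GNU time, zstd and bc are installed, and corpus.tar is whatever test input you care about):

      for level in 1 3 9 15 19; do
        /usr/bin/time -f "zstd -$level: %e s, %M KB max RSS" \
          zstd -q -k -$level -o corpus.zst corpus.tar
        echo "  ratio: $(echo "scale=2; $(stat -c %s corpus.tar) / $(stat -c %s corpus.zst)" | bc)"
        rm corpus.zst
      done

      Repeat the sweep for each competing codec and plot the resulting (time, memory, ratio) points; the curves are what you compare, not any single configuration.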

      1. 2

        Good point. And there should be a tool for running benchmarks and plotting the charts automatically.

        1. 2

          First commit on May 1st this year. Hopefully they’ll get to it!

      2. 2

        They do compare to lz4 and zstd for some of the test workloads, not sure why not for everything. They’re not the comparison I’d like though, I don’t see an xz comparison anywhere. For me, xz has completely replaced bzip2, for the things where compression ratio is the most important factor. lz4 and zstd are replacing gzip for things where I need more of a balance between compression ratio and throughput / memory consumption.

        1. 2

          They do compare to lz4 and zstd for some of the test workloads, not sure why not for everything.

          They added that after my comment… and yeah, it’s odd they didn’t do it on all workloads.

          xz is based on lzma, but not exactly the same. Maybe they thought including lzma was enough.

          1. 1

            Tangentially, I tried using brotli for something at work recently and at compression level 4 it beats deflate hands down, at about the same compression speed. I was impressed.

        1. 5

          Doesn’t include anything in the APL language family. BQN is my favorite: it’s modern in many ways and supports other paradigms too. After learning the basics it’s sufficient even for tasks that don’t fit the array paradigm at all, yet much better than traditional languages, numpy, etc. when working with n-dimensional arrays.

          1. 2

            Wow I had not known about BQN but I’ve played with J and APL. Thanks so much for this.

          1. 3

            The “single binary” blog thing has mystified me. (I saw a couple recent posts but didn’t comment.)

            I guess this is because rsync has pitfalls? The trailing slash issue was mentioned recently and that is indeed annoying.

            Otherwise there’s no difference between a single file and a tree of files. And in fact the tree of files is better because you can deploy incrementally. This matters for a site like https://www.oilshell.org (which is pretty big now).

             When I update a single blog post it takes about 1 second to deploy because of rsync (and 1 second to rebuild the site because of GNU make, which I want to replace with Ninja).
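
             For reference, the deploy step is a single rsync invocation along these lines (a sketch; the host and paths here are made up):

             # Only files that changed are transferred, so updating one post takes about a second.
             rsync -avz --delete _site/ user@shared-host.example.com:www.oilshell.org/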

            Using git would also be better than copying the entire blog, since it implicitly has differential compression.


            Another thing I’m mystified by is people writing their own web servers, setting up their own nginx, managing certificates, etc.

            I just used shared hosting (Dreamhost, and I tried Nearly Free Speech, but didn’t think it was as good). But you can also use Github pages?

            I think shared hosting dropped the ball a lot in terms of marketing and user friendliness. They didn’t figure out a good deployment and tooling story. They left everyone alone with shell and I guess most people are not fluent with shell. To be honest I wasn’t as fluent with it when I first got the account! (way back in 2009)

            But now I think it’s by far the best solution because it gives me a standard / commodity interface. I can use git / rsync / scp, and I don’t have to manage any servers. It’s “serverless”.

            Maybe another problem is that shared hosting was associated with PHP. I think it fell out of favor because it couldn’t run Rails or Django well.

             But then people started writing static site generators, and doing more in client-side JavaScript, and it’s perfect for those kinds of sites. So IMO it’s a more stable alternative to Netlify or Github pages.

            I’ve heard people complain that with Github pages you are never sure when the changes will be visible. Did you do something wrong, or is the site just slow? Not a problem when you have shell access.

             I guess the other downside is that it costs some money, like $5 or $10/month, but I view that as a positive because the business model is self-sustaining. Dreamhost has been stable since 2009 for me. There are many fewer dark patterns. The only annoyance is that they advertise a lower first-year rate for domains, so the second year you’re surprised. But pretty much everyone does that now :-/


             Another trick I use is to bundle 10,000 static files together in a zip file and serve it with a FastCGI script (.wwz):

            https://github.com/oilshell/wwz

            This is basically for immutable archives and cuts down on the metadata. I have automated scripts that generate 10,000 files and rsync will slow down on all the stats(), so I just put them in a big zip file. And again the nice thing is that I am not maintaining any web servers or kernels.

            1. 4

              Some people prefer to use things they understand, and part of that might be standing up a web server for self-hosting. It’s not rocket science to configure nginx + Let’s Encrypt… And part of it, I suspect, is that shared hosting sites come and go, so not having to be reliant upon a host which may be out of business tomorrow is also a benefit.

              I had mostly negative experiences using early shared hosting interfaces (ugh cpanel..), wordpress was a little better (and you could self host it) but it is waaaay overkill for simple static sites. Not to mention it’s a beast to set up.

              Of course there are risks and additional costs with self-hosting stuff. But I’d expect the person who created a new shell to understand the trade-offs/benefits, generally speaking, between using what already exists and making something brand new :D

              1. 3

                Yeah I think part of it is that early shared hosting sites did suck.

                I remember I used 1and1 and it was full of sharp edges, and the control panel was also confusing.

                Specifically I think Dreamhost is quite good. I have been using it since 2009. Nearly Free Speech is pretty good, but it seems to use some kind of network-attached disk which makes it slow in my limited experience. (Also, they don’t seem to advertise/emphasize that the shell account is BSD! Important for a Linux user.)

                I also maintain nginx and LetsEncrypt on a VPS for a dynamic site I’ve been using for over a decade. (Used to be Apache and a certificate I bought, etc.)

                I think shared hosting is by far superior, although as mentioned I recognize there are a few sharp edges and non-obvious choices. I’m not responsible for the kernel either.

                I would call using shared hosting “using what already exists” … And standing up your own VPS as “making something new”. It will be your own special snowflake :)


                 One interesting thing is that nginx doesn’t actually support multi-tenant sites with separate user dirs and .htaccess. So all shared hosting uses Apache as far as I know. That isn’t a problem since I never touch Apache or nginx on shared hosting – I just drop files in a directory and it’s done.

                 But I would say there’s strictly less to understand in shared hosting. I rarely log into the control panel. I just drop files in a directory and that’s about it. My interface is ssh / rsync / git. Occasionally I do have to look up how to change an .htaccess, but that’s been maybe 2-3 times in ~12 years.

              2. 2

                Another thing I’m mystified by is people writing their own web servers, setting up their own nginx, managing certificates, etc.

                 I can give you my reason for this one. I run my site and other stuff on a simple vps with nginx. That was not my first idea for a solution, but I discovered that there are hardly any offerings for what I want, and what I want is dead simple: I have a bunch of html pages and I want them to be served on my domain. You cannot do this with the well-known platforms. They all require you to put the files in a git repository.

                 I am not going to set up some scripts that automatically commit new versions to a git repository and then push that. There is no need to; the html gets generated from other projects that are already version managed. sudo apt install nginx is actually the easiest solution here. I want to be able to do one rsync command because that is all that is needed.

                 If you look a bit further, there are some companies that offer this, but the costs are always equal to or higher than renting a vps and they will always limit your flexibility. There are probably some giant tech companies that have a way of getting this done for free or for very cheap, but it will inevitably involve accounts, convoluted dashboards/configuration and paying constant attention not to accidentally use a resource that costs a lot of money.

                 Perhaps it sounds complicated for someone who has never seen a shell, but managing a site with nginx, certbot and rsync is about as simple as you can get for this.

                1. 1

                  Paying $5/mo for a VPS has improved my computing QoL immensely.

                  • It hosts my sites
                  • I use it to host tmux to have IRC access
                  • I can run random net stuff from it

                  All in all a great tool and service.

                2. 2

                  I guess this is because rsync has pitfalls?

                  it’s pretty simple for me: i don’t like using rsync. i don’t like the awkward syntax (trailing slash or no?), i don’t like managing files, putting my SSH keys everywhere, etc. i understand rsync just fine - i use it professionally.

                  but when it comes to my free time, i optimize for maintainability, reliability, and fun. i can’t explain the titillating feeling i get when i see a binary compile & launch my entire contained website - and i don’t have to :D it’s just good subjective fun.

                  it’s nice to look at my website’s code and know that it’s self-contained and doesn’t rely on this-being-here or that-being-there. it launches exactly the same way on my local machine as it does on my remote machine. also, my website isn’t just about writing posts - it’s also about providing tools. for example, i include an rss reader, an rss fetcher, an age decryption utility, an IP fetcher, etc - all without any javascript! and it’s all right there & testable locally! it’s fun to treat my website like an extension of my ideas instead of “yet another blog” - and my one-binary approach really lends itself to that.

                  I can use git / rsync / scp, and I don’t have to manage any servers.

                  very fair. i have a server sitting around that i like poking once in awhile

                  But you can also use Github pages?

                  in my blog post, i have a section dedicated to why i don’t want to use some-other-hosting-platform for my personal website. the idea that my website is almost entirely under my own control is very important to me. eventually, i plan on porting it to a small SBC running in a van! a future post will describe this process :D

                   not knocking anyone who chooses something like github pages btw, it’s just not for everyone - it depends on the person’s values.

                  i hope this helps make things less mystifying.

                  1. 2

                    trailing slash or no?

                    You should always use trailing slashes with rsync, that’s it. rsync -a src/ dst/

                    1. 2

                      i think that this is misleading given that trailing slashes on the source can change rsync’s behavior

                      A trailing slash on a source path means “copy the contents of this directory”. Without a trailing slash it means “copy the directory”.
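
                       Concretely (a tiny example with made-up paths):

                       # no trailing slash: copies the directory itself -> dst/src/index.html
                       rsync -a src dst/
                       # trailing slash: copies the contents of src -> dst/index.html
                       rsync -a src/ dst/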

                    2. 1

                      Yes this makes sense for the dynamic parts of the site. I can see why you would want to have it all in a single language and binary.

                       I’m more skeptical about the static content, but it all depends on what your site does. If you don’t have much static content then the things I mentioned won’t come up … They may come up later, in which case it’s easy enough to point the static binary at a directory tree.

                      (Although it looks like Go has some issues there; Python does too. That’s one reason I like “borrowing” someone else’s Apache and using it directly! It’s very time-tested.)

                      1. 1

                        totally! if i had like 10k pages of content i’m sure things would look different. :3

                    3. 2

                      Why? Because Outsource All The Things! limits what you can actually do. I wrote my own gopher server [1][2] because the ones that exist don’t do what I want (mainly, to serve up my blog via gopher). And while I didn’t write my own web server, I do run Apache, because I then have access to configure it as I want [3] and by now, you can pretty much do anything with it [4]. For instance my blog. I have several methods of updating it. Yes, I have the obligatory web interface, but I’ve only used that about a handful of times over the past two decades. I can add an entry directly (as a file), but because I also run my own email server (yes, I’m crazy like that) I can also add blog entries via email (my preferred method).

                      Is it easy to move? Eh … the data storage is just the file system—the hard part is getting the web server configured and my blogging engine [5] compiled and running. But I like the control I have.

                      Now, I’m completely mystified and baffled as to why anyone in their right mind would write a program in shell. To me, that’s just insane.

                      [1] gopher://gopher.conman.org/

                      [2] https://github.com/spc476/port70

                      [3] You would never know that my blog is CGI based.

                      [4] Even if it means using a custom written Apache module, which I did back in the day: https://github.com/spc476/mod_litbook, still available at http://literature.conman.org/bible/.

                      1. 1

                        I find the limitation in outsourcing to often be more about learning than features per se. I could well write a gopher server myself, but I’d expect it to be worse than existing gopher servers. Still a worthwhile endeavor, if I didn’t know how to write servers.

                        In a similar vein, I created a wiki engine that works over gemini and is edited with sed commands. It’s probably the worst wiki UX in existence, but last I checked it was the only wiki running natively on gemini (only). I learned a lot in the process, and some other people also found the approach interesting (even if not useful). Maybe it even inspired some other gemini software.

                        So yes, there are many good reasons for writing software even when the problem has already been solved.

                        1. 1

                          I think either extreme is bad – outsourcing everything or writing everything yourself.

                          Also not all “outsourcing” is the same. I prefer to interface with others using standard interfaces like a www/ dir, rather than vendor-specific interfaces (which are subject to churn and lock-in).

                          Shell helps with that – it’s reuse on a coarse-grained level. I am OK with dependencies as long as I can hide them and contain them in a separate process :)

                          I hinted at similar ideas on the blog:

                          http://www.oilshell.org/blog/2021/07/cloud-review.html

                          More possible concept names: Distributed Shell Scripts and Parasitic Shell Scripts. These names are meant to get at the idea that shell can be an independent control plane, while cloud providers are the dumb data plane.

                          A prominent example is that we reuse multiple CI providers but they are used as “dumb” resources. We have shell to paper over the differences and make programs that are portable between clouds.

                          This also relates to my recent “narrow waist” posts – a static website is a narrow waist because it can be produced by N different tools (my Python+shell scripts, Jekyll, Hugo, etc.) and it is understood by multiple cloud providers (shared hosting, github pages, even your browser).

                          So it’s a good interface to serve as the boundary between code you own and code that other people own.

                          Here is a comment thread with some more links (login required):

                          https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/Comments.3A.20FastCGI.20and.20shared.20hosting

                      1. 10

                        Ugh. I was already unhappy about how Zig programs have to pass an allocator reference all over the place, adding it to function signatures and object fields. But now that reference is going to be twice as big, i.e. taking up two ABI registers in calls and making all those structs 8 bytes bigger. Just to save a few cycles dereferencing it.

                        1. 23

                           Really? The everything-that-allocates-passes-an-allocator thing is one of my favorite pieces of Zig’s design. I like the fine-grained control it gives and it makes manual memory management pleasant.

                          1. 8

                            Passing an allocator is annoying until you want code that doesn’t allocate. This approach is building a bright future for resource-constrained (bare-metal) platforms. It becomes possible to identify allocating parts of code, and people will probably write much more allocationless libraries.

                            1. 8

                              And also opens up fine-grained control of when things are allocated/deallocated by allowing a selection of allocators whenever necessary.

                              Use a limited bump/arena allocator per API request and free everything at the end. Use a totally different allocator on the backend. Use the C allocator when passing pointers to C functions. It’s all good.

                              1. 1

                                It’s all good.

                                 Unless I am missing something, with this design all allocations in Zig incur the overhead of a virtual function call. With this change, there is a better chance of devirtualization but AFAICS no guarantees of any kind. While this is probably not a big deal if your allocator ends up calling malloc(), for something like a stack-based arena this can be a significant overhead.

                                Contrast this to C++, where an allocator type can be (and is for all the std containers) a template parameter, which means your allocator calls can actually be inlined.

                                1. 1

                                  overhead of a virtual function call

                                  Branch prediction is a funny thing. Here are two functions; both do a lot of work and perform a lot of calls. One performs those calls directly, and one indirectly. On my machine, they take the same amount of time to execute (amd 3960x, about 0.6 seconds).

                                   (Of course that is a microbenchmark and not representative, but I think the point is illustrative. E.g. jump to a thunk for your allocations and you won’t be so likely to blow out your BTB. In the limit, use an inline cache.)

                            2. 5

                               It incentivises building abstractions that don’t allocate in perf-critical calls.

                            1. 6

                              Kudos for making it reusable and not specific to a single static site generator. I would have just used a Zola shortcode and never even thought anybody else might want it (as I did for a git log renderer used here).

                              1. 3

                                 Thanks! The git log renderer looks really nice, I think it’s great to feature that on your website.

                              1. 9

                                This workflow is just bringing (client-side) Git up to parity with Mercurial as used at Google/Facebook:

                                • Adds in-memory rebases (for performance reasons).
                                • Adds changeset evolution.
                                 • Encourages a “patch-stack”/“stacked-diff” workflow with trunk-based development.
                                • Discourages the use of branches/stashes/staged changes, in favor of commits for everything (where possible).

                                Unfortunately, I’ve found it hard to motivate the collection of features as a whole. But most people can use at least one of the following in their workflow:

                                • git undo to undo operations on the commit graph.
                                • git move as a saner, faster replacement for git rebase.
                                1. 3

                                   I looked at this project a few weeks ago and I seem to not get it. Maybe it is something with the naming, but how is it branchless? All work in git is on a branch. What is the difference if I have a local checkout and work on feature-xyz instead of main? I still do git fetch && git rebase origin/main regardless of the local branch name. I could not figure it out from the README tbh.

                                   Note that I am from the “rebase, squash and no-merge-commits” school of thought, so maybe this is going in a similar direction, but I don’t really understand what the USP here is. Maybe it makes more sense to people who have used Mercurial?

                                  1. 11

                                    All work in git is on a branch.

                                    Under git-branchless, this is no longer necessary. There are three main ways to not use a branch:

                                    • Run git checkout --detach explicitly.
                                    • Use one of git next/git prev to move along a stack.
                                    • Use git checkout with a commit hash (or git co from the latest source build).

                                     Why is this useful? It helps you make experimental changes to various parts of the commit graph without dealing with the overhead of branch management. Suppose I have a feature branch feature made of three commits, and I get feedback on the first commit and want to try out various approaches to addressing it. My session might look like this:

                                    $ git checkout -b feature
                                    $ git commit -m A
                                    $ git commit -m B
                                    $ git commit -m C
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ◯ 9775f9a6 4m A
                                    ┃
                                    ◯ 97b8d332 4m B
                                    ┃
                                    ● 67fa26af 4m (feature) C
                                    
                                    $ git prev 2
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ● 9775f9a6 6m A
                                    ┃
                                    ◯ 97b8d332 6m B
                                    ┃
                                    ◯ 67fa26af 6m (feature) C
                                    
                                    $ git commit -m 'temp: try approach 1'
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ◯ 9775f9a6 7m A
                                    ┣━┓
                                    ┃ ◯ 97b8d332 7m B
                                    ┃ ┃
                                    ┃ ◯ 67fa26af 7m (feature) C
                                    ┃
                                    ● 9dba147f 1s temp: try approach 1
                                    
                                    $ git prev
                                    $ git commit -m 'temp: try approach 2'
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ◯ 9775f9a6 8m A
                                    ┣━┓
                                    ┃ ◯ 97b8d332 8m B
                                    ┃ ┃
                                    ┃ ◯ 67fa26af 7m (feature) C
                                    ┣━┓
                                    ┃ ◯ 9dba147f 58s temp: try approach 1
                                    ┃
                                    ● ef9ea69a 1s temp: try approach 2
                                    
                                    # Check out "temp: try approach 1".
                                    # Note that there is no branch attached to it,
                                    # so we use the commit hash directly.
                                    # (Or we can use the interactive commit selector
                                    # with `git co`.)
                                    $ git checkout 9dba147f
                                    $ git commit -m 'temp: more approach 1'
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ◯ 9775f9a6 10m A
                                    ┣━┓
                                    ┃ ◯ 97b8d332 10m B
                                    ┃ ┃
                                    ┃ ◯ 67fa26af 10m (feature) C
                                    ┣━┓
                                    ┃ ◯ 9dba147f 3m temp: try approach 1
                                    ┃ ┃
                                    ┃ ● 60ad3014 4s temp: more approach 1
                                    ┃
                                    ◯ ef9ea69a 2m temp: try approach 2
                                    
                                    # Settle on approach 1.
                                    # Hide "temp: try approach 2" from the smartlog.
                                    $ git hide ef9ea69a
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ◯ 9775f9a6 11m A
                                    ┣━┓
                                    ┃ ◯ 97b8d332 11m B
                                    ┃ ┃
                                    ┃ ◯ 67fa26af 11m (feature) C
                                    ┃
                                    ◯ 9dba147f 4m temp: try approach 1
                                    ┃
                                    ● 60ad3014 1m temp: more approach 1
                                    
                                    # Squash approach 1 into `main`:
                                    $ git rebase -i main
                                    ...
                                    branchless: This operation abandoned 1 commit!
                                    branchless: Consider running one of the following:
                                    branchless:   - git restack: re-apply the abandoned commits/branches
                                    branchless:     (this is most likely what you want to do)
                                    branchless:   - git smartlog: assess the situation
                                    branchless:   - git hide [<commit>...]: hide the commits from the smartlog
                                    branchless:   - git undo: undo the operation
                                    branchless:   - git config branchless.restack.warnAbandoned false: suppress this message
                                    Successfully rebased and updated detached HEAD.
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┣━┓
                                    ┃ ✕ 9775f9a6 12m (rewritten as 14450270) A
                                    ┃ ┃
                                    ┃ ◯ 97b8d332 12m B
                                    ┃ ┃
                                    ┃ ◯ 67fa26af 12m (feature) C
                                    ┃
                                    ● 14450270 32s A
                                    
                                    # Move B and C and branch "feature" on top of A,
                                    # where they belong
                                    $ git restack
                                    $ git sl
                                    ⋮
                                    ◇ fcba6182 7d (main) Create foo
                                    ┃
                                    ● 14450270 1m A
                                    ┃
                                    ◯ 7d592eae 3s B
                                    ┃
                                    ◯ fec8f0ba 3s (feature) C
                                    

                                    So the value for the above workflow is:

                                    • No thinking about branch names, especially since they’re ephemeral and going to be deleted shortly anyways.
                                      • Some people don’t find this an impediment to their workflow anyways.
                                    • Automatic fix up of descendant commits and branches.
                                      • Works even if there are multiple descendant branches (git rebase moves at most one branch).
                                      • Works even if one of the descendants has multiple children, causing a tree structure.
                                    1. 1

                                      Maybe a dumb question, is there an equivalent to hg heads in git-branchless?

                                      1. 1

                                        Not yet, but it should be pretty easy as part of the task to add revset support (https://github.com/arxanas/git-branchless/issues/175).

                                      2. 1

                                        Thanks for the explanation! This is very different from the git I use. I don’t think I ever had a use case for this sort of workflow, but I can see how people might find it useful. I will keep a bookmark and revisit this again in the future. Maybe it turns out useful.

                                    2. 3

                                      Seems like this workflow tries to avoid interactive rebase (and other tools are recommended to do that in-memory). I… can’t exactly imagine not using it all the time. Especially in situations like maintaining a constantly rebased “patchset” on top of an upstream, occasionally submitting patches to it. In my mind “restack” means “reorder in interactive rebase”, not what the restack tool does.

                                      What does changeset evolution mean btw?

                                      1. 2

                                         It’s not that the workflow tries to avoid interactive rebase, but that it simply doesn’t offer a good replacement for interactive rebase at present. (It should handle in-memory rebasing, rebasing/editing tree structures, and referencing commits without branches.) It’s on the roadmap: https://github.com/arxanas/git-branchless/issues/177

                                        Changeset evolution is this feature from Mercurial: https://www.mercurial-scm.org/wiki/ChangesetEvolution. In short, it tracks the history of a commit/patch as it gets rewritten with commands like git commit --amend or git rebase (or their equivalents in git-branchless). It means that we can automatically recover from situations where descendant commits are “abandoned”:

                                        $ git commit -m A
                                        $ git commit -m B
                                        $ git commit -m C
                                        $ git prev 2  # same as git checkout HEAD^^
                                        $ git commit --amend -m A2
                                        # now commits B and C are based on A instead of A2,
                                        # which is probably not what you intended
                                        $ git restack  # moves B and C onto A2
                                        
                                      2. 1

                                        I’ve been interested in this toolkit since I first came across it but I’m reluctant to set up rust on my laptop to install it. Do you have any enthusiasm for packaging releases yourself?

                                        1. 1

                                          Maybe, I haven’t looked into it too much. What platform are you using? Some kind folks have already packaged it for Nix, if you can use that.

                                          1. 2

                                             macOS for the most part. Mac users as a whole would probably most readily try this out if it were in Homebrew. (I have a bit of a grudge against Homebrew for being slow and pulling in outlandish dependency closures, so I tend to prefer graphical installers or copying things into ~/bin when those are provided as options.)

                                            (Really if I had ever set up Rust for something else on my work computer I’d probably be trying it out as is instead of griping but y’know.)

                                            1. 2

                                              Something like https://github.com/emk/rust-musl-builder makes building static binaries very easy, if you want to go that way.

                                        1. 2

                                          This doesn’t actually solve the hard problem, which is estimating the financial cost of a less-than-optimal implementation.

                                          1. 3

                                            I think the framework is there: you could calculate a probability distribution of financial cost based on the number of distinct issues and their microdefect amounts. Optimally you’d also use a probability distribution for the cost of a single defect instead of just using an average.
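
                                             In symbols (my notation, not the article’s), that is roughly

                                             \mathbb{E}[\text{cost}] \;=\; \sum_i n_i \, \mathbb{E}[c_i]

                                             where n_i is the count of issue type i and c_i is the (uncertain) cost of one such microdefect; the argument above is that you really want the full distribution of \sum_i \sum_{j=1}^{n_i} c_{ij}, not just this expected value.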

                                            For an organization well-versed in risk management this might just work. But without understanding the concept of probabilistic risk I don’t believe the tradeoffs in implementation (and design) can be managed.

                                            The article seems to focus on just the expected value of microdefects. This might be enough for some decisions, but it’s not a good way to conceptualize “technical debt”.

                                            1. 3

                                              One interesting implication is that if we can estimate the costs of different violations, we can estimate the cost-saving of tools that prevent them.

                                              For example, if “if without an else” is $0.01, then a linter that prevents that or a language where conditionals are expressions rather than statements automatically saves you a dollar per 100 conditionals.

                                              1. 2

                                                you could calculate a probability distribution of financial cost based on the number of distinct issues and their microdefect amounts

                                                My point is, we can’t do that because we don’t know what the average cost of a defect is, and we have no way of finding out.

                                                1. 2

                                                   I think we do (certainly I have some internal numbers for some of these things); the thing that we don’t know is the cost distribution of defects. For example, the cost of a security vulnerability that allows arbitrary code execution is significantly higher than the cost of a bug that causes occasional and non-reproducible crashes on 0.01% of installs. A bug that causes non-recoverable data corruption for 0.01% of users is somewhere in the middle. We also don’t have a good way of mapping the probability of any kind of bug to something in the source code at any useful granularity (we can say, for example, that the probability of a critical vulnerability in a C codebase is higher than in a modern C++ one, but that doesn’t help us target the things to fix in the C codebase, and rewriting it entirely is prohibitively expensive in the common case).

                                                  1. 1

                                                    What sorts of things do you have numbers for, if you can share? I have heard of people estimating costs, but only for performance issues when you can map it to machine usage costs pretty easily, so I’d be interested in other examples.

                                                  2. 1

                                                    It’s true we can’t know the distribution or the average exactly. But if you measured the cost of each found defect after it’s fixed, you could make a reasonable statistical model after N=1000 or so. And note that we do know lower and upper bounds for the financial cost of a defect: the cost must typically be between zero and the cost of bankruptcy.

                                                    1. 4

                                                      if you measured the cost of each found defect after it’s fixed, you could make a reasonable statistical model after N=1000 or so

                                                      You are also assuming the hard part. How are you measuring the cost of a defect?

                                                      1. 1

                                                        It depends a lot on the business you are in. For Open Source it is hopeless because you don’t know how many users you even have. My work is in automotive, where we can count the cost for customer defects quite well. Probably better than our engineering costs in general.

                                                        1. 1

                                                          we can count the cost for customer defects quite well

                                                          Are these software defects or hardware defects? As a followup, if they are software defects, are they the sort of defects that would be described as “tech debt” or as outright bugs?

                                                          1. 1

                                                            Yes, the classification is still tricky. Assume we have a defect. We trace it down to a simple one line change in the software and fix it. Customer happy again. They get a price reduction for the hassle. That amount plus the effort invested for debugging and fixing is the cost of the defect.

                                                            Now we need to consider what technical debt could have encouraged writing that bug: Maybe a variable involved violated the naming convention so the bug was missed during code review? Maybe the cyclomatic complexity of the function is too high? Maybe the Doxygen comment was incomplete? Maybe the line was not covered by a unit test? For all such possible causes, you can now adapt the microdefect cost slightly upwards.

                                                            1. 1

                                                               That’s an interesting idea. And then microdefects would work well, because you average out differences (like how much it costs to make a customer happy again) that don’t have much to do with the bug itself.

                                                              Do you have a similar process for bugs that don’t affect customers, or correct but inefficient code implementations?

                                                              1. 1

                                                                You are thinking of those “phew, glad we found that before anyone noticed” incidents, I assume. The cost is only the effort here.

                                                                 We have something similar. Sometimes we find a defect which has already shipped, but apparently neither the customer (OEM) nor the users seem to have noticed. Then there is a risk assessment, where tradeoffs are considered:

                                                                • How many users do we expect to notice it? Mostly depends on how many users there are and how often the symptoms occur.
                                                                 • How severe is the impact? If it is a safety risk, fixing it is mandatory.
                                                                • How much will it cost to fix it? Again, the more users there are the higher the cost.
                                                                • How visible is the fix? If you bring a modern car to the yearly inspection, chances are that quite a few bugfixes are installed to various controllers without you noticing it.

                                                                You can estimate anything but of course the accuracy and precision can get out of hand.

                                              1. 4

                                                A very good article.

                                                 Of course, when you’re just starting out it’s difficult to identify your aptitudes. The article suggests asking other people, but that only works in some environments and possibly only after you’ve studied and used some skill.

                                                My approach has been to try and learn a bit of everything to understand what works well for me. I figure you can spend the first decade of your career trying different things and learning (about yourself as much as technology) and you still have decades left. I’ve spent that decade already, and I certainly know my own skillset and suitable career directions much better than I did five or ten years ago.

                                                How did you go about finding what’s good for you?

                                                1. 6

                                                   I run a normal-looking news website. Our homepage looks like any other homepage with a bunch of images in a river. WebpageTest says that it loads 287KB of images. I think part of that is because the images are marked as lazy loading, so it’s probably not loading the bottom of the river, but still. You can run a normal-looking website with decent performance.

                                                  The problem is that Google’s DoubleClick code is really bad and it lets advertisers run arbitrary JavaScript, which is even worse. Once you have those on your page, performance is dead, and so no one even tries anymore. Just don’t use DoubleClick and everything else is easy.

                                                  1. 3

                                                    My measurements (Chrome devtools) show 407kB (transferred, >500kB size) of images on page load in a normal desktop window size. By the time you scroll to the footer it’s 927kB. That’s still great by modern web standards. For the front page of a news site like that I think you’ve found a nice balance and I applaud your work!

                                                     But on pages that only really have text I think <100kB total data is a reasonable and realistic target. Even that takes time to load on a slow connection. You’re right about the problem: there’s always some reason the game is already lost. And I’ve myself used the arguments “but X already is so big a couple hundred kB means nothing” and “this popular site loads 10MB of assets, at least we’re better than that”. But we as an industry could certainly do much, much better if we just really tried.

                                                  1. 5

                                                    This is all very true. The big question is: why translate at all? There are different reasons that directly affect expectations about quality.

                                                    I live in Finland, which is an officially bilingual country. Finnish is the most common language, but there’s a ~10% Swedish-speaking minority. However, most people understand English, and most Swedish speakers understand Finnish.

                                                    I’ve been in a project where we had a requirement of a Swedish translation, but no real resources to do it. So someone who doesn’t know Swedish would just kind of guess something. And AFAICT no one ever complained.

                                                    I assume that most translation projects are similar. As a Finnish native I abhor Finnish translations of technology, because they suck 95% of the time. Even tech giants like Microsoft and Apple have bad UX in Finnish at times, and smaller companies seem to have no chance unless they’re from Finland.

                                                    1. 3

                                                      As a Finnish native I abhor Finnish translations of technology, because they suck 95% of the time.

                                                       I set my computer’s locale to Japanese for many years as a second-language-learning thing, but it was really only useful for Apple and other big-company apps. Small developers usually had no translations, and OSS usually had translations that were incomplete and/or laughably bad even to me, a non-native speaker. It’s a tough thing though, because unlike in Finland, most Japanese people don’t really speak English well enough to just switch the computer to English mode and ignore the bad localization.

                                                      1. 3

                                                        As a Finnish native I abhor Finnish translations of technology, because they suck 95% of the time.

                                                        This was also mentioned in the article and to me this is very sad. What good are computers if they can’t serve their users in their own language?

                                                         I’m curious - do you find much value in good Finnish translations of software? Or do you think this is not a good use of time, and someone who wants to serve Finns would be better off making a more polished English product?

                                                        1. 2

                                                          What good are computers if they can’t serve their users in their own language?

                                                          An upside to having a “common” language is that the searchability of any string output by the program increases. Of course, if that is at the cost of the user understanding the text at all, then it’s no good.

                                                          1. 1

                                                            Depends on the product and target audience. Maybe if you target only young educated people it doesn’t make a big difference. I think most Finns would benefit from a well-made UI in Finnish, but most also realize translations tend to be bad.

                                                          2. 1

                                                            I assume that most translation projects are similar. As a Finnish native I abhor Finnish translations of technology, because they suck 95% of the time. Even tech giants like Microsoft and Apple have bad UX in Finnish at times, and smaller companies seem to have no chance unless they’re from Finland.

                                                             I’ve only really noticed this attitude from Europeans who can speak English - usually the ones fluent enough to post on English-language communities like Lobsters. For example, most Japanese users I’ve seen prefer using… Japanese, because they don’t speak or can barely read English. (I have to imagine the translation quality is a bit better for Japanese than for a smaller European language, both because it’s a bigger market and the users have an even more immediate need for it.)

                                                            As with the other commenter, I think it’s better if computers can meet people in the language they’re familiar with, not what language the hackers prefer.

                                                            1. 1

                                                              You obviously hear only the attitudes of people whose language you can understand. I’d guess Japanese people abhor bad translations even more than Finns do, because many of them don’t have the option of reverting to English.

                                                              Maybe this is also a reason Japanese people prefer products made in Japan (I’ve heard).

                                                              I totally agree that technology should be approachable and that includes localization. But that never worked in Finland. Those Finns that don’t know English at all tend to also be technologically illiterate. My grandparents were all smart people even if they were uneducated. They did learn to use dumbphones (Nokia is Finnish, remember) but smartphones and computers always remained a mystery to them.

                                                              1. 1

                                                                Even in many IT jobs a lot of my coworkers just use the Dutch versions of software. I’m often the odd one out with everything set to English.

                                                                 A lot of that is due to the spectacularly horrendously bad Dutch translations that existed in many open source projects when I started using BSD and Linux in the early 00s, if there were any translations at all. If you just used Windows it was a lot better, because the Dutch version of Windows was always fairly decent.

                                                                A lot has improved since then, but I’m used to English now. It’s like watching or reading something in a language you’re not used to: it just seems all wrong. I read some Asterix & Obelix comics in English some years ago; they’re not bad translations at all and the Dutch version is also just a translated version (from the original French), but I’m used to the Dutch version from my childhood so English seems off 🤷 Similarly, I grew up on the English version of Animaniacs and the Dutch version they have now isn’t bad, but just seems weird to me (I don’t envy having to translate that one by the way).

                                                            1. 7

                                                              It’s interesting that this article describes spatial and temporal redundancy as different things. The best description of video encoding that I ever heard suggested that you think of a video stream as a 3D volumetric image. Adjacent voxels are adjacent voxels, whether they’re adjacent in the x, y, or z axis. It would be weird to treat x and y differently, why treat z differently? A video CODEC is just a 3D image CODEC that’s optimised for extracting planar samples. Modern video CODECs are much more explicit about this, especially those that are not optimised for realtime live encoding and so can use the z dimension in both directions to optimise the encoding of any slice along that axis.

                                                              1. 3

                                                                That’s a really interesting point of view! As I understand it, JPEG is basically a 2D discrete cosine transformation (DCT) for 8x8 blocks. So would a 3D DCT on 8x8x8 blocks make a reasonable video codec?
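
                                                                 For what it’s worth, the DCT is separable, so the 3D version of JPEG’s 8x8 transform on an 8x8x8 block is just one more sum and one more cosine factor along time (a sketch of the math, not any particular codec):

                                                                 X_{u,v,w} = \alpha(u)\,\alpha(v)\,\alpha(w) \sum_{x=0}^{7} \sum_{y=0}^{7} \sum_{t=0}^{7} f_{x,y,t} \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16} \cos\frac{(2t+1)w\pi}{16}

                                                                 with \alpha(0) = \sqrt{1/8} and \alpha(k) = \sqrt{2/8} otherwise.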

                                                                1. 3

                                                                  [ Disclaimer: This is from the perspective of someone whose knowledge of signal processing is very weak and prefers to live in a world containing only integers ]

                                                                  For some value of reasonable. JPEG does very badly on sharp discontinuities (because a square wave requires an infinite number of cosines to represent it exactly). In the z dimension, any scene change would show up as a sharp discontinuity and so you’d end up with the same sort of artefacts on scene changes that you get for sharp lines in JPEG. Assuming DCT works in three dimensions how I imagine that it would.

                                                                1. 12

                                                                  This is a very useful thing to do, although magit calls it “instant fixup” (cF) - that’s how I do it.
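
                                                                   For anyone doing this with plain git instead of magit, the underlying flow is roughly (a sketch; <sha> stands for the commit you want to amend):

                                                                   git add -p                         # stage just the fix
                                                                   git commit --fixup=<sha>           # record a "fixup! ..." commit
                                                                   git rebase -i --autosquash <sha>^  # squash it into place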

                                                                  1. 11

                                                                    Similarly I’m a big fan of git-revise for this and similar tasks.

                                                                    1. 9

                                                                      git-absorb too.
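
                                                                       Its trick is guessing the fixup targets from the staged hunks, so the flow is roughly (a sketch; <base> is wherever your branch forked from):

                                                                       git add -p    # stage the fixes
                                                                       git absorb    # create fixup! commits aimed at the commits that last touched those lines
                                                                       git rebase -i --autosquash <base>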

                                                                      1. 2

                                                                        git-revise with Zsh completions is awesome. You just press tab and choose the commit to revise from a nice menu. Things that would take three different git invocations and probably some head scratching are now just a couple of keypresses and really easy to get right every time.

                                                                    1. 14

                                                                         In metric units, 170°F is about 77°C, which is literally sauna temperature. Wow.

                                                                      1. 1

                                                                           A sauna is much more dangerous. The air in a 150 Celsius oven is pretty hot, but 100 degree steam coming out of a pot is going to hurt way more. Temperature isn’t the only relevant factor when it comes to cooking…

                                                                        1. 4

                                                                          You make it sound like a sauna is dangerous. It’s not, because you get out when you get too hot.

                                                                             In Finland there were fewer than 2 sauna deaths per 100,000 inhabitants per year in the 1990s. That was a time when, on average, Finns spent 9 minutes in a sauna twice a week. That’s one death per 780,000 hours spent in the sauna. And half of that is because people binge drink and go to the sauna.

                                                                          I don’t have statistics for deaths per hour a child spends in a hot car, but the total hours can’t add up to much, considering reasonable people don’t leave children in a hot car at all, and yet there are dozens of deaths every year — so the per-hour risk must be far higher than in a sauna.

                                                                          1. 2

                                                                            In these comparisons I think the deciding factor is the ability/inability to leave the hostile environment…

                                                                          2. 1

                                                                            Saunas are typically dry air (although you can sometimes pour water onto hot stones). There are 100 °C saunas which you can sit in for several minutes because the heat transfer is so low. But a 100 °C steam room would instantly burn you (and so doesn’t exist).

                                                                            1. 1

                                                                              Yeah humidity plays an important role as well. Sadly the post doesn’t show the record high with humidity info, but maybe @JeremyMorgan can enlighten us :)

                                                                              1. 2

                                                                                It looks like the min [1] humidity was 7.8% from one of the pictures.

                                                                                [1]: Relative humidity and temperature are inversely related: as the temperature rises, relative humidity falls, because warmer air can “hold” more water vapor before saturating. Similarly, when the temperature drops, relative humidity rises, which is why you get dew in the early morning, or fog.
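
                                                                                If you want to see roughly why, here’s a sketch using the Magnus approximation for saturation vapor pressure (standard textbook constants and an assumed fixed dew point, nothing measured from the post):

                                                                                ```python
                                                                                import math

                                                                                def saturation_vapor_pressure_hpa(t_celsius):
                                                                                    # Magnus approximation, one common set of constants.
                                                                                    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

                                                                                # Keep the actual water content fixed (assume a dew point of 15 C)
                                                                                # and warm the air up: relative humidity plummets.
                                                                                moisture = saturation_vapor_pressure_hpa(15.0)
                                                                                for temp in (20, 40, 60, 75):
                                                                                    rh = 100 * moisture / saturation_vapor_pressure_hpa(temp)
                                                                                    print(f"{temp} C -> {rh:.0f}% relative humidity")
                                                                                ```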

                                                                                1. 1

                                                                                  @bfielder

                                                                                  Humidity levels (in %):

                                                                                  Outside

                                                                                  • Min: 13.6
                                                                                  • Max: 89.3

                                                                                  In the car

                                                                                  • Min: 6.8
                                                                                  • Max: 55.3

                                                                                  Seems like some wild fluctuations. It was an unusual weather event for sure.

                                                                            1. 1

                                                                              This is a good base to build a community on, as is Gemini, as is indieweb. We all can bikeshed but most of us can’t build communities from scratch. So instead of any technical opinions I might have I’ll just say best of luck. We have room for multiple communities like this, and it’s all a great change from the commercial web.

                                                                              1. 3

                                                                                I’ve been using GMail (web+mobile) for a long, long time. But I seem to need plaintext email more and more (i.e. git send-email and discussing patches on mailing lists), and I just recently started pulling my messages into a Notmuch database.

                                                                                Turns out you don’t really need a MUA at all with Notmuch. When I want to deal with plaintext mail, I just pull it with notmuch new and read it with notmuch show (I sent a patch for --format=pretty to make this better). If I want to reply: notmuch reply [msg] > msg, then edit with vim and cat msg | msmtp -t. And of course git send-email is awesome. I’ve used mutt and alpine in the past, and I like this simple raw approach better because there’s no cognitive load from a complex TUI with lots of configuration.
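
                                                                                To show how little glue that workflow needs, here’s a hypothetical bit of Python around the same commands (notmuch and msmtp are the real tools; the wrapper and the message id are made up):

                                                                                ```python
                                                                                # Hypothetical glue around the same CLI tools: notmuch builds the
                                                                                # reply template, $EDITOR edits it, msmtp sends it.
                                                                                import os
                                                                                import subprocess
                                                                                import tempfile

                                                                                def reply(message_id):
                                                                                    template = subprocess.run(
                                                                                        ["notmuch", "reply", f"id:{message_id}"],
                                                                                        capture_output=True, text=True, check=True,
                                                                                    ).stdout
                                                                                    with tempfile.NamedTemporaryFile("w", suffix=".eml", delete=False) as f:
                                                                                        f.write(template)
                                                                                        draft = f.name
                                                                                    subprocess.run([os.environ.get("EDITOR", "vim"), draft], check=True)
                                                                                    with open(draft) as f:
                                                                                        subprocess.run(["msmtp", "-t"], stdin=f, check=True)

                                                                                # reply("some-message-id@example.org")   # made-up message id
                                                                                ```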

                                                                                So what about HTML mail? I’m afraid muggles hate plaintext mail wrapped at 72 characters. I know people have setups that can generate HTML from markdown and whatnot, but right now I’m not sure it’s worth the effort. So I use different clients with different people. If I get rid of GMail at some point I’ll probably switch to another webmail for HTML mail. You get rid of a huge amount of complexity by just using the simplest tool for the purpose. For plaintext it’s pretty much vim+msmtp, for HTML it’s a webmail.

                                                                                1. 14

                                                                                  It’s really tempting to wonder why the author didn’t do X or Y. But I really, really like that they wrote about the mistakes they made. We all make mistakes but most of us are very shy about admitting it.

                                                                                  1. 4

                                                                                    From my perspective, it is not an improvement over the established Gopher protocol.

                                                                                    It is even a downgrade when it comes to supporting low-spec (and retro) hardware, where Gopher was and continues to be fine.

                                                                                    1. 9

                                                                                      This is my impression as well; as a Gopher successor it makes little sense because Gopher enthusiasts like something they can implement at a low level and without TLS libraries they don’t have on very restricted or old systems. For others, it’s far more restricted than even HTML 1.0 (you know, half the fun of hypermedia is inline links) with little benefit other than posturing.

                                                                                      1. 4

                                                                                        In defense of Gemini, it doesn’t pretend that it can replace the web, or Gopher, so I wouldn’t tag it as a Gopher successor; I think that’s even stated among the project goals. I can only understand this criticism in the context of someone who would be forced to choose between HTML, Gopher, or Gemini, but are we?

                                                                                        1. 3

                                                                                          Are there Gopher projects that don’t require a TCP stack? I do understand the point about TLS, but I feel the same goes for TCP.

                                                                                          I’d love to one day write a client for any social protocol that runs on bare metal but that seems horribly complicated even for Gopher.

                                                                                          1. 2

                                                                                            It’s certainly possible to write a gopher server without using TCP, at least on Unix. To do so, just write the server using stdin and stdout and have inetd handle the network connections, but aside from that, I don’t really understand what you actually want.
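
                                                                                            For example, a toy inetd-style gopher handler really is just stdin and stdout. A rough sketch (the document root and file layout are made up), meant to be launched by inetd rather than listening on a socket itself:

                                                                                            ```python
                                                                                            #!/usr/bin/env python3
                                                                                            # Toy inetd-style gopher handler: inetd accepts the TCP connection
                                                                                            # and gives it to us as stdin/stdout, so this never touches sockets.
                                                                                            import sys
                                                                                            from pathlib import Path

                                                                                            DOCROOT = Path("/srv/gopher")                 # made-up document root

                                                                                            selector = sys.stdin.readline().strip("\r\n") or "index"
                                                                                            target = (DOCROOT / selector.lstrip("/")).resolve()

                                                                                            if DOCROOT in target.parents:
                                                                                                try:
                                                                                                    sys.stdout.write(target.read_text())
                                                                                                except OSError:
                                                                                                    sys.stdout.write("3Not found\terror\t(NULL)\t0\r\n")
                                                                                            else:
                                                                                                sys.stdout.write("3Bad selector\terror\t(NULL)\t0\r\n")
                                                                                            sys.stdout.write(".\r\n")                     # end-of-response marker
                                                                                            ```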

                                                                                            1. 2

                                                                                              So if someone wrote tlsinetd, you could also implement a Gemini server just using stdin and stdout.

                                                                                              I like to understand things well enough that I can implement them. You can do a lot with a microcontroller without an operating system with relatively minor effort. Say, connect a small monochrome display and run graphics demos. But networking is really, really complicated. Very few people have implemented TCP.

                                                                                              So the argument that TLS is too hard to work with basically just means that we have such good, easy-to-use TCP implementations available that we don’t even realize TCP is complicated.
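
                                                                                              To make the tlsinetd idea concrete, here’s a rough sketch of what such a wrapper could look like (the handler path, cert paths, and one-request-per-connection handling are all my own simplifications):

                                                                                              ```python
                                                                                              # Hypothetical "tlsinetd": terminate TLS, feed the decrypted request
                                                                                              # to a stdin/stdout handler, and send its output back.
                                                                                              import socket
                                                                                              import ssl
                                                                                              import subprocess

                                                                                              HANDLER = ["/usr/local/bin/gemini-handler"]       # made-up handler path
                                                                                              ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
                                                                                              ctx.load_cert_chain("/etc/ssl/example.crt", "/etc/ssl/example.key")  # made-up

                                                                                              with socket.create_server(("", 1965)) as srv:
                                                                                                  while True:
                                                                                                      conn, _addr = srv.accept()
                                                                                                      with ctx.wrap_socket(conn, server_side=True) as tls:
                                                                                                          request = tls.recv(2048)      # assume one short request line
                                                                                                          result = subprocess.run(HANDLER, input=request,
                                                                                                                                  stdout=subprocess.PIPE)
                                                                                                          tls.sendall(result.stdout)
                                                                                              ```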

                                                                                              1. 3

                                                                                                So if someone wrote tlsinetd, you could also implement a Gemini server just using stdin and stdout.

                                                                                                And some have done just that. I don’t recall exact details, but that is very possible.

                                                                                                I like to understand things well enough that I can implement them.

                                                                                                I can relate. I wrote my own Lua bindings instead of using pre-existing modules (mainly to learn Lua, but also, most of those modules don’t meet my expectations). But at some point, it becomes prudent to use preexisting work. IP is pretty easy; it’s TCP that’s fairly involved.

                                                                                                So the argument that TLS is too hard to work with basically just means that we have such good, easy-to-use TCP implementations available that we don’t even realize TCP is complicated.

                                                                                                And there is a very good, easy-to-use TLS implementation, libretls. It’s a shame it isn’t more popular.

                                                                                                1. 2

                                                                                                  So if someone wrote tlsinetd

                                                                                                  There’s an inetd mode in stunnel

                                                                                            2. 2

                                                                                              And there is a subset of gopher enthusiasts who want to add TLS to gopher. There are some gopher sites right now that support TLS, and there are gopher clients that attempt to do TLS to gopher servers (I know, because I run a gopher server and see the requests). If you add TLS to gopher, you get something close to Gemini. At least with Gemini, you get better error reporting.
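
                                                                                              To illustrate the error-reporting point: every Gemini response starts with a two-digit status code and a meta field, so even a trivial client can tell success from redirects and errors. A minimal sketch of a request (certificate checking is deliberately skipped, since self-signed certs are the norm):

                                                                                              ```python
                                                                                              # Minimal Gemini request, to show the status line the protocol gives you.
                                                                                              import socket
                                                                                              import ssl

                                                                                              def gemini_fetch(host, path="/", port=1965):
                                                                                                  ctx = ssl.create_default_context()
                                                                                                  ctx.check_hostname = False        # Gemini usually relies on TOFU
                                                                                                  ctx.verify_mode = ssl.CERT_NONE   # self-signed certs are the norm
                                                                                                  with socket.create_connection((host, port)) as sock:
                                                                                                      with ctx.wrap_socket(sock, server_hostname=host) as tls:
                                                                                                          tls.sendall(f"gemini://{host}{path}\r\n".encode())
                                                                                                          data = b""
                                                                                                          while chunk := tls.recv(4096):
                                                                                                              data += chunk
                                                                                                  header, _, body = data.partition(b"\r\n")
                                                                                                  status, _, meta = header.decode().partition(" ")
                                                                                                  return status, meta, body         # e.g. ("20", "text/gemini", ...)

                                                                                              # print(gemini_fetch("gemini.circumlunar.space")[:2])
                                                                                              ```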

                                                                                          1. 10

                                                                                            As far as the “useless tree” or “resistance-in-place” concept goes, doesn’t gopher achieve that goal much better? It’s been around for longer and will survive longer still. Plus, with gopher, you have history on your side: you’re not just putting on a hair shirt that was made yesterday, it’s an almost 30-year-old hair shirt!

                                                                                            1. 10

                                                                                              Gopher is what I always think of when I see this Gemini spam on here.

                                                                                              For a technology that wants to be some weird niche project that barely anyone used, they sure do try to convince everyone else to use it.

                                                                                              1. 9

                                                                                                Gemini came about partly as a response to adding TLS to gopher [1] (at the protocol layer), partly as a response to the ‘i’ selector type [2] (in the gopher directory pages), and partly to the rise of UTF-8 (gopher, as documented in RFC-1436, only specifies ISO-8859-1 and is otherwise silent on character encoding issues). Personally, I view Gemini as a modern take on gopher, while most (NOTE: not all) of the Gemini community appear to view it as a watered-down, neutered web (my own view on the community). It’s amazing how many people want [3] to shoehorn features from the web into Gemini.

                                                                                                [1] Which isn’t that straightforward, and it’s interesting to note I extensively quote Solderpunk, the creator of Gemini on that page.

                                                                                                [2] A non-resource selector type meant only to display text on a gopher index page. Some people in gopher reject it, as it’s not documented in RFC-1436 and is thus against gopher itself.

                                                                                                [3] Edit: changed ‘try’ to ‘want’
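
                                                                                                For anyone who hasn’t seen gopher menus: each line is a type character plus tab-separated fields, and the ‘i’ convention in [2] just abuses that with dummy selector/host/port values. A rough sketch (the dummy values follow common practice rather than RFC-1436):

                                                                                                ```python
                                                                                                # Sketch of gopher menu lines, including the contentious 'i' (info) type.
                                                                                                # Field order: type+display, selector, host, port, terminated by CRLF.
                                                                                                def menu_line(item_type, display, selector, host, port):
                                                                                                    return f"{item_type}{display}\t{selector}\t{host}\t{port}\r\n"

                                                                                                menu = [
                                                                                                    # 'i' lines carry no resource; selector/host/port are dummy values.
                                                                                                    menu_line("i", "Welcome to my gopher hole", "fake", "example.org", 0),
                                                                                                    menu_line("0", "About this server", "/about.txt", "example.org", 70),
                                                                                                    menu_line("1", "Phlog", "/phlog", "example.org", 70),
                                                                                                ]
                                                                                                print("".join(menu) + ".\r\n")        # menus end with a lone "."
                                                                                                ```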

                                                                                                1. 1

                                                                                                  There are thousands of Gemini users. AFAICT less than a dozen are actively trying to advertise it on the web.

                                                                                              1. 20

                                                                                                Gemini’s obscurity and lack of utility means that there are no analytics, no metrics, no ways to go viral, to monetize people’s attention, build a career or even a minimally-functional web platform.

                                                                                                It’s just not worth it. If I want “no analytics, no metrics”, etc, then I’d install adblock, avoid Twitter and Facebook, and disregard SEO-optimized blogspam, which I do. I don’t want to be forced to use an extraordinarily dumbed-down markup language for that. [0] I still don’t understand why the author thinks that leaving out tables, text styling, images, and forms was necessary in order to avoid Gemini becoming another corporate-monetized cesspool.

                                                                                                [0]: And yes, I could use Markdown with Gemini… but then that kind of misses the point of using Gemini in the first place.

                                                                                                1. 8

                                                                                                  Yeah, I guess I don’t understand how resistance-in-place is referenced in the article. If I limited myself to Gemini, I would be able to consume only a small subset of content. That’s fine. But why is a whole new protocol (and the corresponding new software) required?

                                                                                                  Why not only consume parts of the web that aren’t gross? In fact, someone could create a standard subset of HTML and an index that only lists pages that follow it. The most serious adherents could even use simpler web browsers to access it (but it would still work fine for everyone else).

                                                                                                  Resistance-in-place is about keeping oneself from being appropriated. We don’t need a protocol that itself cannot be appropriated, or one that forcibly prevents use that would be appropriative. We just need individuals who wish not to be appropriated (as best they can). But the web is flexible (in fact, that might be its biggest problem); it can already be used in ways that defy appropriation.

                                                                                                  Edit: clarity

                                                                                                  1. 8

                                                                                                    But why is a whole new protocol (and the corresponding new software) required?

                                                                                                    It’s not! There’s all sorts of opportunities for ‘resistance-in-place’ on the web (most of which I don’t do) – opting out of social media, opting out of Google, creating a plain-HTML website with no analytics, etc. Gemini is just a more radical opting out than that, but I consider the people who keep RSS and HTML blogging alive online, or self-host git, or still use mailing lists, etc., to be doing the same sort of thing.

                                                                                                    Section 2.5 of the FAQ covers the “subset of HTML” idea https://gemini.circumlunar.space/docs/faq.gmi

                                                                                                    1. 1

                                                                                                      Why not only consume parts of the web that aren’t gross?

                                                                                                      I’d love to hear everyday thoughts from ordinary people around the world (that I have no connection to). That’s a great way to understand different cultures a bit more. How do I do that on the web?

                                                                                                      On the web, one issue is that everything is optimized for engagement and whatnot. It’s very difficult to find unpopular content. Another issue is that the content on the web is basically eternal, so most people are smart enough not to share their thoughts.

                                                                                                      Gemini is no silver bullet, but its uselessness seems to have its uses at the moment.

                                                                                                    2. 6

                                                                                                      I still don’t understand why the author thinks that leaving out tables, text styling, images, and forms was necessary in order to avoid Gemini becoming another corporate-monetized cesspool.

                                                                                                      (author here) Everyone wants something added to Gemini, but everyone disagrees about what that something is. Personally, I think it should be in-line images and footnotes, but if Gemini became more complex, it would lose many of the traits that make it interesting. Gemini is a technology that invites us not to try to improve or optimize it, but to accept it as it is and work around its limitations – it is intentionally austere, and this is a feature, not a bug.

                                                                                                    1. 10

                                                                                                      Woah. I’d have expected higher standards from conversations in a kindergarten. Laughable. Missing parentheses are a problem? I can’t help but wonder what they would say if they stumbled upon features of the language beyond missing parentheses, like traits, metaprogramming, typestate-driven design, hygienic macros, etc. They have no idea about the language and are completely uninterested. How can one even have a reasonable discussion after a joke of a statement like this? Regardless of what kind of problems Rust is causing or solving, this pathetic, annoying, dismissive, passive-aggressive tone will make it very hard to progress. Even with much better driver/kernel module examples and APIs.

                                                                                                      1. 4

                                                                                                        There’s quite a bit of an impedance mismatch at the community level, and it seeps through a lot in these scenarios. “Quality discourse” like this will keep happening for a while, because both communities have a lot of work to do in the “letting go of the smug dismissiveness” department. See, for example, this unofficial but pretty popular introductory document on Rust’s ownership and memory management model:

                                                                                                        Mumble mumble kernel embedded something something intrusive.

                                                                                                        It’s niche. You’re talking about a situation where you’re not even using your language’s runtime. Is that not a red flag that you’re doing something strange?

                                                                                                        It’s also wildly unsafe.

                                                                                                        But sure. Build your awesome zero-allocation lists on the stack.

                                                                                                        It’s a story as old as comp.lang.lisp: people who do meaningful work in a language and are otherwise completely uninvolved in language fanboy flamewars get flak because of the more visible fan discourse, which is exactly what you expect from fan discourse.

                                                                                                        1. 1

                                                                                                          Indeed. Many in the kernel community must vividly remember the history of people wanting C++ in Linux. And the swarm of Rust fanboys in 2021 must look very similar to C++ fanboys a couple of decades ago.

                                                                                                          Someone with an intimate knowledge of each of the languages can probably choose the right tool for the job. Meanwhile, someone who only knows one language well will always prefer it – which is not unreasonable!

                                                                                                          My own limited experience with no_std Rust tells me it’s not nearly as easy to use as standard Rust, while my own limited experience with Linux tells me that not making mistakes in C kernel code is very difficult for mere mortals. I really hope Rust in Linux will be a great success, but only time will tell.

                                                                                                        2. 3

                                                                                                          Yeah, seriously. It sounded like petulant whining. From someone who’s programmed in one language forever, and hates the idea of change.