Threads for WilhelmVonWeiner

    1. 7

      A particularly nasty one to start with today!

      1. 6

        I apparently lucked into doing it the way that doesn’t run into any of the problems other people had, on a whim. (Spoiler below, stop if you don’t want them, but it’s day 1 so…)

        For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.

        For part 2, I kept the same structure rather than rewrite, which meant that I matched the reversed string against /\d|eno|owt|eerht|ruof|evif|xis|neves|thgie|enin/, and then re-reversed the capture before passing it through a string-to-num map.

        And it turns out that that totally sidesteps the problem of “wait, how am I supposed to get 21 out of xxtwonexx?”
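
        A minimal Python sketch of the reverse-matching trick described above (the word list and digit map are my own reconstruction, not the parent’s actual code):

```python
import re

WORDS = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine"]
# Forward pattern matches digits or spelled-out numbers; the reverse
# pattern matches the same words spelled backwards.
FWD = re.compile(r"\d|" + "|".join(WORDS))
REV = re.compile(r"\d|" + "|".join(w[::-1] for w in WORDS))
TO_NUM = {w: str(i) for i, w in enumerate(WORDS, start=1)}

def calibration(line):
    first = FWD.search(line).group()
    # Search the reversed line, then re-reverse the capture before mapping.
    last = REV.search(line[::-1]).group()[::-1]
    return int(TO_NUM.get(first, first) + TO_NUM.get(last, last))

print(calibration("xxtwonexx"))  # 21: "two" forward, "one" backward
```

        Because the backward scan never sees the forward spellings, overlaps like “twone” are a non-issue.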

        1. 2

          I just used a regular expression, with the leading group as optional. Means you always pick up the trailing “one” in “twone” first.

          1. 3

            I looked for non-overlapping matches and got the right solution. Maybe my input never hit this “twone” edge case by luck!

        2. 1

          For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.

          I took a similar approach. I’m using C++, so the natural solution was to use reverse iterators .rbegin() and .rend() that iterate through the elements of a container in reverse order. Rather than use a regex—which seemed like overkill—for part two, I just had an array of digit names I looped through and performed the appropriate search and chose the earliest one:

               for (int i = 0; i < 10; i++) {
                  auto it = std::search(line.begin(), line.end(), digit_names[i].begin(), digit_names[i].end());
                  if (it <= first_digit_name_iter) {
                      first_digit_name_iter = it;
                  }
                  // ...
               }
          

          And in reverse:

              for (int i = 0; i < 10; i++) {
                  auto it = std::search(line.rbegin(), line.rend(), digit_names[i].rbegin(), digit_names[i].rend());
                  if (it <= last_digit_name_iter) {
                      last_digit_name_iter = it;
                  }
                  // ...
              }
          
          1. 2

            My figuring on what is and isn’t “overkill” is: AoC is ranked by when you submit your solution, so that’s time to write the code plus time to run it. If something is really the wrong tool, the challenge will prove it by making your solution take an hour, or a terabyte of RAM, to run. But if I’m using a language where regexes are “right there” and they make my solution take 100ms instead of 10ms, I’m not bothered.

            1. 2

              I like AOC because everyone can have their own goal! I’m impressed by people who can chase the leaderboard. I always personally aim for the lowest possible latency. Managed to get both parts today in under 120 microseconds including file reading and parsing.

      2. 5

        A part of me wonders whether the creator went out of his way to guard against common LLM usage.

        1. 6

          Maybe a little bit, but it’s a recurring theme in AoC that you have to implement the spec as written, but not the spec as you think it means on a first read.

        2. 1

          I can barely read the trite stuff about Elves as it is and I habitually skim all the text. I think that might just be enough obfuscation against LLMs.

          1. 4

            I think there was a rash of solutions in the early days of Dec 2022 where people were oohing and aahing over that current generation of LLMs solving the problems instantly.

            It died down quite a bit as the difficulty ramped up.

            1. 1

              Oh yeah, I got one very tedious bit of slice manipulation handed to me by copilot but for the rest it’s been mostly saving me typing debug output and the likes.

            2. 1

              Guilty as charged. I messed around with ChatGPT on the first few problems last year. That was right after it came out, and it was pretty amazing how fast it could come up with a (typically slightly buggy) solution.

      3. 4

        SPOILER…

        The difficulty, IMO, is that the problematic lines aren’t in the sample.

        I did overlapping regexp matches. It was easy, once I caught why my first attempt didn’t work. Another solution would be to just do a search with indexOf and lastIndexOf, for each expected words, but you have to be careful to sort the results.
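
        For the curious, one way to get overlapping matches in Python is a zero-width lookahead with a capture group (a sketch; the parent doesn’t say which language or mechanism they used):

```python
import re

# The lookahead consumes no characters, so the scan advances one position
# at a time and overlapping candidates like "twone" are all captured.
pattern = re.compile(r"(?=(one|two|three|four|five|six|seven|eight|nine|\d))")
print(pattern.findall("xtwone3"))  # ['two', 'one', '3']
```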

        1. 2

          There’s a subtle hint, because the first sample does include a case where there’s only one digit (the “first” and “last” match are the same, or you could say they overlap completely). When you get the part 2 spec you have an opportunity to ask yourself “hmm, what changes about overlaps now that we’re matching strings of more than one character?”. Or at least it gives you a good first place to look when things go wrong.

          Apparently some people tried to solve part 2 using substitution (replace words with digits and then feed that into the solution for part 1), which also suffers from problems with overlaps, but in a way that’s harder to dig yourself out of.

        2. 2

          Yes. My implementation passed all the sample tests, but it returns an incorrect value. Not easy at all.

        3. 2

          Yeah, it’s pretty nasty for a day 1.

      4. 3

        I feel like it’s only nasty if people approach it trying to fit every problem into a regex shaped hole

        The way i wrote it was pretty simple

        For a given input string:

        1. Check if the first character is a digit
        2. Check if the string starts with a number word
        3. If neither of these checks holds true, remove the first character, goto 1

        And for the last character, you just do it in reverse, with the last character and ends with

        Sure it’s probably less performant but it still only took a fraction of a second even with an interpreted language
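
        The three steps above can be sketched in Python (the word table is my own assumption, since the comment doesn’t name a language):

```python
WORDS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
         "six": "6", "seven": "7", "eight": "8", "nine": "9"}

def first_digit(s):
    while s:
        if s[0].isdigit():              # 1. first character is a digit
            return s[0]
        for word, digit in WORDS.items():
            if s.startswith(word):      # 2. string starts with a number word
                return digit
        s = s[1:]                       # 3. neither: drop the first character
    return None

def last_digit(s):
    # Same loop in reverse: last character and endswith.
    while s:
        if s[-1].isdigit():
            return s[-1]
        for word, digit in WORDS.items():
            if s.endswith(word):
                return digit
        s = s[:-1]
    return None

print(first_digit("xxtwonexx"), last_digit("xxtwonexx"))  # 2 1
```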

        1. 2

          I didn’t touch a regex. There’s a funny edge case that’s not in the text or in the sample input.

      5. 1

        I had a very rough time with part 2.

        My solution (which wasn’t using regex) passed all of the sample inputs and some other inputs I had come up with, but failed to produce the correct solution. I was at the point of going through the input file line-by-line in search of what I presumed to be an edge case I wasn’t handling.

        Thankfully I had my friend run my input against his program and show me the results for each line, and that helped me figure out that I had misunderstood how the replacements worked when looking for the last digit.

        Once I actually understood the problem it was quite simple to write a correct solution, but it took a while to get there.

    2. 5

      What language are you all using this year? I kinda wanna do it in a language I’m not familiar with and that’s different enough from what I know, but I don’t have a good pick yet.

      1. 10

        If you want to try https://www.roc-lang.org there are several other people doing it in Roc this year - they’re discussing their solutions in https://roc.zulipchat.com/#narrow/stream/358903-Advent-of-Code/topic/2023.20Day.201

      2. 8

        Factor!

        1. 2

          I did want to try Factor, maybe I will!

      3. 6
      4. 6

        Competing for the leaderboards is stressful and ruins my sleep schedule, but I think I’ll try a few problems at my own pace with Lil, a multi-paradigm scripting language I’ve been working on for a few years. Aesthetically, it’s very similar to Lua, and semantically it’s a nice blend of APL-ish and functional features.

        1. 3

          A friend of mine is doing AoC in Lil as well (or at least is planning to).

          I may give it a shot as well.

          1. 3

            Nice to see more people using Lil for AOC. It’s improved a lot since I used it last year.

        2. 1

          Is it possible to see the problems without signing up? I looked at AoC a few years ago and you needed to give them an email address and other personal information, which makes sense if you’re competing for leaderboards, but I didn’t want to trigger my hyper-competitive tendencies and wanted to just have fun, so I didn’t want to sign up.

          1. 3

            The puzzle input is bucketed per user, so you need to have an account to access the actual input data (and so the system can verify your answer).

            You can look around earlier years’ repos; a lot of them include the actual puzzle input too.

            You can use OAuth tokens from Google, GitHub, etc., if you already have an account there. I haven’t gone over the privacy policy for AoC with a fine-tooth comb, but I believe it’s better than most.

      5. 5

        I’m completely biased, but you can try this toy language I developed. ;)

        https://github.com/antimony-lang/antimony

        I actually finished some AoC puzzles with it, but it still needs a lot of work.

      6. 5

        I’m doing it in Rust. It’s the only thing I get to use Rust for.

      7. 4

        I will be doing the puzzles starting with Elixir (my main language) to first make sure I understand the problem. Then I am switching to OCaml and maybe some other languages here and there to get myself more acquainted with them.

      8. 4

        Janet! The language has native parsing expression grammars – sort of a cross between parser combinators and regular expressions – which are perfect for the text-processing parts of the puzzles. It’s a great choice for anyone lisp-curious! Janet doesn’t have a very big community, but there are a few of us trying it this year.

      9. 3

        If I do, probably Flix, or just Rust.

      10. 2

        I just started learning V, so I’m doing AoC with it in order to practice.

        1. 1

          Why V?

          1. 1

            I was looking for a usable cross-platform UI library in a language I don’t despise and found V’s official one. When working with the language, I found it pleasant.

      11. 2

        I’m using Haskell which I learned in the late 90s but haven’t used since then. A colleague is trying uiua but he’s way more hardcore than me.

      12. 2

        Usually I use Rust since it’s what I’m most efficient at. Last year I used Gleam, which was fun. This year I might just use whatever seems best for the day. I did day one with just coreutils, with the exception of zsh for process substitution.

      13. 1

        I used Maude and i plan to use it for the whole month.

        1. 2

          Wrong link? I’m getting an Apache 2 test page. Curious what Maude is :)

          1. 2

            oh, sorry. this is the right link: http://maude.cs.illinois.edu/w/index.php/The_Maude_System

            It’s a reflective language where the execution pattern is determined by rewriting an expression over and over again.

      14. 1

        I’m using Scheme! My own toy Lisp isn’t quite ready for action (and I’ve - perhaps temporarily - lost interest in maintaining it, anyway).

        I’ve also spent the day realizing that I’m quite slow at coding while I’m in bed sneezing my lungs to pieces and nursing a fever.

      15. 1

        I’m going to be using ruby! I’ve been programming in zig a lot and want to switch it up a little

      16. 1

        I’m using Scala, primarily, and then Rust. I know Scala well, and I’m learning Rust.

        IME, if you’re learning a language and it’s difficult to pick up (the parts for working with strings and collections), you’ll eventually drop the challenge. So, IMO, it’s good to first solve each day with a primary language you know well, and to keep the pace, because there’s motivation in discussing the day’s solution with others.

    3. 17

      Woo, contains my first package submission (guile-goblins). Here’s to a cutting-edge December 🫡

      1. 3

        I am using this package a lot, thank you for submitting it! 😸

    4. 8

      Really not a fan of the argument= syntax. Code completion can handle that nowadays. I think it adds another way of doing things.

      1. 10

        For JS dictionaries it’s been nice to write {foo, bar} for {foo: foo, bar: bar}. Too bad Python’s set syntax makes this a bit out of reach (though tbf Python’s dictionaries also have way better primitives and kwargs, making this less of a thing one needs)

        I do think the proposed Python syntax could look nicer though. Something like f(*, arg1, arg2) would allow for some symmetry with kwarg-only parameters. Syntax stuff is always a bit bike-sheddy, but I have a really hard time imagining the argument= syntax getting accepted as is.

        1. 3

          It would not make sense for dict literals anyway, since the key can be an arbitrary object. So {foo: foo} creates a mapping from the value of foo to the value of foo.
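
          A quick illustration (my own example):

```python
foo = "some value"
d = {foo: foo}
print(d)  # {'some value': 'some value'}: the key is foo's value, not the name "foo"
```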

          Rust uses a similar shorthand for struct literals (which is essentially what JS does, {} is an object literal, not a Map), it’s OK, but hardly life-saving.

          I do agree the current proposal is… icky. Your proposal reads pretty nicely. Alternatively, I’d find a prefix = clearer than a suffix, but I’d rather have neither.

        2. 1

          For JS dictionaries it’s been nice to write {foo, bar} for {foo: foo, bar: bar}. Too bad Python’s set syntax makes this a bit out of reach (though tbf Python’s dictionaries also have way better primitives and kwargs, making this less of a thing one needs)

          Interesting. It might work with dict(arg1, arg2).

          Agree on the second point. I think it’d be really unusual for people coming to Python with zero experience or from another language. Feels like going against the common sense/basics of programming. Then again there are plenty of tools you can use to achieve this goal, namely LSP Inlay Hints. No need to come up with a PEP.

        3. 1

          Raku has a similar thing with :$foo being foo => $foo, which I think even works outside of function calls.

      2. 6

        I think, in practice, Python is no longer serious about having only one way to do things. It was never strictly true (list comprehensions versus filter and map, dict() and set() v. {} (and the resulting ambiguity), etc.), but the syntax sugar that’s been added lately has really shifted the overall feel of the language toward convenience at the cost of duplicated functionality (e.g. kwargs changes, data classes v. namedtuple, async functionality, …).
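
        A couple of the ambiguities mentioned above, for concreteness:

```python
# {} is always an empty dict; there is no literal for an empty set.
print(type({}))      # <class 'dict'>
print(type(set()))   # <class 'set'>
print(type({1, 2}))  # <class 'set'>: non-empty braces without colons are a set

# And the comprehension / functional duplication:
doubled_a = [x * 2 for x in range(3)]
doubled_b = list(map(lambda x: x * 2, range(3)))
assert doubled_a == doubled_b == [0, 2, 4]
```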

        I don’t think that’s all bad, necessarily—as someone who’s coded Python since 2005, I appreciate at least the motivation of these changes—but I think fighting on the angle of There’s One Way to Do It is a lost cause at this point.

        1. 2

          I think, in practice, Python is no longer serious about having only one way to do things. It was never strictly true

          It was never true at all, ever. You may want to actually read the relevant stanza of the zen.

          I think fighting on the angle of There’s One Way to Do It is a lost cause at this point.

          That seems obvious since it’s never been a cause at all.

          1. 4

            The Zen statement is a tongue-in-cheek, direct contrast to Perl: “There’s more than one way to do it.”

            And for the most part, Python’s community has made choices—“prefer comprehensions over filter, map, and a for loop instead of reduce.”

            1. 2

              The Zen statement is a tongue-in-cheek

              Regardless of it being tongue-in-cheek, the stanza absolutely does not say “there’s only one way to do it”. So the assertion that this has been abandoned is wrong on its face, it was never a thing in the first place, not even tongue-in-cheek.

              1. 5

                I disagree. Back in the old days they would reject PEPs that would create confusion. Python, the language, felt small, and every feature purposed. Now we just have an interpreted Scala with Dataframes, and I’m using Scala as a euphemism for “your kitchen sink is piled high with dirty dishes.”

                1. 2

                  I disagree.

                  And I disagree with your disagreement.

                  Back in the old days they would reject PEPs that would create confusion.

                  The “proposal” being discussed here is not even a PEP to reject, it’s a post on the discussions forum.

                  And PEPs are routinely rejected to this day (e.g. 559, 637, 645, 713).

                  1. 3

                    And I disagree with your disagreement.

                    And I disagree with your disagreement of my disagreement.

                    The “proposal” being discussed here is not even a PEP to reject, it’s a post on the discussions forum. And PEPs are routinely rejected to this day (e.g. 559, 637, 645, 713).

                    Great! Then there’s only one obvious conclusion to this. The PEP (yet to be written but “being started by the author”) will be rejected because it makes Python a worse language.

        2. 1

          filter and map come from Python 1.x, but list comprehensions were added in 2.0. So this one comes from backwards compatibility. And sets didn’t appear until 2.4. I agree that another notation might have been helpful. These instances of there being more than one way to do it have to do with evolution of the language and maintaining backwards compatibility.

          I agree that data classes and namedtuple are somewhat overlapping though. Having a coherent typing strategy in advance would have probably helped prevent the duplication.

          1. 2

            So this one comes from backwards compatibility.

            If it was only that, they’d have been moved to functools like reduce, or to itertools, or would have been removed. Rather than being converted to lazy and left where they were.

            These instances of there being more than one way to do it have to do with evolution of the language and maintaining backwards compatibility.

            These instances of there being more than one way to do it have never not been part of the language.

            I agree that data classes and namedtuple are somewhat overlapping though.

            They’re not. The point of namedtuples has always been to provide a migration path from regular tuples. That they were abused as convenient ways to create data holders (as well as the relatively widespread use of attrs) is what led to dataclasses being introduced. Dataclasses literally can’t perform the primary job of namedtuples. Data classes and named tuples overlap in the same way regular classes and dicts do.
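
            To make the distinction concrete (my own example, not the parent’s):

```python
from collections import namedtuple
from dataclasses import dataclass

Point = namedtuple("Point", ["x", "y"])
pt = Point(1, 2)
x, y = pt           # namedtuples are tuples: unpacking works
assert pt[0] == 1   # ...and so does positional indexing

@dataclass
class DPoint:
    x: int
    y: int

dp = DPoint(1, 2)
assert dp.x == 1
# x, y = dp  # would raise TypeError: a dataclass is not iterable by default
```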

            1. 1

              These instances of there being more than one way to do it have never not been part of the language.

              I don’t see how you can say that when in fact there were no list comprehensions in Python 1.x, unless you simply mean that there were already for loops in Python. But it seems clear to me that you are overcommitted to being confrontational about this rather unimportant point, so I suppose there is nothing further to discuss.

      3. 2

        Instead of encouraging clunky named args, perhaps it would be better to encourage code completion.

        1. 2

          Named arguments are great. Code completion is useless when reading code at rest.

          1. 1

            Maybe it’s all that time spent with Objective-C, but I love named arguments.

          2. 1

            I don’t know about your editor, but I press K and it shows me this kind of thing. Knowing the specific argument order isn’t really useful unless I care about what the function is doing, in which case I’m going to look at the function’s source anyway.

    5. 2

      N.B. “Under the hood”, the Python example is doing something meaningfully (YMMV) different to the Go example. A dict comprehension and a for loop are different tools. Compare the dis output of the Python example, and a Go-like Python example (i.e. write the for loop). Comprehensions tend to execute faster than plain loops because they have specific optimisations. Not sure if it really alters the intent of the article but I thought to point it out.
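
      One way to do the comparison (exact opcodes vary across CPython versions):

```python
import dis

def with_comprehension(pairs):
    return {city: temp for city, temp in pairs}

def with_loop(pairs):
    result = {}
    for city, temp in pairs:
        result[city] = temp
    return result

# Both produce the same dict, but compile to different bytecode:
# on CPython 3.x the comprehension uses a dedicated MAP_ADD opcode,
# while the loop goes through the generic STORE_SUBSCR path.
dis.dis(with_comprehension)
dis.dis(with_loop)
```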

      1. 2

        If you actually look at the disassembly, they’re not materially different: the main differences are that the “explicit” version needs to store and load the map in which it puts the result, while in the dictcomp case that object is implicit; and the dictcomp has a dedicated MAP_ADD opcode (which I assume is somewhat specialised) while the procedural loop has to make do with a generic STORE_SUBSCR.

        These are really minor differences, and technically a slightly smarter optimiser would be able to generate identical code as we know the output dict is the builtin one (because literal) so this could be specialised.

        A slightly smarter optimiser would likely be able to improve both by not performing unnecessary spills to locals as well, but I assume that would be detrimental to debugging.

    6. 8

      Anyone who calls python easy hasn’t learned much python.

      1. 12

        As someone who works with data scientists who aren’t programmers, they find doing almost anything they need to in Python easy. It is “easy”. Maybe “correctness” is progressively harder in varying degrees, but Python is unequivocally easy, because you can force things to do what you want if you have to.

      2. 4

        Yeah, Python looks easy, but even basic things like “I want to distribute my program” are pretty complicated. There are third party tools that can sort of get you close to a static binary, but the binary will be huge and it won’t include the interpreter, the standard library, or any dynamically linked dependencies (.so, .dll, .dylib, etc). In Go, you just run go build and send the resulting binary to your colleagues with whichever file transfer/sharing mechanism you prefer.

        Similarly, async/await is full of footguns–without a type checker, it’s super easy to forget an await, and there’s nothing stopping you from calling a function that does blocking I/O (or something CPU-intensive) in an async function–indeed, you can’t even tell at a glance whether a function does blocking I/O or not. And when you stumble on these problems, the entire application falls over and you start seeing timeouts on endpoints that have nothing to do with the problem at all except that they share the same event loop.
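
        Both footguns fit in a few lines (a contrived sketch):

```python
import asyncio
import time

async def fetch():
    await asyncio.sleep(0.01)
    return "data"

async def main():
    fetch()          # forgot await: the coroutine never runs (only a RuntimeWarning)
    time.sleep(0.2)  # blocking call: the whole event loop stalls here, so every
                     # coroutine sharing it is delayed, related to this code or not
    return await fetch()

print(asyncio.run(main()))  # "data", after an avoidable 0.2s stall
```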

        Even for magic-y things, where you would think Python would shine, dealing with metaclasses and so on in Python is much more difficult than using reflection in Go (which isn’t to say reflection in Go is easy, it’s just not as complicated as Python’s metaclass stuff).

        1. 7

          Even for magic-y things, where you would think Python would shine, dealing with metaclasses and so on in Python is much more difficult than using reflection in Go (which isn’t to say reflection in Go is easy, it’s just not as complicated as Python’s metaclass stuff).

          wtf, you don’t need metaclasses to do reflection in Python; there are literally a bunch of builtins lying around for that. Metaclasses are for programming the types themselves, e.g. hooking into the class lifecycle, or providing type-level shared behaviour.

          That’s like saying dealing with a road roller is much more difficult than using a paint roller.

          1. 1

            Probably should have said “metaprogramming” rather than merely “reflection”. My brain was thinking one thing and my fingers were typing a different thing.

    7. 29

      While it is in total hype these days, despite 20 years of programming and having had my first steps in C and C++, I cannot look at a piece of Rust code and say with certainty that I understand what is going on there.

      I beg that people stop making this useless argument. Of course a paradigm shift will be hard for any range of exp.

      In the same line of arguing it could be said that prolog is even more complex than rust because of unfamiliarity. Which is just factually untrue.

      1. 19

        Familiarity with Rust’s syntax hasn’t made it any less of a cognitive load to read for me. It’s not Perl but viewed from outside the bubble it’s a mess of a language.

        (Modern C++ is also a mess. Rust’s only advantage is three decades less cruft.)

        1. 12

          I don’t understand how people can’t understand how hairy Rust’s syntax is.

          1. 8

            Rust’s syntax gets more hairy the closer you look at it. Even its lexical syntax is deeply complicated! I like Rust but I also like perl…

          2. 7

            You say hairy, I say explicit :) Probably just different strokes for different folks. Rust is designed heavily around exposing decisions to the programmer, and that design is a major part of why the standard library and much of the ecosystem work the way they do.

            Oftentimes hairy code in older languages is due to legacy features and cruft, but Rust mainly has sheer density of information (turbo-fish operator being the main cruft)

            1. 2

              Exactly! It’s not “syntax”, it’s semantics, which Rust specifies more explicitly. There was even an article showcasing Rust semantics with different syntaxes, and it wouldn’t be better with the same amount of explicitness in JS/Python/whatever notation either.

              (Can’t find the article now)

              1. 4

                I believe the article you’re referring to is: https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html

                Written by a fellow lobster!

        2. 9

          Perl is more readable, imo

          my @temperatures = (
              { city => 'City1', temp => 19 },
              { city => 'City2', temp => 22 },
              { city => 'City3', temp => 21 },
          );
          
          my @filtered_temps = grep { $_->{temp} > 20 } @temperatures;
          
          1. 10

            Whatever language you’re used to reading and/or writing will be the “more readable” language to you. I find Perl to be mostly unreadable, including the example you provided here, but I also acknowledge that if I worked with Perl on a regular basis, that your example would be obvious and pleasant to read. But for me, at this point, that isn’t the case.

          2. 2

            That is fairly obvious. Weird to see grep as a function call (as a non-Perler).

            1. 2

              It’s in spirit of grep(1), not exactly the same. https://perldoc.perl.org/functions/grep

              Perl has a bunch of functions operating on lists which take an expression or a block and act on each element in the list in turn. map is the most general, and the List::Util module offers syntactic sugar for a bunch of others, such as selecting the sum, max, first, etc. from a list.

              1. 1

                I don’t find it too strange in a Perl context, but it’s an atypical choice because filter is already a well-established name for this particular higher-order function. Though Perl does at least keep the standard name for map.

                1. 2

                  In Perl, “filter” is for changing the source code itself: https://perldoc.perl.org/Filter::Simple#The-Problem

                2. 2

                  Maybe it’s an age thing? Perl has had a grep since at least 1999 afaict.

                  1. 3

                    grep (but not map I think) goes back to perl4 if not earlier.

      2. 16

        I’m also worried about anyone who claims that they can tell with certainty what’s going on in any line of C++ code.

        (I write, or rather delete, C++ code every day)

        1. 5

          Well, do you accept “that’s undefined behavior” as “telling with certainty”? 🙃

        2. 3

          Well you can only delete C++ code if it was allocated with new. If it’s on the stack or allocated with malloc things are different.

          1. 2

            Even though it seems like it’s hard to delete C++ code that’s deep in your tech stack, it’s very easy with Nix!

      3. 15

        The Rust version could look like this:

        let temperatures = [
            ("City1", 19),
            ("City2", 22),
            ("City3", 21),
        ];
        
        let filtered_temps: Vec<_> = temperatures
            .into_iter()
            .filter(|&(_city, temp)| temp > 20)
            .collect();
        

        People unfamiliar with Rust’s syntax could be confused by closures or type annotations (especially turbofish), but apart from a few sigils it doesn’t look that much worse than Python.

        1. 13

          The into_iter() and collect() stuff aren’t super obvious either, and on top of that when things can error iterator combinators get complicated in Rust (Python uses exceptions which short-circuit by default). I’m not a Python fan for many reasons, but I think Go is more straightforward than both most of the time. I often find for loops more readable than iterator combinators (of course, Rust has for loops too, but they’re not very idiomatic and there’s a surprising amount of benefit in “everyone does things the same way” IMHO). I still like Rust, but it’s not my get-shit-done go-to language.

          1. 1

            I’m not saying Rust is easy, but also it’s not “I can’t tell what’s going on” bad. Writing Rust requires understanding of a bunch of concepts that don’t exist in Python or Go, and that is a barrier. However, when reading it’s not that much worse than Python.

            1. 4

              I don’t really agree. Things like turbofish, clone, Rc<>, RefCell<>, collect, lifetimes, and so on take a fair bit of time to understand, and you encounter these all over idiomatic programs (they’re not “advanced” like Python’s oddities).

          2. 1

            In both the Python & Go variants of the article, one can access fields by name (e.g. temperatures[0].city or temperatures[0]["city"]) and the final filtered result is also a hash with keys of city names. So, I think the Rust version over-simplifies in 2 ways (and is still complicated as you observe, even with the over-simplification). EDIT: Beats me if these are important properties in the larger context of the code he was thinking about, but he does very specifically say “dictionary comprehension”.

            1. 1

              All you need to output a hash is to replace the Vec<_> by HashMap<_, _>.

              The naming part is not complicated but is more verbose: Rust does not have anonymous struct literals, so as in Go you need to define a struct, but then you either need to replace every tuple by a struct instantiation:

              struct CityTemperature {
                  city: &'static str,
                  temperature: f64,
              }
              let temperatures = vec![
                  CityTemperature { city: "City1", temperature: 10.0 },
                   …
              

              Or you need to define a positional constructor to which you can migrate the tuples:

              impl CityTemperature {
                  fn new(city: &'static str, temperature: f64) -> Self {
                      Self { city, temperature }
                  }
              }
              let temperatures = vec![
                  CityTemperature::new("City1", 10.0),
                  …
              

              Alternatively / additionally you could impl From<(&'static str, f64)> for CityTemperature, although in this specific case I doubt it’d be worth it.

              he does very specifically say “dictionary comprehension”.

              That is a misrepresentation. The essay says “such as a list of dictionary comprehension”, the intent is to contrast the convenient succinctness of such syntactic sugar to the procedural nature of the Go code.

              1. 1

                That is a misrepresentation. The essay says “such as a list of dictionary comprehension”[..]to contrast

                I disagree that dict builds are not part of the intended contrast. The essay also does map[string]float64 output in its Go. He seems to contrast both type defs & dict builds (& doing both is among the least problematic aspects of the essay).

                Anyway, thanks for the fixes! The HashMap fix, at least, seems easy enough.

                Also, of code posted here so far, @kornel’s Rust is not the only one skipping named fields. @jlarocco’s C++ & Lisp also do, as does @alexshroyer’s Lisp. The Perl, Ruby & Nim seem more directly analogous.

                1. 3

                  I interpreted the code in the article as “return full records where part of the original matches a condition”. But I suppose the original article could have used tuples or arrays, so point taken. Here’s a K variant with named fields (also K is more APL than lisp):

                   x:+`city`temp!(("City1";"City2";"City3");19 22 21)
                   x@&20<x`temp
                  
                  1. 1

                    Ah, yes.. you even said ngn/K. Sorry. K dialects will often win concision battles. I don’t think it’s a very clear article making good points, but as long as people are sounding off with “Favorite PL” solutions, the playing ground should be level-ish. So, thank you!

                2. 2

                  I think for such a short snippet skipping the fields is sensible, that’s definitely how I would do it: there are only two values in the row and the types are unambiguous; the field names are very temporary and their scope is so short they don’t really matter.

    8. 38

      Sorry if I sound like a broken record, but this seems like yet another place for Nix to shine:

      • Configuration for most things is either declarative (when using NixOS) or in the expected /etc file.
      • It uses the host filesystem and networking, with no extra layers involved.
      • Root is not the default user for services.
      • Since all Nix software is built to be installed on hosts with lots of other software, it would be very weird to ever find a package which acts like it’s the only thing on the machine.
      1. 20

        The number of Nix advocates on this site is insane. You got me looking into it through sheer peer pressure. I still don’t like that it has its own programming language; it still feels like it could have been a Python library written in functional style instead. But it’s pretty cool to be able to work with truly hermetic environments without having to go through containers.

        1. 22

          I’m not a nix advocate. In fact, I’ve never used it.

          However – every capable configuration automation system either has its own programming language, adapts someone else’s programming language, or pretends not to use a programming language for configuration but in fact implements a declarative language via YAML or JSON or something.

          The ones that don’t aren’t so much config automation systems as parallel ssh agents, mostly.

          1. 5

            Yep. Before Nix I used Puppet (and before that, Bash) to configure all my machines. It was such a bloody chore. Replacing Puppet with Nix was a massive improvement:

            • No need to keep track of a bunch of third party modules to do common stuff, like installing JetBrains IDEA or setting up a firewall.
            • Nix configures “everything”, including hardware, which I never even considered with Puppet.
            • A lot of complex things in Puppet, like enabling LXD or fail2ban, were simply a […].enable = true; in NixOS.
            • IIRC the Puppet language (or at least how you were meant to write it) changed with every major release, of which there were several during the time I used it.
        2. 14

          I still don’t like that it has its own programming language

          Time for some Guix advocacy, then?

          1. 8

            As I’ll fight not to use SSPL / BUSL software if I have the choice, I’ll make sure to avoid GNU projects if I can. Many systems do need a smidge of non-free to be fully usable, and I prefer NixOS’ pragmatic stance (disabled by default, allowed via a documented config parameter) to Guix’s “we don’t talk about nonguix” illusion of purity. There’s interesting stuff in Guix, but the affiliation with the FSF is a no-go for me, so I’ll keep using Nix.

            1. 11

              Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

              1. 12

                Indeed, the project whose readme starts with:

                Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

                That’s exactly the illusion of purity I mentioned in my comment. The “and to avoid any unnecessary hostility” part is pretty telling on how some FSF zealots act against people who are not pure enough. I’m staying as far away as possible from these folks, and that means staying away from Guix.

                The FSF’s first stated user freedom is “The freedom to run the program as you wish, for any purpose”. To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required. Looks like the FSF does not agree with me exercising that freedom.

                1. 11

                  The “avoid any unnecessary hostility” is because the repo has constantly been asked about on official Guix channels and isn’t official or officially-supported, and so isn’t involved with the Guix project. The maintainers got sick of getting non-Guix questions. You’re imagining an “illusion” of purity with the Guix project - Guix is simply uninvolved with any unfree software.

                  To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required.

                  This is both a fundamental misunderstanding of what the four freedoms are (they apply to some piece of software), and a somewhat bizarre, yet unique (and wrong) perspective on the goals of the FSF.

                  Looks like the FSF does not agree with me exercising that freedom.

                  Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                  1. 5

                    Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                    Thanks for clarifying what I already knew, but you were conveniently omitting in your initial comment:

                    Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

                    Using unfree software in NixOS is simpler than in Guix, because you get official documentation, and are able to discuss it in the project’s official communication channels. The NixOS configuration option is even displayed by the nix command when you try to install such a package. You don’t have to fish for an officially-unofficial-but-everyone-uses-it alternative channel.

            2. 4

              I sort of came to the same conclusion while evaluating which of these to go with.

              I think I (and a lot of other principled but realistic devs) really admire Guix and FSF from afar.

              I also think Guix’s developer UI is far superior to the Nix CLI, and I love the fact that Guile is used for everything, including even configuring the boot loader (!).

              Sort of how I admire vegans and other people of strict principle.

              OT but related: I have a 2.4 year old and I actually can’t wait for the day when he asks me “So, we eat… dead animals that were once alive?” Honestly, if he balks from that point forward, I may join him.

              1. 3

                OT continued: I have the opposite problem: how to tell my kids “hey, we try not to use the shhhht proprietary stuff here”.

                I have no trouble explaining to them why I don’t eat meat (nothing to do with “it was alive”; it’s more to help boost the non-meat diet for environmental etc. reasons. Kinda like why I separate trash.). But how to tell them “yeah, you can’t have Minecraft because back in the nineties the people who taught me computer stuff (not teachers btw) also taught me never to trust M$”. So, they play Minecraft and eat meat. I… well, I would love to have time to not play Minecraft :)

        3. 9

          I was there once. For at least 5-10 years, I thought Nix was far too complicated to be acceptable to me. And then I ran into a lot of problems with code management in a short timeframe that were… completely solved/impossible-to-even-have problems in Nix. Including things that people normally resort to Docker for.

          The programming language is basically an analogue of JSON with syntax sugar and pure functions (which then return values, which then become part of the “JSON”).

          This is probably the best tour of the language I’ve seen available. It’s an interactive teaching tool for Nix. It actually runs a Nix interpreter in your browser that’s been compiled via Emscripten: https://nixcloud.io/tour/

          I kind of agree with you that any functional language might have been a more usable replacement (see: Guix, which uses Guile which is a LISPlike), but Python wouldn’t have worked as it’s not purely functional. (And might be missing other language features that the Nix ecosystem/API expects, such as lazy evaluation.) I would love to configure it with Elixir, but Nix is actually 20 years old at this point (!) and predates a lot of the more recent functional languages.

          As a guy “on the other side of the fence” now, I can definitely say that the benefits outweigh the disadvantages, especially once you figure out how to mount the learning curve.

        4. 7

          The language takes some getting used to, that’s true. OTOH it’s lazy, which is amazing when you’re trying to do things like inspect metadata across the entire 80,000+ packages in nixpkgs. And it’s incredibly easy to compose, again, once you get used to it. Basically, it’s one of the hardest languages I have learned to write, but I find it’s super easy to read. That was a nice surprise.

        5. 3

          Python is far too capable to be a good configuration language.

        6. 3

          Well, most of the popular posts mainly complain about the problems that Nix strives to solve. Nix is not a perfect solution, but any other alternative is IMO worse. The reason for Nix’s success, however, is not in Nix alone, but in the huge repo that is nixpkgs, where thousands of contributors pool their knowledge.

      2. 8

        Came here to say exactly that. And I’d add that Nix also makes it really hard (if not outright impossible) for shitty packages to trample all over the file system and make a total mess of things.

      3. 6

        I absolutely agree that Nix is ideal in theory, but in practice Nix has been so very burdensome that I can’t in good faith recommend it to anyone until it makes dramatic usability improvements, especially around packaging software. I’m not anti-Nix; I really want to replace Docker and my other build tooling with it, but the problems Docker presents are a lot more manageable for me than those that Nix presents.

      4. 4

        came here to say same.

        although I have the curse of Nix now. It’s a much better curse though, because it’s deterministic and based purely on my understanding or lack thereof >..<

      5. 2

        How is it better to run a service as a normal user outside a container than as root inside one. Root inside a container = insecure if there is a bug in docker. Normal user outside a container typically means totally unconfined.

        1. 7

          No, root inside a container means it’s insecure if there’s a bug in Docker or the contents of the container. It’s not like breaking out of a VM, processes can interact with for example volumes at a root level. And normal user outside a container is really quite restricted, especially if it’s only interacting with the rest of the system as a service-specific user.

          1. 10

            Is that really true with Docker on Linux? I thought it used UID namespaces and mapped the in-container root user to an unprivileged user. Containerd and Podman on FreeBSD use jails, which were explicitly designed to contain root users (the fact that root can escape from chroot was the starting point in designing jails). The kernel knows the difference between root and root in a jail. Volume mounts allow root in the jail to write files with any UID, but root can’t, for example, write files on a volume that’s mounted read only (it’s a nullfs mount from outside the jail and so root in the container can’t modify the mount).

            1. 10

              I thought it used UID namespaces and mapped the in-container root user to an unprivileged user.

              None of the popular container runtimes do this by default on Linux. “Rootless” mode is fairly new, and I think largely considered experimental right now: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/

              https://github.com/containers/podman/blob/main/rootless.md

            2. 8

              Is that really true with Docker on Linux?

              Broadly, no. There’s a mixture of outdated info and oversimplification going on in this thread. I tried figuring out where to try and course-correct, but probably we need to be talking around a concept better defined than “insecure”.

            3. 4

              Sure, it can’t write to a read-only volume. But since read/write is the default, and since we’re anyway talking about lazy Docker packaging, would you expect the people packaging to not expect the volumes to be writeable?

              1. 1

                But that’s like saying a lock is insecure because it can be unlocked.

                1. 1

                  I don’t see how. With Docker it’s really difficult to do things properly. A lock presumably has an extremely simple API. It’s more like saying OAuth2 is insecure because its API is gnarly AF.

        2. 3

          This is orthogonal to using Nix I think.

          Docker solves two problems: wrangling the mess of dependencies that is modern software and providing security isolation.

          Nix only does the former, but using it doesn’t mean you don’t use something else to solve the latter. For example, you can run your code in VMs or you can even use Nix to build container images. I think it’s quite a lot better at that than Dockerfile in fact.

        3. 2

          How is a normal user completely unconfined? Linux is a multi-user system. Sure, there are footguns like command lines being visible to all users, sometimes open default filesystem permissions or ability to use /tmp insecurely. But users have existed as an isolation mechanism since early UNIX. Service managers such as systemd also make it fairly easy to prevent these footguns and apply security hardening with a common template.

          In practice, neither regular users nor containers (Linux namespaces) are a strong isolation mechanism. With user namespaces there have been numerous bugs where some part of the kernel forgets to do a user mapping and thinks that root in a container is root on the host. IMHO both regular users and Linux namespaces are far too complex to rely on for strong security. But both provide theoretical security boundaries and are typically good enough for semi-trusted isolation (for example, different applications owned by the same first party, not applications run by untrusted third parties).

    9. 6

      Swift programmers: Write a utility to lock the keyboard for a specified number of seconds, with unlock combination.

      Forth programmers: Press the “sleep” button :^)

      On topic: Cool little tool.

    10. 5

      I find the idea of lacing source code with an elaborate array of booby-traps incomprehensibly bizarre. If you’re really so desperate to avoid the possibility of anyone making use of your code, what’s the point of even sharing it online?

      1. 17

        Because sharing code is a good thing, and shared code can be fairly attributed by honest conscious actors. LLMs cannot fairly attribute code, because they don’t know there’s anything to attribute (they don’t know anything to be fair, poor guys) so they need to be forced to.

      2. 2

        The tightrope to walk is making your code extensible and usable by others when copied verbatim or tinkered with by a person, who can understand their obligation to credit you appropriately, while making it fail the sort of blending performed by LLMs when they’re not regurgitating the content verbatim (verbatim is fine, since it retains your credit inline). Preventing copyright laundering is a good thing for both content producers and the future of generative AI, I reckon, and this might be hard for the state of the art to cat-and-mouse its way out of. Certainly it will be interesting to see how long it works.

      3. 1

        Some folks are package-oriented; a package contains not just code, but also various elements of continuity: addresses of upstream repositories, author and maintainer contact information, versioning, lists of dependencies, etc. Some package authors and maintainers are quite jealous and protective of their control over the package’s downstream consumers.

    11. 4

      At the end of the page:

      … This post has not been Lobstered. …

      Uh, is this a concern of people (bloggers) in general? lobste.rs isn’t a big site like Reddit, HN, or Slashdot (back in the day). I guess I don’t understand.

      1. 6

        I believe the intent is to link out to active discussions of the article, and the author is promoting those two options as ones they would support. If it did get discussed on either some fediverse page or lobste.rs they would add links. If there were a discussion on Reddit they would not.

        I guess.

        1. 1

          That makes sense. The “site has been slashdot’ed” meant that the server was overloaded, and that was obviously a bad thing.

      2. 5

        I especially don’t understand it because the author submitted the story themselves on lobste.rs.

        1. 20

          Maybe it’s a meta-joke?

          1. write blog post
          2. publish blog post
          3. submit a link (i.e. pass it BY REFERENCE) to lobste.rs
          4. the original does not update itself to reflect its existence on lobste.rs, proving that links are actually NOT references!
          1. 5

            yes! not a joke though. I forgot to re-upload the site with the link after Step 3

      3. 2

        It’s too late for that. I joined 5 years ago, and even then I remember lobsters linkbacks in this fashion

      4. 2

        it has now been lobstered! check the article again!

        “This post has not been Lobstered.” means that the post has not been lobstered! The text is applied automatically!

        I also don’t post on the YC site.

    12. 9

      I don’t get the point of this article. None of these is a lifesaver, and you learn all of this early on in your learning of the language. This article could make sense as an advocacy piece of a kind “look at these cool things you can do in this language,” but Python is already ubiquitous these days, so it is difficult to imagine someone who would be interested in what Python has to offer but just didn’t happen to notice this tiny obscure language just yet…

      1. 7

        I hope you enjoyed reading this and had the opportunity to learn something new

        I think that sums it up. There’s always something new to learn.

        I’m not a Python developer, so I found the article very interesting, even though (in my very biased opinion), Ruby does it much better.

      2. 5

        I work with data scientists who write Python and they learn little things like this all the time.

        For notes on the article:

        1. Also cool to use reversed(string) when you don’t have an idea of the length, then "".join(reversed_string) later.
        2. Don’t make this a lambda, write a def, Guido’s orders. def is_even(x: int): return x % 2 == 0
        3. For sure
        4. Don’t even turn it back to a list! Pass around generators, they are great!
        5. Don’t do this! Using len() is normal and I think the length of a string is stored alongside the string in memory
        6. You can do this like 3, set(a) ^ set(b) should be an empty set. (I think that’s what’s being achieved here.)
        7. Cool!
        8. Cool?
        9. Cool.
        10. Didn’t know this, nice
        11. Sure
        12. As @spookylukey said
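
        A few of these tips side by side; this is my own illustrative sketch (variable names are mine, not from the article):

```python
# Tip 1: reverse an iterable of unknown length, then re-join it.
reversed_s = "".join(reversed("hello"))  # "olleh"

# Tip 4: pass generators around instead of materializing lists;
# sum() consumes the generator lazily.
evens = (x for x in range(10) if x % 2 == 0)
total = sum(evens)  # 0 + 2 + 4 + 6 + 8 = 20

# Tip 6 (as I read it): an empty symmetric difference means the
# two collections hold exactly the same elements.
same_elements = not (set([1, 2, 3]) ^ set([3, 2, 1]))  # True
```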
      3. 2

        I’ve had a lot of colleagues who would write a for loop for most of these idioms, resulting in code both slower and less concise. That said, the list could have used the standard library more, and avoided mistakes such as the quadratic contains_all.
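
        contains_all isn’t shown in the comment, so this is a guess at its semantics (every needle appears in the haystack); the names are mine. The nested-scan version is quadratic because each in rescans the list, while converting to sets makes it roughly linear:

```python
def contains_all_quadratic(haystack, needles):
    # O(len(needles) * len(haystack)): each `in` scans the whole list.
    return all(n in haystack for n in needles)

def contains_all_linear(haystack, needles):
    # Build sets once; each membership check is then O(1) on average.
    return set(needles) <= set(haystack)
```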

      4. 1

        every day somebody is born who doesn’t know about my_list[::-1]!

        In the abstract, writing things out more clearly is Better(TM). But if you’re really verbose about certain things, then suddenly your function that could have been 4 lines is now 12 lines, or it could have been 10 and is now 25… and at some point length itself becomes a legibility concern!

        I don’t necessarily “agree” with all the listed one-liners, but there is value in context to terser code, especially if it’s idiomatic (-> easier to read for other pythonistas). If only because terse code + a comment might be even clearer than a more verbose approach (that might be hiding bugs).

    13. 28

      Why are all my favorite tech bloggers trans asks area “man” who is about to realize why.

      1. 10

        What does this mean?

        1. 20

          It’s a meme. I’m trans. I just haven’t switched this account to my new name yet.

          1. 4

            Now you did, congrats! :3

            1. 4

              Thanks. Lobsters has always had such good trans representation. 🏳️‍⚧️

              1. 4

                I’ve found that many Lobsters are indeed good eggs.

        2. 14

          “Why are all my favorite tech bloggers trans?” asks area man (!) who is about to realize why.

          1. 2

            Thanks. Their response didn’t really help their lack of punctuation.

        3. 11

          The implication is that if an assigned-male-at-birth person finds themselves in social contexts where they are interacting with or choosing to consume the cultural output of out trans women, that AMAB person may themselves be transgender in a way they have not consciously realized yet, because whatever attracts them to these social contexts is also predictive of the sort of unusual relationship with their embodied gender that makes one consider gender-transition.

    14. 12

      I owe my online presence to Valve, as Steam was the first “social media” I had as a kid. Half Life begat Half Life 2, Half Life 2’s modding scene begat Garry’s Mod, and YouTube circa 2007 showed off Garry’s Mod to a curious kid.

      The rest is history. Half Life has a special place in my heart. I remember speculating about the lore with friends in Steam group chats. Memories..

      1. 2

        I was in a TFC clan with Garry. He seemed fun. He was kicked out, but I can’t remember why.

      2. 2

        The GMod Idiot Box and its ilk are probably the only reason I’m a programmer today.

    15. 16

      I find it incredibly disrespectful companies like this are still referring to the BUSL (Business Source License) as BSL (or here, BSL/BUSL) despite the Boost Software License’s long, fruitful history. These corporations can’t even take the first step towards respecting the Free Software ecosystem and properly refer to licenses - there is a “culture war” in software licensing that the corps will lose, because their only culture is producing profit.

      1. 6

        Citations:

        • SPDX identifier BSL-1.0 for the Boost Software License 1.0, which has been OSI-approved since 2008
        • SPDX identifier BUSL-1.1 for the Business Source License 1.1, whose license text is copyrighted 2017
        • searching for “BSL license” with Google Search or DuckDuckGo, all software-license-related results in the first page are about the Business Source License (BUSL)
    16. 4

      Weird use of i as an index in the summation formula. I’d have used k to avoid confusing it with the imaginary unit.
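
      Presumably the formula in question is the series for e; with k as the index it would read:

```latex
e = \sum_{k=0}^{\infty} \frac{1}{k!}
```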

      1. 10

        it is ok.

        1. 1

          I dunno, I was confused. Especially as imaginary i is often used in conjunction with e.

          It tells me the author is more used to the computer-science use of i as an iteration variable than the math use case.

          1. 13

            I believe that the author is fairly aware of the “math use case”.

          2. 2

            Sigma_{i = 0}^n body introduces the variable i into scope for body. That would shadow other values of i, similar to a lambda.

            1. 3

              I know how the summation notation works. I’m not saying it’s wrong, it’s just that if you read about something that’s pertaining to e and you see i in a formula, you’re confused for a moment as to why the imaginary unit is in a summation. That’s all.

      2. 3

        I like to use italic i for variables and non-italic i for the imaginary unit to distinguish them.

      3. 2

        Having graduated EE, I’m just lucky he didn’t use j

    17. 6

      How does base Nix not isolate your projects? How is Nix not supported by direnv when you can echo "use flake" >> $PROJECT/.envrc? How is configuring with JSON an improvement to Nix? You’re also going to have an easier time with Nix if you aren’t trying to keep a lot of software pinned at specific versions instead of using a Nixpkgs pin where everything was built & tested to (usually) work together. With most of the pros being inherited from Nix, I don’t get the appeal of abstracting over it in a rigid way rather than learning the tool underneath which will unlock a lot more for you.

      1. 3

        There are a TON of people who would love the joy of isolated, reproducible dev environments but aren’t willing to learn and use Nix. Devbox is for them (and me). If that isn’t you that’s totally cool.

        EDIT: oh and:

        How does base Nix not isolate your projects?

          I mean, yeah, you can build that for sure. I was just picturing how on my NixOS machines I install packages globally whenever I need them and I’m not using project-based flakes. The whole Nix column in the comparison was too small a space to actually convey any information like that. shrug

        1. 5

          But you are willing to learn to use Devbox-Nix-in-JSON? At this rate of Nix adoption it just looks like delaying the inevitable result of writing Nix directly. Aside from that, it seems incredibly misleading to say Nix does not do per-project isolation just because you’re not doing it already. It’s not just flakes: if you have a shell.nix for your project, does that count? (Yes, it totally does.)

          1. 2

            But you are willing to learn to use Devbox-Nix-in-JSON?

            Executing trivial Devbox commands is so much easier than writing Nix flakes it’s not even funny. I don’t even edit the JSON file.

            it seems incredibly misleading to say Nix does not do per-project isolation just because you’re not doing it already.

            Sorry, to be clear, I wasn’t justifying that part of the comparison. I was explaining why I made an error that I intend on correcting when I’m next at my laptop, and I’ll revisit the rest of the column keeping flakes in mind.

            1. 4

              Learning the tools you use is a part of the job. What happens when I need to apply a patch to a tool or add an overlay in Devbox? It’d be nice to have a configuration language for the task…

              There’s also another important missing part to this: Nix can build projects statelessly, not just provide a stateless environment in which to build stateful projects. A dev shell can be used to some extent to onboard someone to the system, but they should eventually be reaching for Nix to build the project too. When you throw a JSON config & abstraction layer atop Nix, you aren’t exposing the user to the better parts (packages, apps, & overlays) of Nix that a flake.nix would eventually lead that user to playing with. It’s getting folks into an elevator cab of a good idea & then taking away the buttons to reach a higher floor.

              1. 4

                This is a false dichotomy. At the bottom of the article are links to using flakes in a devbox project: path: path/to/local/flake or github: ... etc. So, users have a fairly accessible path to using native flakes if they want to get the full power of nix tooling.

                1. 2

                  I don’t see the links but adding new flake inputs is not the same as adding overlays & overriding derivations with patches which necessarily require some manual intervention.

              2. 4

                As Savil said, you can just use Devbox as a way of orchestrating flakes if you want. It’s pretty easy to do.

                But more fundamentally we live in different worlds. I’m not interested in bringing Nix to the masses. I’m not interested in learning Nix lang. I’m not interested in converting my entire build pipeline to Nix. I’m only interested in solving the concrete problems of developer environment rot, bootstrapping, config sharing, etc. and Devbox solves all my problems.

                1. 1

                  How can you see the advantages of a reproducible dev shell & not see the next-step value in the output also being reproducible? Is Nix, not even a complex language, really that big of a barrier?

                  1. 1

                    If you’ve decided that I desperately need the One True Pure Nix™️ in my development environment, and that anything short of that is worse than useless, without knowing anything about how I build software, then I think that says more about how much Nix koolaid you’ve been drinking and less about how I build software.

                    1. 2

                      In the time you spent learning & arguing for Devbox you would already know enough Nix to do most things you needed—even beyond a dev shell. There’s nothing Kool-Aid about the language—it’s configuration in an ML dialect + wrapper around Bash & is like picking up Make or YAML.

                      You can do as you please, but I don’t think taking the Devbox route is a good long-term recommendation for most folks.

        2. 2

          I was a bit confused about what this offers relative to an OCI container. If you have a Dockerfile / Containerfile to build your development environment then there’s a load of off-the-shelf tooling that works with it already. If you have a .devcontainer directory in your repository then VS Code or IntelliJ will grab it automatically and use it and so you can provide the environment with GitHub Code Spaces.

          This is what we do with CHERIoT. We have a custom LLVM (which needs a moderately fast machine or a lot of patience, and a fair amount of free disk space) and a couple of simulators that are slightly annoying to build (needing OCaml or Verilog tools that most people won’t have installed), so we build a dev container. We then run our CI in the container (both GitHub Actions and Cirrus CI have a ‘run in a container created from this image’ option, so there’s no additional config needed) and people can hit a couple of buttons in the browser from GitHub to have the full developer environment where they can build firmware images and run them in a simulator.

          It’s increasingly safe to assume that developers have an OCI container runtime installed.

          1. 3

            I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).

            I don’t want my tooling to all need to be “devcontainer” aware. I don’t want my battery life to suffer from running a VM, etc etc.

            I just want software to be running on my machine, contextually available depending on what project I’m working on. It’s all the benefit and none of the drawback.

            1. 2

              I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).

              Because…?

              A lot of my development over the last 20 years has been in VMs and there’s nothing that I miss from not running bare metal. Containers on top of that give me an easier way of isolating the environments for different projects and pulling pre-built development environments. It also has the advantage that my flow for local and remote development is the same, so I can easily move to working on a big server or cloud VM if my laptop is too slow.

              I don’t want my tooling to all need to be “devcontainer” aware.

              It doesn’t need to be. I can pull a container image, spin up a container and use vim in it. I can bind mount the project from the host and use external editors for the build. Or I can use something that knows about the dev container and have it manage container lifecycle.
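              (For anyone unfamiliar, the former workflow is only a couple of commands; the image name here is a placeholder:)

              ```shell
              # Pull a prebuilt dev image and start a shell in it,
              # bind-mounting the project directory from the host
              docker pull ghcr.io/example/dev-image:latest
              docker run -it --rm \
                  -v "$(pwd)":/workspace -w /workspace \
                  ghcr.io/example/dev-image:latest bash
              ```

              Because the project is bind-mounted, edits made with a host editor are immediately visible to the build tools inside the container.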

              I tend to do the former, but recommend the latter for onboarding. Last week I taught a compartmentalisation workshop with CHERIoT and it took about two minutes to have everyone in the room launch a GitHub code space connected to a dev container using our image. No other approach that I’ve tried comes close.

              I don’t want my battery life to suffer from running a VM, etc etc.

              My battery suffers far more from building LLVM or from running Stable Diffusion than anything VM related. I have a Linux VM managed by Docker, a FreeBSD VM managed by Podman, and a FreeBSD VM that isn’t used for containers. These don’t even show up in the energy consumption monitor when idle, the virtualisation cost is in the noise when they’re actually doing work.

              1. 1

                You can do almost all of that in Nix without the container layer. A common Docker pattern is to run package updates in the container, making it stateful and non-reproducible, which is not the common Nix pattern.

                Container integration with other services is better, but that’s a maturity/uptake thing rather than a technical limitation.

              2. 1

                I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).

                Because…?

                I’ve spent most of my life in VMs and containers so this isn’t a casual preference. My very first remote development environment used SFTP to mount a remote file system where I would edit my PHP files directly on a server. I started my career with several PCs under my desk, each with one working copy of Microsoft Office’s development environment, ready to be remoted into over Microsoft RDP. When I worked at Meta/Facebook my development environment was a number of ephemeral, monster VMs that I could request and release, that I would remote into through VS Code. My personal setup at home constantly changes (I love playing with development environments almost as much as I love rewriting my personal website’s tech stack), but my last setup before this was an lxc container running on Proxmox that I would access remotely through VS Code’s SSH plugin.

                There are so many benefits to these kinds of setups, as you’ve said:

                • Trivial on-boarding
                • Reproducibility (not as good as Nix, but better than let’s say instructions on a wiki)
                • Pick-up where you left off, from any machine
                • I could go on for hours

                I am no stranger to all the pros. But they also come with a lot of little drawbacks that are annoying. Most of them are super obvious, like remote development environments requiring an internet connection.

                But as a single example of a non-obvious one, that affects both local and remote containerized development environments: I love my external diff tools like Beyond Compare. Most of my setups made a GUI diff tool impossible (or at least untenably unwieldy), unless it was baked into my primary, development-environment-aware tool, like VS Code’s built-in (but not up to par) diff tool.

                I don’t want my tooling to all need to be “devcontainer” aware.

                It doesn’t need to be. I can pull a container image, spin up a container and use vim in it. I can bind mount the project from the host and use external editors for the build. Or I can use something that knows about the dev container and have it manage container lifecycle.

                If you are still using tools on your local desktop to get GUI tools, like my example of an external git diff tool, then you’re splitting your setup between your containerized development environment and your local machine, deciding when to bridge the gap between the two. You’d have to set up git and Beyond Compare on your local machine. You’d have to replicate your dotfiles. Then you’d bind mount between the two so files are in both places. Just so you can diff some code.

                My battery suffers far more from building LLVM or from running Stable Diffusion than anything VM related.

                Sure, that’s true of some people. But I’m a web developer and the right tools can mean the difference between 2 hours of battery life or 15.

          2. 1

            It’s a bit of a false dichotomy, they just seem similar on the surface. Imagine this being a layer below the OCI container. In fact, devbox generate devcontainer straight up generates a .devcontainer folder you can use in the way you described.

            Your .devcontainer by itself is far from reproducible and might fail at any time. If for example you apt-get install your packages in the Dockerfile, then a new team member building the container might find out that the currently distributed package in that distro has a bug that breaks compatibility with your project. The rest of you, working off cached layers and images, might not even have noticed.
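            To make the failure mode concrete (the package and version pin here are illustrative): an unpinned install resolves to whatever the distro currently ships, while a pin at least makes the drift visible instead of silent:

            ```dockerfile
            FROM debian:12

            # Unpinned: a fresh build may pull a newer, possibly broken, package,
            # while teammates on cached layers keep the old one
            # RUN apt-get update && apt-get install -y clang

            # Pinned: if the distro drops this version, the build fails loudly
            # rather than silently behaving differently for new team members
            RUN apt-get update \
                && apt-get install -y clang-14=1:14.0.6-12 \
                && rm -rf /var/lib/apt/lists/*
            ```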

            1. 1

              Your .devcontainer by itself is far from reproducible and might fail at any time.

              The .devcontainer bit can do one of two things:

              • Build from a Dockerfile
              • Pull from a container registry.

              I would assume that you would always do the latter for anything non-trivial. We don’t even include the Dockerfile in our repo; it’s in a separate repository and we have CI to build it (and test that it can build our project with it before pushing it to the container registry). It’s always built in the equivalent of --no-cache mode (no local cache on the CI machines).

              This is the kind of thing that CI is designed to do: test whether something works and produce artefacts from it if it does.
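              As a sketch, that kind of pipeline might look like this in GitHub Actions (the image name, registry, and build/test commands are placeholders, and registry login is omitted):

              ```yaml
              name: build-dev-image
              on:
                push:
                  branches: [main]
              jobs:
                build:
                  runs-on: ubuntu-latest
                  steps:
                    - uses: actions/checkout@v4
                    # Always a clean build: no layer cache to hide breakage
                    - run: docker build --no-cache -t ghcr.io/example/dev-image:latest .
                    # Prove the image can actually build the project
                    - run: docker run --rm -v "$PWD":/src -w /src ghcr.io/example/dev-image:latest make
                    # Only push an image that passed the build test
                    - run: docker push ghcr.io/example/dev-image:latest
              ```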

              Imagine this being a layer below the OCI container. In fact, devbox generate devcontainer straight up generates a .devcontainer folder you can use in the way you described.

              So it’s another way of producing OCI containers? I guess the more the merrier (I personally like Buildah). Or does it require building locally? I consider it a failure of our infrastructure if anyone ever has to build the devcontainer (they may choose to build it, or separately build any of the things in it, but they should never need to). About the only reason anyone does at the moment is the person using PowerPC64 Linux: we don’t (yet?) have CI infrastructure for building a PowerPC image.

    18. 6

      What is the “Archive” icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out?

      It’s a bankers box. It’s a common way to store archival papers in America.

      1. 6

        So, a product that’s sold across the world chose an icon representation that makes sense only to people in one country?

        HCI books from the ‘80s talk about that as a bad idea. The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world. Picking a locale-specific physical object is even worse.

        Mind you, Outlook still can’t do quoting right in replies, in spite of people complaining about it since I was a child, so I have very low expectations for that team. They’ve completely rewritten the app at least twice without fixing basic functionality.

        1. 3

          The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

          I absolutely have seen documentaries of museums and archives in other countries with boxes of extremely similar form factor. And clerical/office staff (the traditional target users of much “office” software) would historically have been quite familiar with such boxes.

          The real issue here is almost certainly temporal — the archival storage box is now an anachronism on par with the floppy disk save icon. It’s a metaphor for a physical-world thing that in the physical world is no longer a common object.

          1. 4

            The concept of boxes into which documents are placed for longer-term storage is not unique to the US. Nor as far as I’m aware is the particular form factor — the term “banker’s box” may be the US-specific thing here.

            I did some consulting for a company that manages warehouses for long-term document storage (and also did fun things like taking tapes from banks’ mainframes and printing their daily audit results on microfiche). They had a lot of boxes in their warehouses but very few looked like the ones in the icon. I actually owned a few boxes like that (Staples used to sell them), but I would never associate them with archiving (in part because they ended up being stored in a basement and nothing in them survived).

        2. 3

          I don’t know how common Bankers Boxes are in other countries. I know the author is Swedish, so that might affect their perspective. I do know that Manila folders are uncommon outside the US, and becoming less common in the US as computers replace filing cabinets.

          1. 1

            I can attest to that. I’d literally never seen one until I was well into my twenties. They are so uncommon that any equivalent term for it in my native language is ambiguous; pretty much every word you can use to translate the word “folder” also means “file”. We’ve settled on an awkward convention at some point in the late ‘90s – awkward because the word used for “file” also denotes a box or a locker that you put folders in, not the other way around – but it’s a convention that’s entirely specific to computers, it has no real-life correspondent.

            My hot take on the subject is that it’s a fun anecdote but a largely irrelevant design problem. The icon is weird for sure but it takes about two double-clicks to figure out what it’s for. Other than making localisation via automatic translation weird (Google Translate & co. don’t know about the conventional, computer-specific translation of those terms, so they end up using the equivalent terms for “file” and “folder” interchangeably) it has no discernible effect on computer usage. Like all technical terms, and like all symbolic representations of abstract or technical concepts, they’re just things you learn.

        3. 1

          The common example is the use of an owl, which means wisdom in many European cultures but black magic in some other parts of the world.

          That makes me really want to use owl imagery in any arcane documentation I write. Two (correct!) meanings in one :)

        4. 1

          It’s not a US-centric thing. An insurance broker I’ve known since being a kid in the 90s has a room chock-full of these boxes, and I’m from the UK.

      2. 4

        You are both wrong, that is obviously a Lego man with a mustache, wearing a flat cap and looking towards the left.

        What truly baffles me in that picture is why the junk bin icon is next to the Delete label, rather than next to the one saying, like Junk :-).

        1. 3

          Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

          Sorry, couldn’t resist.

          On a more serious note, I suspect the reason for a whole lot of these terrible designs are branding. Companies desperately want their products to be different from everybody else. Especially anyone with near-monopoly power, to really milk that cognitive dissonance your users will get from trying to use anything else, and to force your competitors to take a huge opportunity cost trying to keep up with the changes. Using the same icon and naming for things could be considered being a follower, rather than a leader, or some such BS.

          1. 2

            Oh, that’s called a delete bin. It’s a common way to store unused papers in America.

            Half-serious, but all the delete bins I’ve ever seen have grid netting, or are translucent/solid. That one looks just like an old rubbish bin, hence my joke :-).

            Hidden behind my entirely unclassy joke is actually my equally unclassy professional opinion that, like most graphical conventions based on stylised concepts and symbols, software icon representations are entirely conventional, based on conventions specific to various cultures or niches, and are efficiently disseminated by external adoption, like virtually all symbolic representations in this world, from mathematical and technical symbols to Morse code for the Latin alphabet. Consequently, there is far more value in keeping them constant than in chasing magic resonant energy inner chakra symbolic woo-woo intuitiveness or whatever the word for it is today. Half the road signs out there are basically devoid of inherent meaning for most drivers. They work fine because you learn them once and, in most cases, you’re set for life. Left untouched, icons would work fine, too.

            1. 1

              Very good point. Like how everybody recognised the “save” icon, even if they’d never seen a floppy disk. On a related note, I wonder how we could salvage the situation, and get back some consistency. We’d need to somehow shift the incentives of companies intent on “branding” everything in sight. Maybe an accessibility org with a bit of clout could start certifying the accessibility of applications, and deduct points for any unrecognisable permutations of well-known patterns?

              1. 1

                I don’t know if all-round, universal standardisation is possible. There are standards specific to certain niches, e.g. ISO 15223 for medical devices labeling or ISO 7000-series standards for use on equipment, or to specific equipment, e.g. IEC 60417 for rechargeable batteries. But diversity of function inherently limits their application; lots of devices have functions only they perform, so standardising their representation is pretty much impossible.

                IMHO it’s not something that can be solved through regulatory means. It’s a problem of incentive alignment. The reason why we see this constant UI churn in commercial applications is that most organisations that develop customer-facing software or services have accrued large design & UX teams (on the tech side) affiliated to large marketing orgs (on the non-tech side), which lend a lot of organisational capital to their managers – because they’re large. These people cannot walk into a meeting room and say okay, we have 1M customers, we’re basically swimming in money, we’re no one’s touching the interface and making a million people learn how to use our app again on my watch. If they did, half their team would become redundant, at which point half their organisational capital would evaporate.

                All branches of engineering get to a point where advancing the state of the art requires tremendous effort and study. Computer engineering is no exception. 180 years ago, advancing the state of the art in electric motors mostly required tinkering and imagination; doing that today usually happens via a PhD and it’s really hard to explain the results to people who don’t have a PhD in the same field.

                Perpetually bikeshedding around that point (or significantly below it) on the other hand is accessible to most organisations. It doesn’t help that UX and interface design are related to immediate and subjective sensory experiences, so everyone is entitled to an opinion on these topics, which makes them susceptible to being influenced primarily by loud people and bullies in their orgs.

                1. 1

                  I don’t know if all-round, universal standardisation is possible.

                  Yeah, I argued for certification rather than standardisation for that reason. Just like a lot of things can’t easily be standardised in an objective and easily transferable format, having a trusted arbiter is probably more useful to achieve cohesion.

      3. 2

        Which means icons should(?) be a part of a localisation process too. Although it would bring a whole other set of new problems along too.

        1. 1

          I’ve just realised that icons are being partly localised already. Rich-text editors’ [B]old, [I]talic, and [U]nderline in English are [N]egrita, [C]ursiva, and [S]ubrayado in Spanish. Consequently, they have different keyboard shortcuts too.

          1. 2

            Same in Swedish-localized Office apps, which is annoying because Ctrl-F gives bold (”Fetstil”), not Find.

      4. 1

        Oh, now that you say it, I see it and that makes sense. But I didn’t know what it was supposed to be either before.

    19. 1

      Leaves me embarrassed to be in my 20s with no idea wtf is going on after overwriting the first msg_msg with a msg_msgseq