1. 25

    Go doesn’t need async/await, it has goroutines and channels.

    1. 12

      +1

      c := make(chan int)      // future
      go func() { c <- f() }() // async
      v := <-c                 // await
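
      With type parameters you could even wrap that three-liner in a tiny helper; a minimal sketch (my own naming, not any particular library’s API):

      type Future[T any] struct{ c <-chan T }

      func Async[T any](f func() T) Future[T] {
          c := make(chan T, 1)     // buffered so the goroutine never blocks
          go func() { c <- f() }() // "async"
          return Future[T]{c}
      }

      func (f Future[T]) Await() T { return <-f.c } // "await"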
      
      1. 1

        I had a negative knee-jerk reaction when I saw the async/await naming. But if you rename/rejig the api a bit, it seems a lot less bad. See: https://gotipplay.golang.org/p/IoHS5HME1bm w/context https://gotipplay.golang.org/p/Uwmn1uq5vdU

      2. 2

        this uses goroutines and channels under the hood …

        1. 8

          why do you need to abstract-away goroutines and channels?

          1. 4

            I have no idea. It’s like seeing someone recite a phone book from memory: I can appreciate the enthusiasm without understanding the why.

        2. 1

          They said Go didn’t need generics either :)

          I get your point though. Hence why almost every bit of this repo screams “experimental.” I have been just playing around with the pattern in some work/personal projects and seeing how it works ergonomically and seeing if it improves areas with lots of asynchronous operations.

          But it’s only a matter of time until more folks begin trying to abstract away the “nitty-gritty” of goroutines/channels with generics. I personally point to goroutines/channels as Go’s greatest features, but I have seen others really want to abstract them away.

          1. 4

            Goroutines and channels are there to abstract away asynchronous code.

            1. 5

              Goroutines and channels are abstractions that are a marked improvement on the state of the art prior to Go, but I find that they tend to be too low-level for many of the problems that programmers are using them to solve. Structured concurrency (or something like it) and patterns like errgroup seem to be what folks actually need.

              1. 5

                Yeah, a long time ago I also thought that one area where generics in Go could hopefully help would be in abstracting away channel patterns - things like fan-out, fan-in, debouncing, etc.
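
                A debounce helper, for instance, would only need to be written once; here is a rough sketch (my own, assuming Go 1.18 type parameters):

                import "time"

                // Debounce forwards a pending value only after the input has been
                // quiet for d; closing the input flushes and closes the output.
                func Debounce[T any](in <-chan T, d time.Duration) <-chan T {
                    out := make(chan T)
                    go func() {
                        defer close(out)
                        var (
                            pending T
                            has     bool
                            timer   <-chan time.Time // nil until something is pending
                        )
                        for {
                            select {
                            case v, ok := <-in:
                                if !ok {
                                    if has {
                                        out <- pending
                                    }
                                    return
                                }
                                pending, has = v, true
                                timer = time.After(d) // restart the quiet period
                            case <-timer:
                                out <- pending
                                has, timer = false, nil
                            }
                        }
                    }()
                    return out
                }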

                1. 2

                  honestly I just want to be able to call select on N channels where N is not known at compile time. A cool thing about promises is being able to create collections of promises. You can’t meaningfully create collections of channels. I mean sure, you can make a slice of channels, but you can’t call select on a slice of channels. select on a slice of channels is probably not the answer but is a hint at the right direction. Maybe all := join(c, c2) where all three of those values are of the same type chan T. I dunno, just spitballing, I haven’t given that much thought, but the ability to compose promises and the relative inability to compose channels with the same expressive power is worth facing honestly.

                  I actually fully hate using async and await in JS but every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                  1. 3

                    I just want to be able to call select on N channels where N is not known at compile time.

                    You can.

                    https://golang.org/pkg/reflect#Select.

                    1. 2

                      the argument that I’m making is that promises have ergonomics that channels lack, and that although I don’t think Go needs promises, the project in question is reflective of how promise ecosystems have invested heavily in ergonomics in many scenarios that Go leaves for every developer to solve on their own. Calling reflect.Select is not a solution to a problem of ergonomics, because reflect.Select is terribly cumbersome to use.
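
                      To make that concrete, this is roughly what the reflect-based version looks like (my own sketch; the generic signature assumes Go 1.18):

                      import "reflect"

                      // recvAny receives one value from whichever channel is ready.
                      func recvAny[T any](chans []<-chan T) (T, bool) {
                          cases := make([]reflect.SelectCase, len(chans))
                          for i, ch := range chans {
                              cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}
                          }
                          _, v, ok := reflect.Select(cases) // blocks until some case is ready
                          if !ok {
                              var zero T
                              return zero, false // the chosen channel was closed
                          }
                          return v.Interface().(T), true
                      }

                      It works, but compared to a plain select statement it is a lot of ceremony, and you only get your static types back at the final assertion.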

                    2. 1

                      honestly I just want to be able to call select on N channels where N is not known at compile time

                      That’s still too low-level, in my experience. And being able to do this doesn’t, like, unlock any exciting new capabilities or anything. It makes some niche use cases easier to implement but that’s about it. If you want to do this you just create a single receiver goroutine that loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.
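
                      Something like this, I mean (a sketch; owned, handle, and the sleep interval are placeholders):

                      go func() {
                          for {
                              for _, ch := range owned { // owned []<-chan int, managed only by this goroutine
                                  select {
                                  case v, ok := <-ch:
                                      if ok {
                                          handle(v) // placeholder for whatever you do with a value
                                      }
                                  default:
                                      // not ready, move on
                                  }
                              }
                              time.Sleep(time.Millisecond) // avoid spinning flat out
                          }
                      }()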

                      every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                      A channel is a primitive that must be owned and written to and maybe eventually closed by a single goroutine. It can be received from by multiple goroutines. This is just what they are and how they work. Internalize these rules and the usage patterns flow naturally from them.

                      1. 2

                        loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.

                        How do you wait with this method? Surely it’s inefficient to do this in a busy/polling loop. Or maybe I’m missing something obvious.

                        Other approaches are one goroutine per channel sending to a common channel, or reflect.Select().

                        1. 1

                          Ah, true, if you need select’s blocking behavior over a dynamic number of channels then you’re down to the two options you list. But I’ve never personally hit this use case… the closest I’ve come is the subscriber pattern, where a single component broadcasts updates to an arbitrary number of receivers, which can come and go. That’s effectively solved with the method I suggested originally.

                        2. 1

                          I’ve been programming Go for ten years. I know how channels work.

                          Promises can be composed, and that is a useful feature of promises. Channels cannot be composed meaningfully, and that is rather disappointing. The composition of channels has much to give us. Incidentally, the existence of errgroup and broadly most uses of sync.WaitGroup are the direct result of not having an ability to compose channels, and channel composition would obviate their necessity entirely.

                          What is it that sync.WaitGroup and errgroup are solving when people generally use them? Generally, these constructs are used in the situation that you have N concurrent producers. A common pattern would be to create a channel for output, spawn N producers, give every producer that channel, and then have all producers write to one channel. The problem being solved is that once a channel has multiple writers, it cannot be closed. sync.WaitGroup is often used to signal that all producers have finished.

                          This means that practically speaking, producer functions very often have a signature that looks like this:

                          func work(c chan T) { ... }
                          

                          Instead of this:

                          func work() <-chan T { ... }
                          

                          This is in practice very bothersome. In the situation that you have exactly one producer that returns a channel and closes it, you could do this:

                          for v := range work() {
                          }
                          

                          This is great and wonderfully ergonomic. The producer simply closes the channel when it’s done. But when you have N producers, where N is not known until runtime, what can you do? That signature is no longer useful, so instead you do this:

                          func work(wg *sync.WaitGroup, c chan T) {
                              defer wg.Done()
                              // do whatever, write to c but don't close c
                          }
                          
                          var wg sync.WaitGroup
                          c := make(chan T)
                          for i := 0; i < n; i++ {
                              wg.Add(1)
                              go work(&wg, c)
                          }
                          
                          done := make(chan struct{})
                          go func() {
                              wg.Wait()
                              close(done)
                          }()
                          
                          for {
                              select {
                              case <-c:
                                  // use the result of some work
                              case <-done:
                                  // break out of two loops
                              }
                          }
                          

                          That’s pretty long-winded. The producer written for the case of being 1 of 1 producer and the producer written for the case of being 1 of N producers have to be different. Maybe you dispense with the extra done channel and close c, maybe you use errgroup to automatically wrap things up for you, it’s all very similar.

                          But what if instead of N workers writing to 1 channel, every worker had their own channel and we had the ability to compose those channels? In this case, composing channels would mean that given the channels X and Y, we compose those channels to form the channel Z. A read on Z would be the same as reading from both X and Y together in a select statement. Closing X would remove its branch from the select statement. Once X and Y are both closed, Z would close automatically. Given this function, we could simply have the worker definition return its own channel and close it when it’s done, then compose all of those, and then read off that one channel. No errgroup or sync.WaitGroup necessary. Here is an example of what that would look like:

                          func work() <-chan T {}
                          
                          var c <-chan T
                          for i := 0; i < n; i++ {
                              c = join(c, work())
                          }
                          
                          for v := range c {
                              // use the result of some work
                          }
                          

                          Here is a working program that implements this concept at the library level: https://gist.github.com/jordanorelli/5debfbf8dfa0e8c7fa4dfcb3b08f9478

                          Tada. No errgroup necessary, no sync.WaitGroup, none of that. The producer is completely unaware that it is in a group and the consumer is completely unaware that there are multiple producers. You could use that producer and read its results as if it’s just one, or one of many in the exact same way.

                          It makes consuming the result of N workers much easier, it makes it so that a worker may be defined in the event that it is 1 of 1 and 1 of N in exactly the same way, and it makes it so that consumers can consume the work from a channel without any knowledge of how many producers that channel has or any coordination outside of seeing the channel closed. Of course, implementing this at the library level and not at the language level means adding an overhead of additional goroutines to facilitate the joining. If it could be implemented at the language level so that joining N channels into 1 does not require N-1 additional goroutines, that would be neat.

                          This implementation is also subtly broken in that composing X and Y to form Z makes it so that you can’t read off of X and Y on their own correctly now; this is not a full implementation, and there’s certainly a question of implementation feasibility here.
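
                          For reference, a rough library-level join in the same spirit (my own approximation rather than the gist’s exact code; assumes Go 1.18 generics):

                          import "sync"

                          func join[T any](chans ...<-chan T) <-chan T {
                              out := make(chan T)
                              var wg sync.WaitGroup
                              for _, ch := range chans {
                                  if ch == nil {
                                      continue // lets the loop above start from a nil channel
                                  }
                                  wg.Add(1)
                                  go func(ch <-chan T) {
                                      defer wg.Done()
                                      for v := range ch {
                                          out <- v // forward until ch is closed
                                      }
                                  }(ch)
                              }
                              go func() {
                                  wg.Wait()
                                  close(out) // close only once every input has closed
                              }()
                              return out
                          }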

                          1. 1

                            Channels cannot be composed

                            I don’t think I agree. It’s straightforward to build higher-order constructs from goroutines and channels as long as you understand that a channel must be owned by a single producer.

                            The problem being solved is that once a channel has multiple writers, it cannot be closed.

                            It doesn’t need to be closed. If you have 1 channel receiving N sends, then you just do

                            c := make(chan int, n)
                            for i := 0; i < cap(c); i++ {
                                go func() { c <- 123 }()
                            }
                            for i := 0; i < cap(c); i++ {
                                log.Println(<-c)
                            }
                            

                            This means that practically speaking, producer functions very often have a signature that looks like func work(c chan T) { ... }

                            Hopefully not! Your worker function signature should be synchronous, i.e.

                            func work() T
                            

                            and you would call it like

                            go func() { c <- work() }()
                            

                            Or, said another way,

                            go work(&wg, c)

                            As a rule, it’s a red flag if concurrency primitives like WaitGroups and channels appear in function signatures. Functions should by default do their work synchronously, and leave concurrency as something the caller can opt-in to.

                            But what if . . .

                            If you internalize the notion that workers (functions) should be synchronous, then you can do whatever you want in terms of concurrency at the call site. I totally agree that goroutines and channels are, in hindsight, too low-level for the things that people actually want to do with them. But if you bite that bullet, and understand that patterns like the one you’re describing should be expressed by consumers rather than mandated by producers, then everything kind of falls into place.

                            1. 1

                              It’s clear that you didn’t read the gist. Your example falls apart immediately when the workers need to produce more than one value.

                              Your worker function signature should be synchronous, i.e. func work() T

                              That’s not a worker. It’s just a function; a unit of work. That’s not at all the problem at hand and has never been the problem at hand. Maybe try reading the gist.

                              The workers in the gist aren’t producing exactly 1 output. They’re producing between 0 and 2 outputs. The case of “run a function N times concurrently and collect the output” is trivial and is not the problem at hand.

                              The workers are producing an arbitrary number of values that is not known in advance to the consumer. The workers are not aware that they’re in the pool. The consumer is not aware that they’re reading from the pool. There is nothing shared between producers to make them coordinate, and nothing shared with consumers to make them coordinate. There is no coordination between producers and consumers at all. The consumer is not aware of how many workers there are or how many values are produced by each worker, they are only interested in the sum of all work. The workers simply write to a channel and close it when they’re done. The consumer simply reads a channel until the end. That’s it. No errgroup requiring closures to implement the other half of the pattern, no sync.WaitGroup required to manually setup the synchronization. Just summing channels. The case of 1 worker and 1 consumer is handled by a worker having signature func f() <-chan T. The case of 1 worker and N consumers, N workers and 1 consumer, and N workers and M consumers are all handled with the same worker signature, with no additional coordination required.

                              1. 1

                                It’s clear that you didn’t read the gist.

                                I mean, I did, I just reject the premise :)

                                That’s not a worker. It’s just a function; a unit of work

                                Given work is e.g. func work() T then my claim is that a “worker” should be an anonymous function defined by the code which invokes the work func, rather than a first-order function provided by the author of the work func itself.

                                The workers are producing an arbitrary number of values that is not known in advance to the consumer . . . the consumer simply reads a channel until the end.

                                Channels simply don’t support the access pattern of N producers + 1 consumer without a bit of additional code. It’s fair to criticize them for that! But it’s not like the thing is impossible; you just have to add a bit of extra scaffolding on top of the primitives provided by the language.
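
                                The scaffolding in question is on the order of this (a sketch; n, produce, and use are placeholders):

                                c := make(chan int)
                                var wg sync.WaitGroup
                                for i := 0; i < n; i++ {
                                    wg.Add(1)
                                    go func() {
                                        defer wg.Done()
                                        produce(c) // placeholder: sends some values on c, never closes it
                                    }()
                                }
                                go func() {
                                    wg.Wait()
                                    close(c) // safe: every producer has finished sending
                                }()
                                for v := range c {
                                    use(v) // placeholder consumer
                                }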

                  2. 2

                    I think generics will make channels much easier to use correctly. The shenanigans required to handle cancellation, error reporting, fan-out with limits, etc etc etc mean that very few programs handle the edge cases around goroutines. Certainly when I wrote Go, I wouldn’t follow the patterns needed to prevent infinite goroutine leaks, and often I’d decide to panic on error instead of figuring out how to add error channels or result structs with a nil error pointer, etc.

                    What I like about Promise is that it’s a Result[T] - but ideally I’d be able to get the composition boilerplate of structured CSP stuff out of the way with generics instead of adopting the Promise model wholesale.

                    (My history: I loved writing go for many years but eventually burned out from all the boilerplate and decided to wait for generics)
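
                    Something as small as this already captures a lot of what I mean by Result[T] (a sketch, assuming Go 1.18 generics):

                    type Result[T any] struct {
                        Value T
                        Err   error
                    }

                    // run wraps a fallible function so the caller gets one channel of
                    // Result[T] instead of hand-rolling an error channel per call site.
                    func run[T any](f func() (T, error)) <-chan Result[T] {
                        c := make(chan Result[T], 1)
                        go func() {
                            defer close(c)
                            v, err := f()
                            c <- Result[T]{Value: v, Err: err}
                        }()
                        return c
                    }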

                1. 1

                  One of my “not enough time to fit it into this one life probably” ideas from some time ago already is to try and make an email client so good that ~everyone would be using it (or at least enough people to start having a critical mass of social leverage), and then do a “constructive EEE maneuver” of upgrading everyone to some better protocol. Yes, I told you, “somewhat” big idea, including the fact that nobody is really even sure how a “better email” could look yet; but no need to worry about that later step at all for me for now, if I already don’t expect to have enough time to even try writing this better email client. (Including, but not limited to, due to me having some other ideas too, that have a somewhat, if not necessarily much, bigger chance of actually panning out…)

                  1. 5

                    Planning the next release of https://bupstash.io/ / https://github.com/andrewchambers/bupstash . I am way behind schedule but want to do a closed beta of managed repositories.

                    Also thinking a bit about a new programming language I want to work on for fun - A combination of standard ML drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

                    I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

                    So many things to do - clearly I need to avoid spending time on the less important things - I just have trouble reining in what my mind is currently obsessing over.

                    1. 2

                      Also thinking a bit about a new programming language I want to work on for fun - A combination of standard ML drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

                      Do you mean you want to reimplement StandardML but on top of a Janet-like runtime? Or is there something specific to Janet which can influence the SML language itself?

                      I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

                      I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

                      Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

                      Some time ago I was working on designing and implementing esy which is a package manager + meta build system (invoking package specific build system in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has integrated SAT solver for solving deps, we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

                      Personally, I think there’s a real opportunity to make a “lightweight” version of Nix/Guix which could be used widely, and Hermes seems to be aimed at this exact spot.

                      1. 1

                        Do you mean you want to reimplement StandardML but on top of a Janet-like runtime? Or is there something specific to Janet which can influence the SML language itself?

                        Mostly the way janet has great CFFI, plus a few things like compile time evaluation and the language compilation model. I also enjoy how janet can be distributed as a single amalgamated .c file like sqlite3. My main criticism of janet is perhaps the lack of static types - and standard ML might be one of the simplest ‘real’ languages that incorporates a good type system, so I thought it might be a good place to start for ideas.

                        I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

                        Yeah, the way it complements C is something I would love to capture. I am not familiar with modular implicits at all - but it sounds interesting!

                        Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

                        Yes and no - p2pkgs is an experiment to answer the question - ‘what if we combined ideas from Nix with something like homebrew in a simple way?’ I think the answer is something quite compelling but it still has a lot of tweaking to get right. p2pkgs uses a more traditional package model - so far less patching is needed to build packages - and it is also conceptually easier to understand than nix/hermes - while providing a large portion (but not all) of the benefits. You could consider p2pkgs like an exploratory search of ideas for ways to improve and simplify hermes. The optional p2p part was kind of an accident that seems to work so well in practice that I feel it is also important in its own way.

                        1. 1

                          while providing a large portion (but not all) of the benefits

                          Could you possibly elaborate on which benefits are carried over, and which are not? I’m really interested in your explorations in this area of what I see as “attempts to simplify Nix”, but in this particular case, to the extent I managed to understand the repository, it’s currently very unclear to me what it really brings over just using redo to build whatever packages? Most notably, the core benefits I see in Nix (vs. other/older package managers), seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment), “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of helper package. Does p2pkgs have any/all of those? It’s ok if not, I understand that this is just a personal exploration! Just would like to try and understand what’s going on there :)

                          1. 2

                            seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment)

                            Yes it does, builds are performed in an isolated sandbox and use none of the host system.

                            “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of helper package.

                            Packages are currently used via something I called a venv; this is more like nix shell, so it has clean uninstalls. Each venv can use different versions of packages, but within a venv you cannot - this is one of the downsides.

                            it’s currently very unclear to me what it really brings over just using redo to build whatever packages?

                            It uses redo + isolated build sandboxes + hashing of the dependency tree in order to provide transparent build caching; this is not so far removed from NixOS, which is why I feel NixOS might be over-engineered.

                            One thing p2pkgs does not have is atomic upgrades/rollback unless it is paired with something like docker.

                            All that being said, I think I oversimplified it to the point where the UX is not as good as it should be, so I hope to shift it back a bit to look a bit more like the nix cli - I think that will make things more clear.

                            1. 1

                              Thanks! I was confused about how redo works (I had some wrong assumptions); now I’m starting to understand that the main entry point (or, core logic) seems to be in the pkg/default.pkg.tar.gz.do file. I’ll try to look more into it, though at a first glance it doesn’t seem super trivial to me yet.

                              As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                              Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                              Back to “input completeness”: could you help me understand how/where can I exactly see/verify that e.g. a specific Linux kernel version was used to build a particular output? similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used? (Or, can clearly see that changing one of those will result in a different output?) Please note I don’t mean this as an attack; rather still trying to understand better what am I looking at, and also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                              1. 1

                                As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                                Like NixOS, it uses containers to build packages; they are not needed to use packages - but they are helpful

                                Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                                The build inputs are basically put into a chroot with the host system /dev/ added with a bind mount - this is quite similar to nixos - You can see it in default.pkg.tar.do

                                Back to “input completeness”: could you help me understand how/where can I exactly see/verify that e.g. a specific Linux kernel version was used to build a particular output?

                                Nixpkgs does not control the build kernel, not sure why you seem to think it does. Regardless - You can run redo pkg/.pkghash to compute the identity of a given package - which is the hash of all the build inputs including build scripts - again, much like nix. I suppose to control the build kernel we could use qemu instead of bwrap to perform the build. To see the inputs for a build you can also inspect the .bclosure file which is the build closure.

                                similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used?

                                Env variables are cleared - this can be seen by the invocation of bwrap, which is a container - much like nixpkgs. I get the impression you might be misunderstanding the trust model of NixOS - NixOS lets you run package builds yourself - but it still relies on signatures/https/trust for the binary package cache - you can’t go back from a given store path and work out the inputs - you can only go forward from an input to verify a store path.

                                also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                                The entire implementation is probably less than 700 lines of shell; I think you should be able to read them all - especially default.pkghash.do and default.pkg.tar.gz.do.

                                1. 1

                                  Thank you for your patience and bearing with me! I certainly might misunderstand some things from NixOS/Nixpkgs - I guess it’s part of the allure of p2pkgs that its simplicity may make those things easier to understand :) Though also part of my problem is that I’m having trouble expressing some things I’m thinking about here in precise terms, so I’d be super grateful if you’d still fancy having some more patience with me as I try to continue searching for better precision of expression! And sorry if my questions are still confused or not precise enough…

                                  Like NixOS, it uses containers to build packages; they are not needed to use packages - but they are helpful

                                  Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                                  The build inputs are basically put into a chroot with the host system /dev/ added with a bind mount - this is quite similar to nixos - You can see it in default.pkg.tar.do

                                  What I was asking here about was the “Running packages in venv” section - that’s where the “container with top level directories substituted (…)” sentence is used in p2pkgs readme. In other words: I’m trying to understand how during runtime any “runtime filesystem dependencies” (shared libraries, etc.; IIRC that’d be buildInputs in nixpkgs parlance) are merged with “host filesystem”. I tried reading bwrap’s docs in their repo, but either I couldn’t find the ultimate reference manual, or they’re just heavily underdocumented as to precise details, or they operate on some implicit assumptions (vs. chroot? or what?) that I don’t have.

                                  In other words: IIUC (do I?), p2pkgs puts various FHS files in the final .pkg.tar.gz, which do then get substituted in the chroot when run with venv (that’s the way buildInputs would be made available to the final build output binary, no?). For some directory $X present in .pkg.tar.gz, what would happen if I wanted to use the output binary (say, vim), run via venv, to read and write a file in $X on host machine? How does the mechanism work that would decide whether a read(3) sees bytes from $X/foo/bar packed in .pkg.tar.gz vs. $X/foo/baz on host machine’s filesystem? Or, where would bytes passed to write(3) land? I didn’t manage to find answer to such question in bwrap’s docs that I found till now.

                                  Do I still misunderstand something or miss some crucial information here?

                                  Nixpkgs does not control the build kernel, not sure why you seem to think it does. (…)

                                  Right. I now realize that actually in theory the Linux kernel ABI is stable, so I believe what I’m actually interested in here is libc. I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                                  Env variables are cleared (…)

                                  Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                                  1. 1

                                    Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                                    It replaces /bin /lib but keeps the rest of the host filesystem when you run the equivalent to a nix shell. This seems to work fine and lets you run programs against the host filesystem. This works because on modern linux kernels you can create containers and do bind mounts without root.

                                    If we designed a package installer tool (and a distro?), it would also be possible to just install them like an alpine linux package.

                                    I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                                    You can rebuild the seed image using the package tree itself; the seed image is reproducible, so you can check that the output seed is the same as the input seed. You need to trust me initially, though, before you produce your own seed.

                                    Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                                    That’s right :).

                        2. 1

                          Some time ago I was working on designing and implementing esy which is a package manager + meta build system (invoking package specific build system in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has integrated SAT solver for solving deps, we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

                          I feel like it should be possible to use the same solver ideas or something like go MVS in order to make a distributed package tree - this is another idea I really want to try to integrate in something simpler than Nix. I agree that it seems like a great thing and I definitely want it to be built.

                          edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.

                          1. 1

                            I feel like it should be possible to use the same solver ideas or something like go MVS in order to make a distributed package tree

                            The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

                            Ideally there should be some “calculus of package dependencies” developed which could be used as an interface between depsolver and a “metabuildsystem”. That way the same depsolver could be used with nix/hermes/esy/… Not sure how doable it is though — people don’t like specifying dependencies properly but then they don’t like to have their builds broken either!

                            edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.

                            Keep in mind that we had our own set of constraints/goals to meet:

                            • esy is usable on Linux/macOS/Windows (for example we ship Cygwin on Windows, this is transparent to the users)
                            • esy uses npm as a primary package registry (appeal to people who know how to publish things to npm, an open world approach to managing packages)
                            • esy doesn’t require administrator access to be installed/used (the built artefacts are inside the home directory)
                            • esy strives to be compatible with OCaml ecosystem thus the depsolver is compatible with opam constraints and esy can install packages from opam
                            1. 1

                              The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

                              This is much like in a general package tree - statically linked programs really don’t care - but some programs don’t support static linking, or provide dynamic libraries.

                              Another challenge is when you don’t have a monolithic repository of all packages you now have double the versioning problems to tackle - Each version is really two - the package version and the packaged software version.

                              My goals for a general package tree are currently:

                              • Linux only (it worked for docker).
                              • Doesn’t require administrator for building or using packages.
                              • Allows ‘out of tree packages’ or combining multiple package trees.
                              • Transparent global build caching from trusted sources (like nixos, I don’t think esy has this).
                      1. 7

                        I didn’t know about command -v before, or that which isn’t POSIX! With that said, and sorry if that’s a stupid question that was already considered and discussed extensively (please kindly disregard it then!): would it be possible to just make which an alias/one-liner resolving to command -v and be done with the issue?

                        1. 7

                          Not sure about the Debian version, but there are a few differences with the FreeBSD one that may matter:

                          If you run command -v with a shell builtin then it will print the command name with no path. In contrast, which doesn’t know what builtins your shell supports and so will print the path. This means that you’d get slightly inconsistent output, for example:

                          $ which command
                          /usr/bin/command
                          $ command -v command
                          command
                          

                          I don’t know how much that would matter in practice. The fun thing after that was that the stand-alone command binary actually does have a list of known shell builtins, but it appears to be limited to the POSIX set (it doesn’t know what my shell is), and the bash built-in one doesn’t understand the backslash-escaping that normally lets you run the stand-alone version instead:

                          # Bash knows time is a builtin
                          $ command -v time
                          time
                          # But the system one doesn't
                          $ /usr/bin/command -v time
                          /usr/bin/time
                          # And backslash escaping doesn't seem to make bash run the binary
                          $ \command -v time
                          time
                          

                          Again, I don’t know the extent to which scripts depend on this. The last one was interesting to me. I learned that shell builtins and shell keywords are in some way different.

                          The other minor issue is that which has arguments (at least on FreeBSD): -a prints all locations of a thing, and -s doesn’t print output, it just sets the return status. These could both be handled by a shell function, but they’re not quite one-liners.

                          1. 3

                            As someone who wrote a which for laughs once: GNU which actually supports having something feed it built-ins via stdin. I didn’t bother with that, and mine is basically somewhere around FreeBSD’s feature-wise.

                        1. 3

                          Oh, that is a really interesting idea!

                          Some thoughts that came to my mind regarding some of the points raised:

                          - one thing that’s not super clear to me, is the situation around images; I read somewhere that git is not great for storing big binary files, though I don’t know details about that; is that true? does anyone know more? (i.e. why things like “git-lfs” were created? would that be a problem if I wanted to host images too over such git-like protocol?) I’m notably especially interested in image galleries - could they work well enough here or would that be a problem?

                          - kinda 2nd level off the point above: if images are ok enough due to being small enough, that would be cool already, and allow blogging and a lot other stuff like scientific articles or image galleries; with that said, what if I also wanted to host e.g. binaries of some programs I compiled? would that make git unfeasible already, or not necessarily, or not at all? how about scientific datasets? this point is kinda “non-critical” I think, but I’m just curious how far this could get;

                          - I wonder about “blog comments” and other “user-submitted content” - including “chats” as in SSB; but I guess it could possibly be actually done by means of git commits authored by different users - with authorship presumably verified by git signing; interestingly, this can “automatically” piggyback on git merge resolution etc. workflow;

                          - I’m also wondering about the “how to get newest content” challenge that the author also mentions; though I guess SSB is at a similar place, and their answer seems to be mostly “sneakernet + signing” in theory (which seems like it could work with git as well); this possibly ties somewhat into the URL question posed later in the post, which makes me think of a possibility for a kinda “IPFS-like” piggybacking on (an) existing URI scheme(s) - i.e., something like: https://[best-effort-url]/gitweb/[commit-hash]/[file-path], and/or similarly gemini://[best-effort-url]/gitweb/[commit-hash]/[file-path] (or however gemini: URIs are composed, I don’t know them well enough); though the idea of magnet links is certainly also interesting;

                          - kinda related & intertwined with the above point, I’m kinda wishing there was possibility for some equivalent of IPNS here; for now, there’s no clear one to me (also I don’t know much about how IPNS works), and all that I can think of is public GPG (or whatsit) key being the “hook” to fetch data by; that makes me wonder if a GPG public key should be included in the above proposed “/gitweb/” URI approach 🤔;

                          - interestingly, AFAIR git provides an option of --depth 1 when cloning, which can kinda undermine the “you always archive everything” notion; which can be arguably in a way good for clients (for not having to store all history), but is a subtle thing to consider when discussing this, I think; regardless, I think there’s no way to protect against some clients trying to “optimize” by just doing --depth 1, so this would need to be discussed at least to some extent;

                          - interestingly, I believe there are already explorations (or even ready to use apps) of how to store & share git repositories over IPFS, DAT/hypercore, SSB, etc; so this “gitweb” approach can kinda “transparently”/easily benefit off those I believe;

                          - I’m not 100% sure there’s “no possibility of tracking which posts you downloaded”; I don’t know specifics of git protocol well enough, but I would say it needs to be verified whether it allows just downloading specific hashed blobs, or mandates downloading whole commits; I wouldn’t be very surprised if it allowed the client to be picky and just clone some files/blobs (“objects” in git parlance I think?); but maybe it doesn’t allow that - I just don’t know; but then if it did, again similar situation as with --depth 1 - it would have some notable pros and cons;

                          Does anyone know if the author of this post has a lobste.rs account if I wanted to try passing this to them? or should I try contacting them via email? I imagine they may be already aware of the aspects I wrote about above, but maybe not, or some of them might be still interesting to them.

                          edit:

                          - a somewhat “nuclear” way of “disowning” (and thus kinda “erasing”) one’s past posts could be by publishing one’s private keys for the particular blog/repo - obviously, assuming they were not used for something else too; it’s certainly arguable if this would be a good enough way, but I think it’s a possibly interesting idea; now this is a vague idea from me, I’m not yet sure if there’s a way to “safely” (at least to some extent) “jump” to a rewritten git history via some alternative set of GPG keys to keep staying in touch with people (could maybe one have a private “master key” that would be used to sign “blog keys”, and those would then be interchangeable? kinda like https/ssl root certs system? or is this already how GPG is typically used?)

                          1. 3

                            Hi,

                            I’m the author of the original gemlog post which prompted the discussion and I’m in direct contact with the author of this post. I will let him/her know that there are comments to read here.

                            1. 1

                              Re: images, it’s not really a big focus for gemini and gopher, from which it sprang. Text is where it’s at!

                              About contacting the author, maybe find their email from the Gemini mailing list, or get your own gemini site, publish a piece titled “RE: ” (as in the link I posted upstream), and submit it to an aggregator like Antenna.

                            1. 9

                              However, the core of software development is reading, navigating, understanding, and at the end of the day writing code.

                              Everything the author is doing makes sense to me, but I find this assertion interesting. In my experience, the core of (high-level) software development is thinking more than any activity involving the actual code. Solving the problem is the difficult/time consuming bit, while the actual implementation of the solution is usually more routine. I’ve seen developers often fall into the trap of starting to write code before they understand what they’re actually trying to implement - I like to remind them that their value is in solving problems through code not writing code itself.

                              Implementation is of course an important part of being able to solve those problems. Necessary, but not sufficient, is perhaps a good way to phrase it.

                              I don’t think this really takes away from the usefulness of the tools and activities presented, more of a philosophical thought.

                              1. 1

                                Have you seen Bret Victor’s “Inventing on Principle” talk, by chance?

                                <enso-fanboi mode="on" disclaimer="&and-soon-to-be-employed;" />
                                Have you seen Enso (née Luna)?

                                1. 1

                                  I haven’t, but it seems interesting. Will check it out.

                              1. 3

                                To be the stereotypical Lobste.rs Nix enthusiast, you can also do this sort of thing with Nix, though it might require extra work to slim down the closure size.

                                Maybe I’ll try seeing how small of a NixOS image I can make as a weekend project.

                                  EDIT: Hm, this depends on whether you can actually build a system on musl; systemd won’t work, so it would have to use a different init system, which complicates things. It looks like nixwrt is trying this sort of thing - it’s not actually NixOS, but that doesn’t really matter since it has different goals than NixOS proper.

                                1. 1

                                  A basic install of Alpine Linux, which uses musl libc, is something around 8MB (IIRC, but without the kernel). It uses openrc for the init system and busybox.

                                  edit: the alpine docker container page claims the rootfs is 5MB: https://hub.docker.com/_/alpine

                                  1. 1

                                    See also, possibly, https://github.com/cleverca22/not-os; I’m not sure how much that one fits the bill, but at least some parts may be steal-uhm, I mean, reuse-worthy. I’m pretty sure I also stumbled upon some other attempt at bootstrapping an alternative to Nixpkgs for “embedded-like” environments; but I can’t seem to be able to find it in my GH stars nor bookmarks 😞 (I think it was something else yet than nixwrt; or was it? 🤔)

                                  1. 3

                                    The codepage map seems limited to only 16-bit Unicode codepoints; this could be an issue for some content? Doesn’t seem mentioned in the limitations rationale, so it’s not clear if this is by design or an oversight?

                                    1. 11

                                      So, for example, if you want to write a function that modifies a slice, you might instinctively write

                                      func modify[T any](in []T) []T
                                      

                                        This will take in any slice, but if you define a type like type MySlice []int, when you call modify with a MySlice you will get back a plain []int. The answer is to instead write

                                      func modify[S constraints.Slice[T], T any](in S) S
                                      

                                      This is significantly more of a pain in the ass to write, so I think a lot of code that doesn’t need to work with named types will end up using the simpler []T syntax instead.

                                      1. 5

                                          Agreed! However, there’s an open proposal under consideration that makes this a lot nicer, so hopefully it gets accepted. It will let you write constraints.Slice[T] as just ~[]T inline, so it will look like this for your second version:

                                        func modify[S ~[]T, T any](in S) S
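
                                          To see the difference the constraint makes, here is a tiny sketch (spelled with the proposed ~[]T elision; today you would write interface{ ~[]T }):

                                          package main

                                          import "fmt"

                                          type MySlice []int

                                          // plain []T: the named type is lost, the result is always []T
                                          func modifyPlain[T any](in []T) []T {
                                              out := make([]T, len(in))
                                              copy(out, in)
                                              return out
                                          }

                                          // ~[]T constraint: the named type S is preserved
                                          func modifyKeep[S ~[]T, T any](in S) S {
                                              out := make(S, len(in))
                                              copy(out, in)
                                              return out
                                          }

                                          func main() {
                                              s := MySlice{1, 2, 3}
                                              fmt.Printf("%T\n", modifyPlain(s)) // []int
                                              fmt.Printf("%T\n", modifyKeep(s))  // main.MySlice
                                          }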
                                        
                                        1. 3

                                          I read the earlier version of that proposal and was very skeptical of it, but this version looks good. The point about interface{X} being equivalent to interface{interface{X}} is convincing that it’s a good idea.

                                          1. 2

                                            Interesting, thanks for the link!

                                            To clarify a bit for people (like me) who are roughly aware of the coming generics proposal in Go, but not deep in the trenches of all the discussions, syntax, and recent changes — IIUC:

                                            • func modify[S constraints.Slice[T], T any](in S) S — under other currently accepted (?) proposals is a “more ergonomic wrapper” (?) for arguably more verbose (yet IIUC syntactically correct as well?) syntax as below:
                                            • func modify[S interface{~[]T}, T any](in S) S; whereas:
                                            • func modify[S ~[]T, T any](in S) S — is the currently proposed shorter syntax for the above “expanded/verbose” syntax (in accordance with the current title of the proposal being: “allow eliding interface{ } in constraint literals”).

                                            Notably, the new proposal, after a tweak making it what I describe above (vs. an earlier version), quickly got active interest from the Core Team, with a CL by them being promised, which I see as a good (though not 100%) sign that it has a solid chance of being accepted, in this way or another (in particular, possibly leading to some other, surprising yet quite cool, improvement).

                                        1. 31

                                          Following up on the article, I found an interesting comment debating it in a nuanced-sounding way on the orange site.

                                          1. 1

                                            Kind of unrelated, but what is the progress on Go 2?

                                            1. 4

                                              IIUC the Go team have decided to introduce new language features without breaking backward compatibility for now.

                                              1. 4

                                                Though, there’s some interesting discussion on how to update [standard library] APIs [like sync.Map] for generics, which may or may not lead to introduction of some generic way of versioning, that could possibly allow gradually introducing more “Go2”-like “breaking changes”.

                                              2. 1

                                                  A number of changes that were previously classified under “go 2” have now been implemented in backwards-compatible ways, generics being the most well-known example. Other examples are the new //go:build directives (the old // +build will keep working indefinitely though), binary literals, _ to separate digits in numbers (1_000), and a few others.

                                                  There was never any concrete plan for “Go 2”; it was little more than a placeholder for language changes and incompatible changes. It still is, especially for incompatible changes to either the language or the standard library.
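
                                                  For instance, a couple of those already-shipped changes look like this (illustrative snippet):

                                                  //go:build linux
                                                  // +build linux

                                                  package example

                                                  const (
                                                      mask  = 0b1010_1100 // binary literal with digit separators
                                                      limit = 1_000_000   // underscores in decimal literals too
                                                  )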

                                              1. 13

                                                Genuine comment (never used Nix before): is it as good as it seems? Or is it too good to be true?

                                                1. 51

                                                  I feel like Nix/Guix vs Docker is like … do you want the right idea with not-enough-polish-applied, or do you want the wrong idea with way-too-much-polish-applied?

                                                  1. 23

                                                    Having gone somewhat deep on both this is the perfect description.

                                                    Nix as a package manager is unquestionably the right idea. However, Nix the language itself made some choices that are regrettable in practice.

                                                    Docker works and has a lot of polish but you eat a lot of overhead that is in theory unnecessary when you use it.

                                                  2. 32

                                                    It is really good, but it is also full of paper cuts. I wish I had this guide when learning to use nix for project dependencies, because what’s done here is exactly what I do, and it took me many frustrating attempts to get there.

                                                    Once it’s in place, it’s great. I love being able to open a project and have my shell and Emacs have all the dependencies – including language servers, postgresql with extensions, etc. – in place, and have it isolated per project.

                                                    1. 15

                                                       The answer depends on what you are going to use Nix for. I use NixOS as my daily driver. I am running a boring Plasma desktop. I’ve been using it for about 6 years now. Before that, I used Windows 7, a bit of Ubuntu, a bit of macOS, and Arch. For me, NixOS is a better desktop than any of the others, by a large margin. Some specific perks I haven’t seen anywhere else:

                                                       NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware: I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                                       NixOS allows messing with things safely. That’s a subset of the previous point. In Arch, if I installed something temporarily, it inevitably left some residue on the system. With NixOS, I install random one-off software all the time, and I often mix stable, unstable, and head versions of packages; that just works and is easily rolled back via an entry in the boot menu.

                                                      NixOS is declarative. I store my config on GitHub, which allows me to hop physical systems while keeping the OS essentially the same.

                                                      NixOS allows per-project configuration of environment. If some project needs a random C++ package, I don’t have to install it globally.

                                                      Caveats:

                                                      Learning curve. I am a huge fan of various weird languages, but “getting” NixOS took me several months.

                                                       Not everything is managed by NixOS. I can use configuration.nix to say declaratively that I want Plasma and a bunch of applications, but I can’t use NixOS to configure Plasma’s global shortcuts.

                                                       Running random binaries from the internet is hard. On the flip side, packaging software for NixOS is easy — unlike with Arch, I was able to contribute updates to the packages I care about, and I even added one new package.

                                                      1. 1

                                                         NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware: I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                                        How do you deal with patches for security issues?

                                                        1. 8

                                                           I don’t do anything special, just run the “update all packages” command from time to time (I use the rolling-release version of NixOS, misnamed “unstable”). NixOS is unbreakable not because it is frozen, but because changes are safe.

                                                           NixOS is like git: you can make a mess of your workspace without fear, because you can always reset to a known-good commit SHA. User-friendliness is also at about git’s level, though.

                                                          1. 1

                                                             Ah, I see. That sounds cool. Have you ever found an issue when updating a package, rolled back, and then taken the trouble to sift through the changes to take the patch-level updates but not the minor or major versions, etc.? Or do you just try updating again after some time to see if somebody has fixed it?

                                                            1. 4

                                                               In case you are getting interested enough to start exploring Nix, I’d personally heartily recommend also exploring the Nix Flakes “new approach”. I believe it fixes most of the pain points of “original” Nix, with two exceptions not addressed by Flakes: secrets management (which will have to wait for another time), and documentation quality (which for Flakes is currently at an even poorer level than that of “Nix proper”).

                                                              1. 2

                                                                 I didn’t do exactly that, but when I was using the non-rolling release, I combined a base system of older packages with a couple of packages that I kept up to date manually.

                                                        2. 9

                                                          It does what it says on the box, but I don’t like it.

                                                          1. 2

                                                             I use NixOS, and I really like it, relative to how I feel about Unix in general, but it is warty. I would definitely try it, though.

                                                          1. 2

                                                             Interesting idea; not 100% sure I’m convinced yet, but certainly food for thought; maybe it’s just so new that it needs to sink in. That said, one thing I’m quite surprised by is that the author suggests a “nitpick:” prefix should mean the comment is ”(…) necessary” — whereas, notably, the Google Code Review Guide suggests using “Nit:” to prefix a comment that ”(…) isn’t mandatory”. Personally, I quite often prefix review comments with a somewhat long [optional][nit][style]; I’m not quite sure how I’d be expected to mark that with the OP’s proposed approach — nitpick(non-blocking):? Hm; on further thought, if I were to use this, I’d be tempted to bikeshed a bit over shortening the “decorations” from non-blocking/blocking to opt/req (thus e.g. nitpick(opt): and nitpick(req):), and also to bikeshed (if-minor) into (if-easy).

                                                            1. 2

                                                              In the push(task) in “Going lock-free…” section, there seems to be a reference to a t that is not defined anywhere?

                                                               Also, I’m not sure I understand how the push is expected to work without overwriting another concurrent push; shouldn’t the tail be incremented first, and only then the task stored? Or is there something else I’m missing? But maybe things will clear up once the issue of the missing t is fixed? edit: OK, I get this part now; I forgot that this is a thread-local queue where only a single producer can add things, so there are no races between pushes.

                                                               edit 2: As to the missing t, based on a fragment of the linked source repository, I assume t is intended to be basically equivalent to tail, which works given that, again, only the single producer will ever write to it.

                                                              1. 2

                                                                 Nice catch. Looks like the post is full of grammatical errors. Also, yes: the t is the tail, and it can be loaded and stored without RMW synchronization, given that the queue is single-producer (SPMC).
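
                                                                 To make the single-producer argument concrete, here’s a rough Go sketch of the general SPMC ring-buffer idea being described (illustrative only: the article’s own code isn’t Go, and all the names here are mine):

                                                                 package spmc

                                                                 import "sync/atomic"

                                                                 // Queue is a bounded single-producer / multi-consumer ring buffer.
                                                                 type Queue struct {
                                                                     head   uint32      // advanced by consumers via CAS
                                                                     tail   uint32      // written only by the single producer
                                                                     buffer [256]func() // tasks; power-of-two size keeps the modulo cheap
                                                                 }

                                                                 // Push may only be called from the owning (producer) goroutine, which is
                                                                 // why tail needs no read-modify-write: we are its only writer.
                                                                 func (q *Queue) Push(task func()) bool {
                                                                     head := atomic.LoadUint32(&q.head)
                                                                     tail := q.tail // plain read is fine: single producer
                                                                     if tail-head >= uint32(len(q.buffer)) {
                                                                         return false // queue is full
                                                                     }
                                                                     q.buffer[tail%uint32(len(q.buffer))] = task
                                                                     atomic.StoreUint32(&q.tail, tail+1) // publish the slot to consumers
                                                                     return true
                                                                 }

                                                                 // Pop can be called from any consumer; head is advanced with CAS so two
                                                                 // consumers can never claim the same slot.
                                                                 func (q *Queue) Pop() (func(), bool) {
                                                                     for {
                                                                         head := atomic.LoadUint32(&q.head)
                                                                         tail := atomic.LoadUint32(&q.tail)
                                                                         if head == tail {
                                                                             return nil, false // empty
                                                                         }
                                                                         task := q.buffer[head%uint32(len(q.buffer))]
                                                                         if atomic.CompareAndSwapUint32(&q.head, head, head+1) {
                                                                             return task, true
                                                                         }
                                                                         // Lost the race with another consumer; retry with a fresh head.
                                                                     }
                                                                 }

                                                                 One subtlety worth flagging: Pop may speculatively read a slot the producer is about to recycle; the value is discarded when the CAS fails, but the plain slot accesses are the kind of thing Go’s race detector may flag, which is why real implementations reason about this at a lower level.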

                                                              1. 9

                                                                   Another language (albeit higher-level than Zig) that has a great C interop story is Nim. Nim makes it really simple to wrap C libraries for use in Nim.

                                                                1. 6

                                                                  One thing that I really like about Zig is how good it is going the other way: making libraries that can be called from C easily (and thence other languages). How does Nim handle that case? It’s an often neglected case.

                                                                  1. 4

                                                                      I used Nim to write a library callable via JNI on Android (https://github.com/akavel/hellomello), totally fine. The current version of the project (unfortunately I believe it’s somewhat bitrotten now) is macroified, but an earlier iteration showed clearly how to do that by hand.

                                                                    1. 2

                                                                        While I doubt anyone uses it, my very fancy ‘ls’ has a shared-library (.so) extension system where you can build the .so’s either in Nim or in C and load them up either way. A C program could similarly load them up, no problemo, with dlopen & dlsym. That extension/.so system may serve as a coding example for readers here.

                                                                      1. 1

                                                                        It should be fairly easy, though I can’t attest to that through personal experience. The GNUNet project calls Nim code from their C code IIRC.

                                                                    1. 2

                                                                       Am I the only one more than a little freaked out by this throwaway comment…?

                                                                      “explore online using sql.js, sqlite3 compiled to webassembly”

                                                                      1. 2

                                                                        What about it freaks you out?

                                                                        1. 4

                                                                           My mental model of the world hasn’t quite caught up with the world….

                                                                           My mental model is still struggling to get past the hump of “JavaScript is that irritating, broken, tacked-on thing for web form validation, which then got hugely abused to work around M$ stalling the development of web standards, resulting in the terrifying pile of kludgery that is today’s web frameworks”… which is quite a bit behind “hey, you can compile a massive C program to wasm and run it in a web page.”

                                                                          1. 5

                                                                             You can do cool things with wasm. At my previous job we had a data processing engine written in C++ that ran on the server as well as in the browser. The js blobs were a bit big (20MB), but otherwise it worked well. The browser really is the new OS, and wasm makes things possible that used to be very hard.

                                                                            1. 2

                                                                               Crudely simplifying, the first-order approximation is that wasm is the new Flash, or a more successful take on NaCl. Most notably, AFAIU it’s a low-level binary format like ELF or PE/COFF. However, compared to Flash, it has some important advantages IIUC: it’s more standard and less company-owned (though I wouldn’t be surprised if Google or another corp does a power play at some point), it’s safer (reusing JS engines’ sandboxing and security engineering), FFI with JS is native, and the default “runtime” (a.k.a. OS API) equals that of JS. The FFI makes it easy to piggyback on humongous heaps of existing JS code (caveat emptor as to quality), as well as to introduce wasm gradually into existing JS code. The default runtime being the same as JS’s makes browsers now be literally OSes for wasm binaries; notably, however, there’s also an effort to establish a second standard runtime, called WASI, more or less resembling a typical C runtime IIUC, to make it easy to use wasm as a cross-platform bytecode akin to the JVM or CLR.

                                                                              I’m curious if someone at some point designs an extension to RISC-V that would be aimed at directly executing wasm binaries, and if it becomes standard enough to make wasm be the de facto universal binary format; possibly thus commoditizing CPU architectures to irrelevance.
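
                                                                               If anyone wants to poke at this from the Go side, here’s a tiny sketch of mine (nothing to do with sql.js) of a Go program compiled to wasm that talks to the browser through syscall/js, Go’s JS FFI; the build commands assume a toolchain from around this era, where wasm_exec.js ships under $GOROOT/misc/wasm:

                                                                               //go:build js && wasm

                                                                               // Build and run:
                                                                               //   GOOS=js GOARCH=wasm go build -o main.wasm
                                                                               //   cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .
                                                                               // then serve an HTML page that loads wasm_exec.js and instantiates main.wasm.
                                                                               package main

                                                                               import "syscall/js"

                                                                               func main() {
                                                                                   doc := js.Global().Get("document")
                                                                                   p := doc.Call("createElement", "p")
                                                                                   p.Set("textContent", "hello from Go compiled to wasm")
                                                                                   doc.Get("body").Call("appendChild", p)
                                                                               }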

                                                                              1. 1

                                                                                http://copy.sh/v86/

                                                                                I don’t know whether to be amazed and impressed or whether I should vomit on my keyboard.

                                                                                1. 1

                                                                                   <shrug /> A tool like any other. What I personally like, however, is how it kinda undercut NaCl, which was a huge proprietary top-down engineering effort, by going in through the kitchen door via a sequence of smaller evolutionary open-source steps, each of which was actually beneficial to many people & projects. (Not that I mean small steps - e.g. creating emscripten was, I believe, initially one person’s “stupid hobby, because-why-not project”, but definitely not a trivial one. If you don’t know what I’m talking about, how WASM happened was basically: (1) emscripten, (2) asm.js, (3) WASM.)

                                                                                  1. 1

                                                                                    And this train keeps rumbling on…. https://lobste.rs/s/1ylnel/future_for_sql_on_web

                                                                                    1. 1

                                                                                       Ah yes :) However, consider now one more thing together with this, namely the distributed/decentralized web (stuff like IPFS or the dat/hypercore protocol). I somewhat recently fiddled a tiny bit with Beaker Browser 1.0, and most notably its integrated web content editor; I found the experience quite amazing and somewhat mind-opening. For me, it harkened back a bit to the experience of the early days of the WWW, when I was playing with website editors, excitement and playfulness pumping through my veins. Notably, from the perspective of my argument, some publicly available Beaker Browser websites already provide JS snippets implementing a mutable & persistent guestbook, by virtue of hypercore’s distributed data storage JS API. Now, level this up with SQLite’s fully featured abstraction layer, and the seeds are planted for a decentralized internet, where your code (written in any language, thanks to WASM) and data can be ~trivially replicated among heterogeneous p2p nodes worldwide. One question that’s not fully clear to me is how this can be practically useful over the current internet, especially for non-techy people; but I guess at the beginning of the WWW, its future usefulness was also not fully fleshed out and clear.

                                                                                      edit: I mean, ok, maybe that’s not the way it “ought to be done”; I guess in theory, instead of WASM in a browser, it “ought to be” some small, platonically idealistic abstract VM kernel, top-down designed & written by an order of enlightened FOSS monks financed by a fund of benevolent worker cooperatives, that’s quietly and memory-efficiently chugging along on a solar-powered RISC-V chip produced locally from fully-renewable materials. But until we’re there, what we have is what we have.

                                                                                      1. 2

                                                                                         This is promising tech. The problem will be getting enough users. Maybe if they managed to embed it in something like Firefox, so that a sizable proportion of the ’net gets it out of the box.

                                                                                        the way it “ought to be done”.

                                                                                        I’d usually point at this quote….

                                                                                        At first I hoped that such a technically unsound project would collapse but I soon realized it was doomed to success. Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars. But there is one quality that cannot be purchased in this way - and that is reliability. The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay. Tony Hoare

                                                                                        But the people doing this aren’t rich.

                                                                                        They’re “because we can” monks or punks, maybe ponks?

                                                                                        But they are sacrificing reliability, because it sort of doesn’t matter. It’s at the serving up cat pictures level of usefulness. But if enough cat loving “because we can” ponks become peeved with not seeing their kitty, they can patch it into the shape of something that works.

                                                                                        This quote still applies…

                                                                                        There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. Tony Hoare

                                                                                        Sigh. All strength to the Ponks!

                                                                        1. 4

                                                                           Oh, I didn’t realize you’re the author of ggez; congrats (also on the stepping down) and good luck with garnet! I’m certainly hoping for someone to eventually manage to construct a post-Rust language that is easier to use while being informed by the lessons learnt there :)

                                                                          1. 9

                                                                            Will this improve the reader’s next program? Will it deepen their understanding of their last program?

                                                                            This is the big one for me. With open source projects, there’s at least a chance I could look at the code, or if it’s a library actually use it in my next project. If it’s closed source, all I “learn” is that there’s another thing out there that I could buy.

                                                                            1. 5

                                                                               Though notably, even for FOSS projects, posting every single release here can IMO be considered spammy. One interesting case study is andyc’s blog posts about the Oil shell: they are much more than just “releases”, they tend to provide interesting insights every time, and I love reading each of them with the highest attention. Yet apparently a notable portion of lobste.rs readers still started getting tired even of them at some point, and I think the current (unspoken?) compromise is more or less that Andy mostly refrains from submitting his own articles, while others occasionally do, and that is seen as OK.

                                                                              1. 1

                                                                                If it’s closed source, all I “learn” is that there’s another thing out there that I could buy.

                                                                                Or about the models people use for computation.

                                                                              1. 3

                                                                                 For me, the killer feature of vim that made me stick with it (on the n-th try) was the “Quickfix list”, and in particular :cex system('some-bash-pipeline'), which lets me easily jump between lines found by any complex Linux pipeline I want. In particular, this typically starts with git grep and goes e.g. like this: :cex system('git grep -n some-regexp \| grep -v something-else') | copen (with the | copen being a shortcut to immediately show the quickfix list and let me move through it and “jump” by hitting Enter). I also have git grep -n set as my grepprg, and I recently learnt that git grep allows selecting/excluding files by name, e.g. git grep -n some-pattern -- ':*.go' to grep only Go files, or ... -- ':!*_test.go' to exclude Go test files.