1. 4

    The bit about how it’s hard to tell what will close a ReadCloser/WriteCloser underneath you is super valid. I’m not sure how you’d manage that without some kind of “reference count” or similar (i.e. only actually close once the count hits zero).
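
    A minimal sketch of that reference-count idea, assuming a hypothetical wrapper type (refCloser and its methods are my invention, not anything in the standard library): the wrapped Closer only really closes once the last handle does.

    ```go
    package refcount

    import (
        "io"
        "sync"
    )

    // refCloser wraps an io.Closer so that Close on the underlying
    // value only happens once every handed-out reference is closed.
    type refCloser struct {
        mu sync.Mutex
        n  int
        c  io.Closer
    }

    // NewRefCloser starts with a single reference held by the caller.
    func NewRefCloser(c io.Closer) *refCloser {
        return &refCloser{n: 1, c: c}
    }

    // Ref hands out another reference; each reference must be
    // closed exactly once, or the underlying Closer leaks.
    func (r *refCloser) Ref() io.Closer {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.n++
        return r
    }

    // Close decrements the count and only closes the wrapped
    // Closer when the count hits zero.
    func (r *refCloser) Close() error {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.n--
        if r.n == 0 {
            return r.c.Close()
        }
        return nil
    }
    ```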

    Another awkward use case is HTTP middleware that hash-sums a POST body (e.g. for Slack webhook validation) and then also lets inner handlers read from the request body.
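
    One common way to handle that (a sketch under assumptions: the verifySlack name is mine, and the headers and “v0” HMAC-SHA256 scheme follow Slack’s documented request signing): buffer the body, verify the signature, then hand inner handlers a fresh ReadCloser over the buffered bytes.

    ```go
    package middleware

    import (
        "bytes"
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "io"
        "net/http"
    )

    // verifySlack checks the Slack signature, then restores the
    // body so inner handlers can read it as if it were untouched.
    func verifySlack(secret string, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            body, err := io.ReadAll(r.Body)
            if err != nil {
                http.Error(w, "bad body", http.StatusBadRequest)
                return
            }
            r.Body.Close()

            // Slack signs "v0:<timestamp>:<body>" with HMAC-SHA256.
            base := "v0:" + r.Header.Get("X-Slack-Request-Timestamp") + ":" + string(body)
            mac := hmac.New(sha256.New, []byte(secret))
            mac.Write([]byte(base))
            want := "v0=" + hex.EncodeToString(mac.Sum(nil))
            if !hmac.Equal([]byte(want), []byte(r.Header.Get("X-Slack-Signature"))) {
                http.Error(w, "bad signature", http.StatusUnauthorized)
                return
            }

            // Replace the consumed body with a fresh ReadCloser.
            r.Body = io.NopCloser(bytes.NewReader(body))
            next.ServeHTTP(w, r)
        })
    }
    ```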

    1. 6

      It’s simple. If you pass a *Closer to something, you should assume that something is going to close it. Otherwise you’d just pass it a Reader or a Writer.

      1. 2

        Not everyone gets the memo that this is the way it’s supposed to work. Very often, folks will create large interfaces and pass them around with no intent to use everything defined.

        1. 2

          Sure, but at a certain point, what can you do about people ignoring the docs and even the high-level guidance of the language? I mean, deeply reading the docs is hard, but reading the Go proverbs isn’t: https://go-proverbs.github.io/ – and you only have to get to the 4th one to find “The bigger the interface, the weaker the abstraction.”

          1. 5

            “The bigger the interface, the weaker the abstraction.” says very little to someone who isn’t already trying to understand it deeply.

            Obviously, we can’t throw our hands up and say, “what can you do? People will be people!!!” What we can do is ensure that the code we write and review takes these principles to heart, and demonstrates their value.

            1. 2

              Absolutely, and after doing all that – people are still going to write terrible code that directly goes against all the norms and recommendations. I am all for doing our level best to try to guide people – but leading a horse to water and all that.

        2. 1

          I think this is true, but it basically means you should never pass a *Closer unless you really, really have to. The caller should manage the I/O lifecycle.

          I would even go so far as to say one of the heavily opinionated Go linters should warn about it (maybe they do; I’ve never checked, because I don’t think highly opinionated linters are a good idea for anything but beginners).

          1. 1

            This makes sense, but there are two difficulties.

            1. It still requires careful analysis of the docs. It’s very easy to pass a ReadCloser off to a function that takes a Reader.

            2. You can’t just pass something like a gzip.Reader to another layer. Even if that layer closes it, it doesn’t close the bottom reader.
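
            A sketch of one workaround (gzipReadCloser and newGzipReadCloser are made-up names): wrap both layers so a single Close closes the gzip reader and the bottom source, since (*gzip.Reader).Close does not close the underlying reader.

            ```go
            package gzipx

            import (
                "compress/gzip"
                "io"
            )

            // gzipReadCloser reads decompressed data but, unlike a bare
            // *gzip.Reader, its Close also closes the underlying source.
            type gzipReadCloser struct {
                *gzip.Reader
                src io.Closer // the "bottom" layer
            }

            func newGzipReadCloser(src io.ReadCloser) (io.ReadCloser, error) {
                zr, err := gzip.NewReader(src)
                if err != nil {
                    return nil, err
                }
                return &gzipReadCloser{Reader: zr, src: src}, nil
            }

            // Close closes both layers, reporting the first error.
            func (g *gzipReadCloser) Close() error {
                zerr := g.Reader.Close()
                serr := g.src.Close()
                if zerr != nil {
                    return zerr
                }
                return serr
            }
            ```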

            1. 1

              When I read stuff like this, I change my mind about Go being a language that can be learned in a weekend.

              1. 1

                You can certainly pick it up and use it effectively in a weekend, but surely you couldn’t learn the ins and outs of anything substantial in just a weekend.

            2. 4

              Between io.TeeReader and io.Pipe I think you can probably wire something up. There’s a decent amount of “plumbing” included, although it took me a few passes through the docs to find it all.

              1. 4

                Yeah, it’s quite worth it to read through the whole standard library docs; I seem to find a new thing each time I skim them.

              2. 1

                > HTTP middleware that hash-sums a POST body (e.g. for Slack webhook validation) and then also lets inner handlers read from the request body.

                I’ve had to do something like that, and I ended up writing a small TeeReadCloser struct that wraps TeeReader but also has a Close method that closes both the reader and the writer. You can probably get by with a version that takes a WriteCloser (like mine) and one that just takes a Writer, and combine them as needed, though I wonder why they couldn’t just put these in the standard library.
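
                Something along those lines, as a rough sketch (the names here are mine, not the commenter’s actual code):

                ```go
                package teeio

                import "io"

                // teeReadCloser reads through an io.TeeReader but also
                // knows how to close both the source and the destination.
                type teeReadCloser struct {
                    io.Reader
                    rc io.ReadCloser
                    wc io.WriteCloser
                }

                // TeeReadCloser mirrors io.TeeReader, with a Close that
                // closes both the reader and the writer.
                func TeeReadCloser(r io.ReadCloser, w io.WriteCloser) io.ReadCloser {
                    return &teeReadCloser{Reader: io.TeeReader(r, w), rc: r, wc: w}
                }

                func (t *teeReadCloser) Close() error {
                    rerr := t.rc.Close()
                    werr := t.wc.Close()
                    if rerr != nil {
                        return rerr
                    }
                    return werr
                }
                ```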

              1. 5

                It’s really not practical to do a Chromium rebuild for every small update. Symbol versioning is annoying, and Void Linux started making every package that is built against glibc depend on glibc>=buildversion, because partial updates are allowed but versioned symbols break all the shared-library checks.

                1. 9

                  In practice, package builds already do a Chromium rebuild for every small update. Developers do incremental builds regardless of the method of linking.

                  Really, the reason to build Chrome with shared objects is that the linker will fall over when building it as a single binary with debug info – it’s already too big for the linker to handle easily. The last time I tried to build Chrome to debug an issue I was having, I didn’t know you had to do some magic to build it in smaller pieces, so the linker crunched on the objects for 45 minutes before falling flat on its face and aborting. I think it didn’t like the 4-gigabyte debug info sections.

                  Also, keep in mind that this wiki entry is coming from a Plan 9 perspective. Plan 9 tends to have far smaller binaries than Chromium, and instead of large fat libraries, it tends to make things accessible via file servers. HTTP isn’t done via libcurl, for example, but instead via webfs.

                  1. 2

                    That separation also means you can rebuild webfs to fix everything that uses it without rebuilding those programs, which is what shared libraries were supposed to help with.

                  2. 6

                    Well, I feel like that’s the only way to handle it in Void really.

                    Anyway, I’d trade disk space for statically linked executables any day. Must be why I love Go so much. But I still understand why dynamic linking is used, for both historical and practical reasons. This post showcases the difference between a static and a dynamic cat, but I’m scared of what would happen with something heavy with lots of dependencies. For example, Qt built statically is about two-thirds of the size.

                    1. 4

                      If the interface has not changed, you technically only need a relink.

                      1. 3

                        If you have all the build artifacts lying around.

                        1. 3

                          Should a distribution then relink X applications, pushing hundreds of megabytes of updates, or should they start shipping object files and link them on the user’s system, where we would basically imitate shared libraries?

                          1. 6

                            One data point: OpenBSD ships all the .o files for the kernel, which keeps updates small.

                            (I don’t think this is actually new. IIRC, old Unix systems used to do the same, so you could relink a modified kernel without giving away the source.)

                            1. 3

                              That’s how SunOS worked, at least. The procedure for relinking the kernel after an update also works if you have the source; it’s the same build scaffolding.

                              1. 2

                                The kernel, yes, but not every installed port or package.

                              2. 3

                                It would be viable to improve both deterministic builds and binary diffs/delta patches for that. With deterministic builds you could make much better diffs (AFAICT), since the layout of the program would be more similar between patches.

                                1. 4

                                  Delta updates would be a nice improvement for the “traditional” package managers. Chrome does this for its updates; instead of just binary diffs, they even disassemble the binary and reassemble it on the client: http://dev.chromium.org/developers/design-documents/software-updates-courgette

                                  1. 2

                                    What do you mean by delta updates? What should they do differently than what delta RPMs have been doing until now?

                                    1. 1

                                      Yes, maybe this. I’m not sure how delta RPMs work specifically: do they just diff the files in the RPMs, or are those deltas of each binary/file inside the RPM?

                                      1. 1

                                        They ship new/changed files, and I think they also do binary diffs (at least based on what this page says)

                                2. 1

                                  Chrome already ships using binary diffs, so this is a solved problem.

                                  1. 0

                                    > where we would basically imitate shared libraries.

                                    Except without the issue of needing to have just one version of that shared library.

                                    1. 2

                                      Proper shared libraries are versioned and don’t have this issue.

                              1. 12

                                Kind of an aside, but I’m pleased by the lack of vitriol in this.

                                1. 13

                                  Almost all of Theo’s communications are straightforward and polite. It’s just that people cherry-picked and publicized the few occasions where he really let loose, so he got an undeserved reputation for being vitriolic.

                                  1. 2

                                    Pleasantly surprised, even.