1. 3

    So, and I mean this only slightly facetiously, is there any point in learning any language other than Rust at this point? I feel like all the momentum is with Rust and that languages like Go (which I prefer to Rust), the ML languages, and C++ are all dead in the water and basically just in maintaining-existing-codebases mode…

    In other words, for the vast majority of non-web/non-desktop projects it would seem silly given the current zeitgeist to start them in any language other than Rust/JavaScript.

    (I say this because the Linux kernel is notoriously conservative on language use, and with good reason…obviously other languages are gonna be around a long time but it seems like the only compiled typed language I hear about anymore is Rust.)

    1. 8

      the ML languages … are dead in the water

      Haskell and OCaml are still moving forward well. Haskell recently merged in Linear Types, which goes a long way to making the type system both richer and more intuitive.

      C++ are all dead in the water

      Take a look at C++20. It has some great features. Some undoubtedly inspired by Rust, but many others that have been working their way into the standard for over a decade. C++20 has lambdas, option types, awkward[1] sum types, designated initializers, and more.

      In other words, for the vast majority of non-web/non-desktop projects it would seem silly given the current zeitgeist to start them in any language other than Rust/JavaScript.

      At the end of the day, Rust is still a low-level language [2], which means the programmer is in charge of keeping track of when data is copied, when references to data are passed around, and when data goes out of scope. While the borrow checker certainly helps in this process, it’s still something that the programmer needs to be aware of while coding, and this has its own cognitive load. Oftentimes, when programmers reach for a low-level language, it is specifically because they want to tackle some aspect of memory management (whether that’s custom allocation, data sharing, real-time performance, or other things) differently from the way a more standard managed runtime would. This can lead to situations where, because of the non-standard memory management techniques the programmer consciously wants to undertake, the borrow checker actually makes it more, rather than less, difficult to develop and reason about code.

      Rust is also difficult to use in the following situations:

      • Interfacing with binary libraries
      • Using an alternate libc
      • Using a custom memory allocator
      • Developing for the Android NDK

      Moreover, Rust is a complicated language. The surface area of the language is large, and that makes understanding its nuances both difficult and, at times, time-consuming. Other languages like Nim and Zig make alternate design decisions that also enable low-level development with greater safety and lower cognitive load than C.

      Then there’s the world of scientific computing. C++, C, and Fortran have long been the stalwarts of scientific computing, but now we have a new crop of dynamic languages like Python, Julia, and R that are all great. Rust still lacks interfaces to many of the really big scientific-computing frameworks[3], whereas many frameworks (such as Stan or CLASP) are written in C++ and offer C++ libraries out of the box, along with Python, R, and sometimes Julia wrappers. One can certainly wrap the interfaces for these libraries in Rust oneself, but in C++ or Python, I can just grab the library and start developing.

      I think it’s unfortunate that, due to the vocal Rust fanbase on certain parts of the internet, some folks feel pressured to use Rust. Rust is a fantastic language (IMO), but there certainly are interesting, performant, and practical alternatives out there that can often be better for certain use cases.

      [1]: I call them awkward because they aren’t as neatly baked into the language as they are in Rust, but C++ offers the well-written std::any and std::variant types.

      [2]: It’s funny because literature from a few decades ago largely considered C++ to be a “higher level language”, but here we are.

      [3]: Others often call this machine learning, but the proliferation of scientific computing is not at all restricted to ML. Stan is a Bayesian inference library and CLASP is an Answer-Set Programming language/environment.

      1. 7

        Take a look at C++20. It has some great features.

        I think C++’s problem has never been its lack of features.


        My approach has served me quite well: If I see new software being written in C or C++, I’m going to ask “why?” – and if there is no really really convincing reason – I stay away.

        1.  

          It’s funny because literature from a few decades ago largely considered C++ to be a “higher level language”, but here we are.

            “High” is a great term for describing languages this way, because it’s inherently context-dependent; you can be high on a ladder or high in an aeroplane. That the heights are so different is not a problem; you wouldn’t use an aeroplane to clean your roof.

          1.  

            Developing for the Android NDK

              With my initial-contributor-of-Android-support-for-Rust hat on, I am curious about this. Yes, it could always be better, but what’s the specific problem? Rust for Android is production-ready, in the literal sense of the word. It has been years since Firefox for Android (>100M installs) shipped millions of lines of Rust code in production. Cloudflare also shipped WARP for Android, whose WireGuard implementation is written in Rust.

            1.  

              Haskell recently merged in Linear Types

              I guess you mean GHC, not (standard) Haskell, insofar as that even exists.

            2. 3

              I think for down and dirty systems programming, probably not.

              For distributed systems, Java is still in the mix at most companies. Rust isn’t really what I would call “web” yet (despite what the website says), and Go is a much easier lift for that sort of backend work.

              I think there is still a good number of options on the backend.

              For the front end, I would only consider TypeScript at this point.

              1.  

                I don’t think it’s so black and white. Go has some serious momentum right now. It’s a very pragmatic and practically useful language. Rust is great but has many downsides, e.g. the compiler isn’t anywhere close to as fast and it is far more complex to learn and use.

                1.  

                  Go has occupied the “devops tools” niche, but thankfully it didn’t really take off anywhere else.

                  the compiler isn’t anywhere close to as fast

                  It’s gotten really good at being incremental though. Even on my low-power laptop, cargo run after changing a couple source files is pretty quick. Faster than TypeScript for sure. Also I wonder how many Linux users are just losing lots of time to linking with some awfully slow linker like GNU BFD ld instead of LLD.

                  1.  

                    but thankfully it didn’t really take off anywhere else.

                    Not sure what you mean here. It’s the language of the cloud. Every major cloud provider has an official Go SDK and the Go team at Google is actively involved in supporting this. Many of the CNCF projects are written in Go.

                    e.g. see go-cloud or the support in gcloud functions.

                    For CLI tools as well, it’s become preferred over scripting languages like Python and Ruby, e.g. fzf is written in Go, the new GitHub CLI is in Go. While Rust is used in newer CLIs as well, it’s usually only for performance-critical CLIs like ripgrep.

                    It’s gotten really good at being incremental though.

                    Agreed. I still appreciate just how insanely quick Go is, especially when building on a new machine or pulling a dependency. I never have to really wait for things to compile.

                    1.  

                      Many of the CNCF projects are written in Go

                      That is exactly what I meant by ‘the “devops tools” niche’! Maybe I should’ve used the “cloud” buzzword instead of “devops”, but I like calling it “devops”.

                      e.g. fzf is written in Go

                      fzf is hardly a unique tool, I have a list of these things here :) Personally I use fzy, which is pretty popular and actively developed, is packaged everywhere, has a very good matching algorithm.. and is written in C, haha.

                      the new GitHub CLI is in Go

                      Yep, that kind of thing is what I’m not very happy about, but it’s still a pretty small number of things, and I can successfully avoid running any Go code on any of my personal machines.

                      1.  

                        Yep, that kind of thing is what I’m not very happy about, but it’s still a pretty small number of things, and I can successfully avoid running any Go code on any of my personal machines.

                        Why does it matter whether a binary you use on your machine is written in Go or not? If you’re building it from source, presumably your distribution’s build scripts should take care of orchestrating whatever Go code is necessary in order to compile the package; and if you’re just pulling an executable why does it matter what language was used to produce the binary?

                        1.  

                          I’m the kind of person who cares about how the sausage is made :) If I can “boycott” things I just don’t like, I’ll do it. Here’s a great criticism of the language, but what made me hate it was the internals of the implementation:

                          • the fully static binary thing, not even using libc syscall wrappers
                            • first obvious practical problem: you can’t hook syscalls with LD_PRELOAD in Go binaries! I don’t want any binaries where LD_PRELOAD does nothing.
                            • they do raw syscalls even on FreeBSD, where libc is the public API and raw syscalls are not. They completely ignored FreeBSD developers’ messages about this and went ahead with the private API. Yeah sure, the syscalls are backwards compatible, it works fine (until someone disables COMPAT_FREEBSDn kernel options but whatever)..
                            • but porting to a new platform is hell! Yeah, for the most popular ABI (Linux) there’s enough contributors to write custom syscall wrappers for all CPU architectures, but any {less-popular OS + less-popular CPU ISA} combo just won’t be supported for a long time. I’ve started porting Go to FreeBSD/aarch64, abandoned it in frustration, but others have picked it up and finished it. Cool, I guess. But we’ll have the same problem with FreeBSD/powerpc64, FreeBSD/riscv64 etc! (This is the most “practical” consideration here, yes.)
                          • which brings me to how these syscall wrappers (and other things) have to be written.. Go uses a very custom toolchain with a very custom assembler which is completely alien to normal Unix conventions. It’s not easy to write. Oh, also, it is rather half-assed (this affects applications way more than it affects porting Go itself):
                            • it doesn’t support SIMD instructions, and if you want to call something fast written in C or normal sane assembly, you’d have to take the overhead of cgo (because the Go calling convention is custom)
                              • and to not take the overhead, people have written things like c2goasm. Just read that readme and let that sink in!!
                            • it doesn’t even seem to support all addressing modes of amd64! When writing a binding to SIMD base64, I couldn’t use c2goasm because I couldn’t express mov al, byte ptr [rax + base64_table_enc] in Go assembly. (There were probably ways around it, like only passing the offset into the normal-assembly functions and letting them use that offset to look up in the table.. but I was already fed up with the awfulness and just used cgo.)
              1. 4

                This site has a white div over the content unless CSS is disabled. Awful.

                1. 1

                  Hmm seems to work fine for me.

                1. 11

                  i’ve been really digging lwn lately

                  1. 6

                    Same. Just subscribed!

                    1. 6

                      oh you know I just realized all the posts I’ve loved recently have been by @benhoyt. Good job, Ben!

                      1. 14

                        You’re welcome. Do subscribe – that’s how the (very small) team makes their living and how the site keeps going.

                  1. 4

                    Hiring at https://coder.com

                    Feel free to hit me up at my email!

                    We love to see open source involvement, and our codebase is mainly Go/TypeScript/C/Bash.

                    1. 1

                      Are you looking for new grads

                      1. 1

                        Yes!

                    1. 1

                      Loved this article! I’ve had the exact same thought: HTML really is just structured data, so there’s no good reason not to use something like JSON.

                      1. 3

                        I had to use double dashes for the command you listed:

                        git log --decorate --graph --oneline --all
                        

                        It’s neat, thanks for sharing.

                        1. 3

                          Tip: a mnemonic is ‘git log A DOG 🐶’:

                          • --all
                          • --decorate
                          • --oneline
                          • --graph
                          1. 2

                            Or you can use an alias. I’ve had the above or one very similar aliased to ‘git lg’.
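
                            Setting it up is a one-liner (pick whatever alias name you like):

                            git config --global alias.lg "log --all --decorate --oneline --graph"

                            After that, git lg does the same thing as the full command above.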

                        1. 17

                          In the docs for http.Transport, . . . you can see that a zero value means that the timeout is infinite, so the connections are never closed. Over time the sockets accumulate and you end up running out of file descriptors.

                          This is definitely not true. You can only bump against this condition if you don’t drain and close the http.Response.Body you get from an http.Client, but even then, you’ll hit the default MaxIdleConnsPerHost (2) and connections will cycle.
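
                          For anyone following along, draining and closing looks something like this (a minimal sketch with a placeholder URL):

                          package main

                          import (
                              "fmt"
                              "io"
                              "io/ioutil"
                              "net/http"
                          )

                          func main() {
                              resp, err := http.Get("https://example.com") // placeholder URL
                              if err != nil {
                                  fmt.Println(err)
                                  return
                              }
                              // Drain and close the body so the underlying connection can go back
                              // into the Transport's idle pool and be reused instead of leaking.
                              defer resp.Body.Close()
                              io.Copy(ioutil.Discard, resp.Body)
                          }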

                          Similarly,

                          The solution to [nil maps] is not elegant. It’s defensive programming.

                          No, it’s providing a constructor for the type. The author acknowledges this, and then states

                          nothing prevents the user from initializing their struct with utils.Collections{} and causing a heap of issues down the line

                          but in Go it’s normal and expected that the zero value of a type might not be usable.
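
                          For illustration, the constructor pattern being described is roughly this (hypothetical names, loosely modeled on the article’s utils.Collections example):

                          package utils

                          // Collections groups items by name. Its zero value is not ready to use;
                          // construct it with NewCollections.
                          type Collections struct {
                              byName map[string][]string
                          }

                          // NewCollections returns a Collections whose internal map is initialized,
                          // so callers never write to a nil map.
                          func NewCollections() *Collections {
                              return &Collections{byName: make(map[string][]string)}
                          }

                          // Add appends an item under the given name.
                          func (c *Collections) Add(name, item string) {
                              c.byName[name] = append(c.byName[name], item)
                          }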

                          I don’t know. They’re not bright spots, but spend more time with the language and these things become clear.

                          1. 4

                            If you really want to prevent users of your library from using the {} syntax to create new objects instead of using your constructor, you can choose to not export the struct type and instead export an interface that is used as the return value type of the constructor’s function signature.
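
                            Concretely, that looks something like this (a hypothetical sketch, not the article’s actual code):

                            package utils

                            // Collections is the exported interface; the concrete struct stays
                            // unexported, so callers cannot create it with a {} literal.
                            type Collections interface {
                                Add(name, item string)
                            }

                            type collections struct {
                                byName map[string][]string
                            }

                            // NewCollections is the only way to obtain a Collections value.
                            func NewCollections() Collections {
                                return &collections{byName: make(map[string][]string)}
                            }

                            func (c *collections) Add(name, item string) {
                                c.byName[name] = append(c.byName[name], item)
                            }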

                            1. 10

                              You should basically never return interface values, or export interfaces that will only have one implementation. There are many reasons, but my favourite one is that it needlessly breaks go-to-definition.

                              Instead, try to make the zero value meaningful, and if that’s not possible provide a New constructor and document it. That’s common in the standard library so all Go developers are exposed to the pattern early enough.
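
                              A sketch of the meaningful-zero-value alternative (same hypothetical type as above), where the map is allocated lazily on first use:

                              package utils

                              // Collections is usable as its zero value: var c Collections works.
                              type Collections struct {
                                  byName map[string][]string
                              }

                              // Add lazily allocates the map the first time it is needed.
                              func (c *Collections) Add(name, item string) {
                                  if c.byName == nil {
                                      c.byName = make(map[string][]string)
                                  }
                                  c.byName[name] = append(c.byName[name], item)
                              }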

                              1. 2

                                Breaking go-to-definition like that is the most annoying thing about Kubernetes libraries.

                              2. 4

                                That would be pretty nonidiomatic.

                                1. 1

                                  Yeah, this is a good approach sometimes, but the indirection can be confusing.

                              1. 2

                                https://nhooyr.io

                                Written in TypeScript and React. It’s fully static, but if JavaScript is enabled, it becomes a dynamic app.

                                Source at https://github.com/nhooyr/blog

                                1. 2

                                  That’s very interesting! Fetching all the pages ahead of time with XHR requests in small websites like this one isn’t a bad idea. It works extremely well, I love it!

                                  1. 1

                                    Thank you, I really appreciate that :)

                                    To be clear, pages are only fetched on hover.

                                1. 4

                                  Company: Coder

                                  Company site: coder.com

                                  Position(s): Go and TypeScript Engineers

                                  Location: Austin TX - remote or onsite

                                  Description: We’re a small startup in Austin, TX looking to scale our team with solid Go and TypeScript engineers. The positions involve developing and maintaining the Go microservices and dashboard that serve our development platform on Kubernetes. More details at https://www.reddit.com/r/golang/comments/erz90i/hiring_go_engineers_at_coder/

                                  Tech stack: Go, TypeScript, React

                                  Contact: anmol@coder.com

                                  1. 1

                                    remote — I’m guessing US only, still?

                                    1. 1

                                      Other countries are possible, just depends on whether we think it’ll work out with the specific candidate.

                                    2. 1

                                      just fyi, the contents of that post are removed

                                      1. 2

                                        Thank you, I’ll look into it.

                                        For now I’ve duplicated the contents onto this gist: https://gist.github.com/nhooyr/3e6cd38b58df65080df49e2a0318514e

                                    1. 11

                                      https://burntsushi.net

                                      git repo: https://github.com/BurntSushi/blog

                                      I use hugo as a static site generator. I try to keep things pretty simple. With that said, hugo has grown into an absolute behemoth of a piece of software, such that it has become extremely difficult for me to figure out how to do anything with it that I don’t already know how to do. For example, when I last went to update my blog (I hadn’t done it in a while), hugo choked and emitted an effectively blank index page. Something about how hugo interpreted my index template broke, and I still don’t understand the fix I made. (Which only came after aimlessly googling and reading its docs.)

                                      If you just want a blog without any comments or other dynamic content, then a good static site generator is a good way to go. But stay away from hugo. I’m already shopping for alternatives. What I really want is the ability to write blog posts in Markdown which include syntax-highlighted source code that is checked by a compiler while maintaining a single source of truth. I have half a mind to just write my own, purpose-built for my blog.

                                      1. 4

                                        I have similar feelings about Hugo. Started using it 2-3 years ago after switching from Jekyll for speed and simplicity.

                                        I only create a post once in a blue moon. I probably update my theme more often, but each time I do either of those things, I have to re-read a lot of documentation to figure out the current state of Hugo.

                                        1. 2

                                          I really like your blog. Was just looking into xgb today and read through https://blog.burntsushi.net/thread-safety-x-go-binding/.

                                          1. 2

                                            Thanks!

                                        1. 1

                                          Minimal structured logging library for Go

                                          Minimal

                                          https://godoc.org/cdr.dev/slog#pkg-index

                                          🤔

                                          1. 1

                                            The package index is fairly small for the number of features and extensibility. What would you consider unnecessary?

                                            1. 7

                                              What would you consider unnecessary?

                                              I would not include a concept of log levels at all, and certainly not as many as are included here (Debug, Info, Warn, Error, Critical, Fatal). In structured logging, level is nothing more than a specific key=value pair, not a first-order concept of the logger itself. It’s the role of a decorator or adapter to add those methods, and none of them should be able to terminate the program.

                                              I would not accept a context as a (required) parameter of any of the methods, and especially not automatically extract values from it. Loggers and the context are orthogonal concepts; a decorator or helper function might extract specific values from a context to be logged, but the logger almost certainly shouldn’t be context-aware.

                                              I don’t believe logger packages should have a concept of a sink. Loggers should write to io.Writers, and it’s the responsibility of the caller to determine what happens from there. Logger packages may offer helper adapters for common log destinations, but they shouldn’t be part of the core logger interfaces.

                                              Related, Stackdriver or OpenCensus integrations don’t belong in a core logger type. Automatic extraction and logging of OpenCensus data from e.g. contexts is the job of a helper function or middleware, not the core logger.
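
                                              To make the shape of that concrete, the kind of logger I have in mind is little more than this (a rough sketch, not any particular library’s API):

                                              package minlog

                                              import (
                                                  "fmt"
                                                  "io"
                                                  "time"
                                              )

                                              // Logger writes one formatted line per call to an io.Writer. There are
                                              // no levels, sinks, or context awareness; "level" is just another field
                                              // that the caller may pass if they want one.
                                              type Logger struct {
                                                  w io.Writer
                                              }

                                              func New(w io.Writer) *Logger { return &Logger{w: w} }

                                              // Log writes the message and any key/value pairs on a single line.
                                              func (l *Logger) Log(msg string, keyvals ...interface{}) {
                                                  fmt.Fprintf(l.w, "ts=%s msg=%q", time.Now().Format(time.RFC3339), msg)
                                                  for i := 0; i+1 < len(keyvals); i += 2 {
                                                      fmt.Fprintf(l.w, " %v=%v", keyvals[i], keyvals[i+1])
                                                  }
                                                  fmt.Fprintln(l.w)
                                              }

                                              Usage would then be e.g. logger.Log("request served", "level", "info", "status", 200), with the level supplied by the caller like any other field.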

                                              edit: to be clear, I am biased: here is my take on a minimal structured logger and a corresponding design rationale.

                                              1. 2

                                                I would not include a concept of log levels at all, and certainly not as many as are included here (Debug, Info, Warn, Error, Critical, Fatal). In structured logging, level is nothing more than a specific key=value pair, not a first-order concept of the logger itself. It’s the role of a decorator or adapter to add those methods, and none of them should be able to terminate the program.

                                                I fully agree. The Go team here at @cdr, however, likes their levels and the zap-like API, so I decided to keep it. If it were up to me, there would only be Info and Error.

                                                I agree that levels do not need to be a first-order concept of the logger, but having separate methods for each level is much more readable than having to include it as a field. It also ensures that every log statement statically has a level, versus that being only a convention.

                                                I would not accept a context as a (required) parameter of any of the methods, and especially not automatically extract values from it. Loggers and the context are orthogonal concepts; a decorator or helper function might extract specific values from a context to be logged, but the logger almost certainly shouldn’t be context-aware.

                                                While they are orthogonal ideas, I disagree that the logger shouldn’t be context-aware. Logs are very often context-dependent, and without fields in the context, it’s much harder to dissect what happened. Helper functions are error-prone and involve a lot of boilerplate.
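
                                                As a rough illustration of what I mean by context-carried fields (hypothetical helper names, not slog’s actual API):

                                                package ctxlog

                                                import (
                                                    "context"
                                                    "fmt"
                                                )

                                                type fieldsKey struct{}

                                                // With returns a context carrying extra log fields, merged with any
                                                // fields already attached further up the call chain.
                                                func With(ctx context.Context, fields map[string]interface{}) context.Context {
                                                    merged := map[string]interface{}{}
                                                    if prev, ok := ctx.Value(fieldsKey{}).(map[string]interface{}); ok {
                                                        for k, v := range prev {
                                                            merged[k] = v
                                                        }
                                                    }
                                                    for k, v := range fields {
                                                        merged[k] = v
                                                    }
                                                    return context.WithValue(ctx, fieldsKey{}, merged)
                                                }

                                                // Info logs the message along with whatever fields the context carries,
                                                // so e.g. a request ID attached in middleware shows up on every line
                                                // logged further down the stack without being threaded through manually.
                                                func Info(ctx context.Context, msg string) {
                                                    fields, _ := ctx.Value(fieldsKey{}).(map[string]interface{})
                                                    fmt.Printf("msg=%q fields=%v\n", msg, fields)
                                                }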

                                                I don’t believe logger packages should have a concept of a sink. Loggers should write to io.Writers, and it’s the responsibility of the caller to determine what happens from there. Logger packages may offer helper adapters for common log destinations, but they shouldn’t be part of the core logger interfaces.

                                                So the reason there is a separation between the Sink and the Logger is so that every Sink does not have to implement the higher-level API from scratch. Every provided Logger does in fact only log to an io.Writer. It also enables sink composition, i.e. a sink that wraps another sink can wrap either that sink directly or a Logger built on top of it that is named or has fields set on it.

                                                Related, Stackdriver or OpenCensus integrations don’t belong in a core logger type. Automatic extraction and logging of OpenCensus data from e.g. contexts is the job of a helper function or middleware, not the core logger.

                                                Great point, I’ll move it into a separate package. Opened #69

                                                edit: Also opened #70 regarding the log levels.

                                                1. 1

                                                  So we ended up keeping things as is.

                                                  See https://github.com/cdr/slog/pull/73#issuecomment-564806085 regarding the opencensus coupling and https://github.com/cdr/slog/issues/70 regarding the levels.

                                            1. 3

                                                Wow, I love your site, it’s beautiful and extremely well designed. Will definitely influence my own personal site, thanks :)

                                              1. 1

                                                I must ask. I’m using capnproto and can only work over interfaces that fit the net.Conn interface. It looks, at a casual glance, like this library does fit that, but there is a note in the documentation saying it isn’t actually exposed?

                                                Might this library be a good fit for my use case?

                                                1. 2

                                                    Should be good. net.Conn isn’t exposed, but you get an io.Reader and io.Writer. That should be all you need for capnproto.

                                                  1. 1

                                                    Please discard my other comment. I think I misinterpreted something, there is a net.Conn wrapper now in the library.
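
                                                      If I’m reading it right, usage would be roughly like this (a sketch based on my reading of the docs; treat the exact names as assumptions):

                                                      package main

                                                      import (
                                                          "context"
                                                          "log"

                                                          "nhooyr.io/websocket"
                                                      )

                                                      func main() {
                                                          ctx := context.Background()
                                                          c, _, err := websocket.Dial(ctx, "wss://example.com/rpc", nil) // placeholder URL
                                                          if err != nil {
                                                              log.Fatal(err)
                                                          }
                                                          defer c.Close(websocket.StatusNormalClosure, "")

                                                          // Wrap the websocket as a net.Conn and hand it to the capnproto RPC layer.
                                                          conn := websocket.NetConn(ctx, c, websocket.MessageBinary)
                                                          _ = conn // e.g. pass conn to your capnproto transport here
                                                      }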

                                                    1. 2

                                                      Interesting, because I just finally tried gorilla/websocket, and was about to write my own wrapper because its websocket.Conn has no Read(). I’ll just try this!

                                                    1. 1

                                                        With OpenCensus you get zPages, which do pretty much exactly what the author is talking about.

                                                      https://opencensus.io/zpages/

                                                      1. 3

                                                          Yes, but OpenCensus is going to be shut down (probably this year) and moved to OpenTelemetry.

                                                        1. 2

                                                          OpenTelemetry is nice but you can’t use it yet. Well maybe you can in Java but not any of the other languages. OpenCensus is here right now and the API will be compatible with OpenTelemetry.

                                                          1. 2

                                                              It will be similar, but it isn’t yet decided whether it will be 1:1 compatible (source: I am part of the team that implements OC/OT in Erlang, and we are discussing it).

                                                            1. 1

                                                              The site states

                                                              We are still working on the first production-ready release of OpenTelemetry. For those who want to start instrumenting production code immediately, use either OpenCensus or OpenTracing. OpenTelemetry will provide compatibility bridges and there is no need to wait for production-ready OpenTelemetry APIs in your language

                                                              So even if it’s not 1:1, the compat layer should be fine.

                                                      1. 1

                                                        Was convinced by this post to start using code folding by default in Golang. I gotta say, I’m definitely enjoying it, it’s just so much more natural to see a file’s structure inline with the code.

                                                        1. 7

                                                          This article was a bit thin for me. It boils down to a comparison between OpenAPI and gRPC, and the argument is the size/complexity of the description. However, it doesn’t take into account the size of the supporting infra or the complexity of tooling/debugging techniques (even though it does mention that one can’t curl anymore to do a quick check). Would be interesting to hear other people’s experiences (especially from the Ops side of the world) after switching to gRPC – did things become more complex?

                                                          1. 10

                                                            Absolutely. I regret using it at my current job. Documentation is IMO very poor for more advanced use cases. REST/HTTP/1.1 is well understood, easy to debug, and performance is more than enough if you’re not Google/Facebook/Twitter. Furthermore, it’s very well supported by lots of tooling. E.g. k8s doesn’t have a gRPC health check without downloading some binary and putting it in every single one of your containers. I think the main issue I have with gRPC is the insistence on HTTP/2 when HTTP/1.1 would have worked fine. I have more issues with gRPC as well, I need to write a blog post.

                                                            1. 2

                                                              I have more issues with gRPC as well, I need to write a blog post.

                                                              Please do! I’ve avoided gRPC itself, but for example I’m a fan of capnproto.

                                                              1. 1

                                                                shouldn’t the comparison be between protobuf and capnproto instead? AFAIK, capnproto provides a serialization/deserialization framework rather than an RPC framework…

                                                                1. 1

                                                                  Capnproto has some libraries that only provide serialization. It’s mostly that plus awesome RPC

                                                            2. 2

                                                              I haven’t thought too much about how necessary this would be with gRPC, but we had a fairly similar binary protocol at work for service-to-service communications that ALSO exposed an HTTP+JSON bridge for curlability, which worked really well!

                                                              1. 2

                                                                  It never even actually talked about gRPC, just Protobufs. I built a microservice function using Protobufs without gRPC; they are not equivalent.

                                                                1. 1

                                                                  Conflating these two was a huge source of pain on the project I’m on at work; protobufs have been a nightmare while grpc itself has been fine. (Not helpful at all but not actively slowing us down the way protobuf did.)

                                                                  1. 1

                                                                      Wild – I had the inverse experience, where protobufs have been useful and neat but gRPC has been hell every step of the way. We swapped out gRPC for Twirp (which still uses protobufs), and things are happy.

                                                                    1. 2

                                                                      YMMV as always; contributing factors in this case included:

                                                                      • we’re on the JVM
                                                                      • we already had been using a much more thorough and descriptive way of declaring the shape of our data (clojure.spec)
                                                                      • encoding efficiency was very far from being a performance bottleneck
                                                                      • these were all internal APIs that were always being called from other Clojure codebases rather than being a public endpoint called by who-knows-what clients
                                                                      1. 1

                                                                        I don’t think protobufs are a good choice without a multi-language environment. If everything is using the same language, then just share a service library.

                                                                        In my (limited) experience with it, we were adding a microservice layer in Go that talks to a couple Ruby services. Being able to spec out data types and generate code for each platform is really nice.

                                                                2. 2

                                                                  Twitch apparently had issues with gRPC, and made their own thing instead.

                                                                  1. 3

                                                                    We were also using it at work, and switched off of it. The complexity wasn’t worth it.

                                                                    We replaced it with a combination of things, mostly a JSON schema/code generator that I wrote in ~500 lines of Python, and plain HTTP or Unix domain socket requests.

                                                                    It doesn’t do the same things as GRPC, but it covers the same pain points (documentation and syncing APIs across languages), and the fact that it’s both tiny and maintained in-house makes it malleable – if it doesn’t do what we want, we fix the framework.

                                                                    1. 2

                                                                      That article is 18 months old now, although it links to grpc-go issues that are two years older than the article. I wonder if anything has improved in gRPC / grpc-go in those two years, or in the 18 months since.

                                                                    2. 2

                                                                      What about streaming?

                                                                    1. 2

                                                                      https://sail.dev is another take on the same problem.

                                                                      1. 3

                                                                        I’m writing my blog with create-react-app. So far I’m having a lot of fun working out the subtleties of the design and what I want my blog to be like. Also brainstormed a bunch of ideas for blog posts.

                                                                        I’m also working on putting out a stable release of https://nhooyr.io/websocket

                                                                        And some other top secret stuff :)

                                                                        1. 3

                                                                    I looked through the readme of the project, and I just wanted to say I really appreciate that you had an entire section dedicated to justifying why the library is being written, along with a comparison to existing libraries.