1. 1

    Minimal structured logging library for Go

    Minimal

    https://godoc.org/cdr.dev/slog#pkg-index

    🤔

    1. 1

      The package index is fairly small for the number of features and extensibility. What would you consider unnecessary?

      1. 6

        What would you consider unnecessary?

        I would not include a concept of log levels at all, and certainly not as many as are included here (Debug, Info, Warn, Error, Critical, Fatal). In structured logging, level is nothing more than a specific key=value pair, not a first-order concept of the logger itself. It’s the role of a decorator or adapter to add those methods, and none of them should be able to terminate the program.
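
        To sketch what I mean (hypothetical names, not any particular library’s API): level becomes an ordinary field that a decorator prepends, and the core logger never knows about it.

            package main

            import (
                "fmt"
                "io"
                "os"
            )

            // Logger is the minimal structured-logging surface: key/value pairs in, one line out.
            type Logger interface {
                Log(keyvals ...interface{}) error
            }

            // writerLogger renders pairs as logfmt-ish output to any io.Writer.
            type writerLogger struct{ w io.Writer }

            func (l writerLogger) Log(keyvals ...interface{}) error {
                for i := 0; i+1 < len(keyvals); i += 2 {
                    fmt.Fprintf(l.w, "%v=%v ", keyvals[i], keyvals[i+1])
                }
                _, err := fmt.Fprintln(l.w)
                return err
            }

            // WithLevel is the decorator: level is just another key=value pair,
            // not a first-order concept of the logger itself.
            func WithLevel(next Logger, level string) Logger {
                return levelLogger{next, level}
            }

            type levelLogger struct {
                next  Logger
                level string
            }

            func (l levelLogger) Log(keyvals ...interface{}) error {
                return l.next.Log(append([]interface{}{"level", l.level}, keyvals...)...)
            }

            func main() {
                info := WithLevel(writerLogger{os.Stdout}, "info")
                info.Log("msg", "server started", "port", 8080)
                // prints: level=info msg=server started port=8080
            }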

        I would not accept a context as a (required) parameter of any of the methods, and especially not automatically extract values from it. Loggers and the context are orthogonal concepts; a decorator or helper function might extract specific values from a context to be logged, but the logger almost certainly shouldn’t be context-aware.

        I don’t believe logger packages should have a concept of a sink. Loggers should write to io.Writers, and it’s the responsibility of the caller to determine what happens from there. Logger packages may offer helper adapters for common log destinations, but they shouldn’t be part of the core logger interfaces.

        Related, Stackdriver or OpenCensus integration don’t belong in a core logger type. Automatic extraction and logging of OpenCensus data from e.g. contexts is the job of a helper function or middleware, not the core logger.

        edit: to be clear, I am biased: here is my take on a minimal structured logger and a corresponding design rationale.

        1. 2

          I would not include a concept of log levels at all, and certainly not as many as are included here (Debug, Info, Warn, Error, Critical, Fatal). In structured logging, level is nothing more than a specific key=value pair, not a first-order concept of the logger itself. It’s the role of a decorator or adapter to add those methods, and none of them should be able to terminate the program.

          I fully agree. However, the Go team here at @cdr likes their levels and the zap-like API, so I decided to keep them. If it were up to me, there would only be Info and Error.

          I agree that levels do not need to be a first-order concept of the logger, but having a separate method for each level is much more readable than having to include the level as a field. It also statically ensures that every log has a level, rather than leaving it to convention.

          I would not accept a context as a (required) parameter of any of the methods, and especially not automatically extract values from it. Loggers and the context are orthogonal concepts; a decorator or helper function might extract specific values from a context to be logged, but the logger almost certainly shouldn’t be context-aware.

          While they are orthogonal ideas, I disagree the logger shouldn’t be context aware. Logs are very often context dependent and without fields in the context, it’s much harder to dissect what happened. Helper functions are error prone and involve a lot of boilerplate.
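
          The fields-in-context approach only takes a few lines to sketch (hypothetical With/Fields helpers for illustration; not necessarily slog’s exact API):

              package main

              import (
                  "context"
                  "fmt"
              )

              type fieldsKey struct{}

              // With returns a context that carries extra log fields.
              func With(ctx context.Context, keyvals ...interface{}) context.Context {
                  prev, _ := ctx.Value(fieldsKey{}).([]interface{})
                  // Full-slice expression so appends on sibling contexts don't share a backing array.
                  return context.WithValue(ctx, fieldsKey{}, append(prev[:len(prev):len(prev)], keyvals...))
              }

              // Fields returns whatever fields have accumulated on the context.
              func Fields(ctx context.Context) []interface{} {
                  kv, _ := ctx.Value(fieldsKey{}).([]interface{})
                  return kv
              }

              func main() {
                  ctx := With(context.Background(), "request_id", "abc123")
                  ctx = With(ctx, "user", 42)
                  fmt.Println(Fields(ctx)) // [request_id abc123 user 42]
              }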

          I don’t believe logger packages should have a concept of a sink. Loggers should write to io.Writers, and it’s the responsibility of the caller to determine what happens from there. Logger packages may offer helper adapters for common log destinations, but they shouldn’t be part of the core logger interfaces.

          The reason there is a separation between the Sink and the Logger is so that every Sink does not have to implement the higher-level API from scratch. Every provided Logger does in fact log only to an io.Writer. The split also enables sink composition: a sink that wraps another sink can wrap either that sink directly or a Logger on top of it that is named or has fields set.
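
          Roughly, the shape is (an illustrative sketch, not slog’s exact types):

              package main

              import "fmt"

              // Sink is the small interface a backend implements once.
              type Sink interface {
                  LogEntry(names []string, fields map[string]interface{})
              }

              // Logger is the higher-level API, written once and usable over any Sink.
              type Logger struct {
                  sink   Sink
                  names  []string
                  fields map[string]interface{}
              }

              func (l Logger) Named(name string) Logger {
                  l.names = append(l.names[:len(l.names):len(l.names)], name)
                  return l
              }

              func (l Logger) Info(msg string) {
                  f := map[string]interface{}{"msg": msg}
                  for k, v := range l.fields {
                      f[k] = v
                  }
                  l.sink.LogEntry(l.names, f)
              }

              // printSink is one tiny backend.
              type printSink struct{}

              func (printSink) LogEntry(names []string, fields map[string]interface{}) {
                  fmt.Println(names, fields)
              }

              // teeSink demonstrates sink composition: it can wrap plain sinks or the
              // sinks underneath already-named, already-fielded Loggers.
              type teeSink struct{ sinks []Sink }

              func (t teeSink) LogEntry(names []string, fields map[string]interface{}) {
                  for _, s := range t.sinks {
                      s.LogEntry(names, fields)
                  }
              }

              func main() {
                  log := Logger{sink: teeSink{sinks: []Sink{printSink{}, printSink{}}}}
                  log.Named("http").Info("listening")
              }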

          Related, Stackdriver or OpenCensus integration don’t belong in a core logger type. Automatic extraction and logging of OpenCensus data from e.g. contexts is the job of a helper function or middleware, not the core logger.

          Great point, I’ll move it into a separate package. Opened #69

          edit: Also opened #70 regarding the log levels.

          1. 1

            So we ended up keeping things as is.

            See https://github.com/cdr/slog/pull/73#issuecomment-564806085 regarding the opencensus coupling and https://github.com/cdr/slog/issues/70 regarding the levels.

      1. 3

        Wow, I love your site; it’s beautiful and extremely well designed. It will definitely influence my own personal site, thanks :)

        1. 1

          I must ask. I’m using capnproto and can only work over interfaces that fit the net.Conn interface. It looks, at a casual glance, like this library does fit that, but there is a note in the documentation saying it isn’t actually exposed?

          Might this library be a good fit for my use case?

          1. 2

            Should be good. net.Conn isn’t exposed, but you get an io.Reader and an io.Writer. That should be all you need for capnproto.
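
            If the transport wants a single value rather than two, the usual embedding trick glues the halves together (a generic sketch; capnproto’s actual transport constructors may expect something slightly different):

                package transport

                import "io"

                // RWC glues separate read/write/close halves into one io.ReadWriteCloser,
                // the shape many RPC transports accept in place of a full net.Conn.
                type RWC struct {
                    io.Reader
                    io.Writer
                    io.Closer
                }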

            1. 1

              Please discard my other comment. I think I misinterpreted something, there is a net.Conn wrapper now in the library.

              1. 2

                Interesting, because I just finally tried gorilla/websocket and was about to write my own wrapper because its websocket.Conn has no Read(). I’ll just try this!

              1. 1

                With OpenCensus you get zpages, which do pretty much exactly what the author is talking about.

                https://opencensus.io/zpages/

                1. 3

                  Yes, but OpenCensus is going to be shut down (probably this year) and folded into OpenTelemetry.

                  1. 2

                    OpenTelemetry is nice, but you can’t use it yet. Well, maybe you can in Java, but not in any of the other languages. OpenCensus is here right now, and its API will be compatible with OpenTelemetry.

                    1. 2

                      It will be similar, but it isn’t yet decided whether it will be 1:1 compatible (source: I am part of the team that implements OC/OT in Erlang, and we are discussing it).

                      1. 1

                        The site states

                        We are still working on the first production-ready release of OpenTelemetry. For those who want to start instrumenting production code immediately, use either OpenCensus or OpenTracing. OpenTelemetry will provide compatibility bridges and there is no need to wait for production-ready OpenTelemetry APIs in your language

                        So even if it’s not 1:1, the compat layer should be fine.

                1. 1

                  I was convinced by this post to start using code folding by default in Go. I gotta say, I’m definitely enjoying it; it’s just so much more natural to see a file’s structure inline with the code.

                  1. 7

                    This article was a bit thin for me. It boils down to a comparison between OpenAPI and gRPC, and the argument is the size/complexity of the description. However, it doesn’t take into account the size of the supporting infrastructure or the complexity of tooling and debugging techniques (even though it does mention that one can’t just curl anymore to do a quick check). It would be interesting to hear other people’s experiences after switching to gRPC, especially from the Ops side of the world – did things become more complex?

                    1. 10

                      Absolutely. I regret using it at my current job. The documentation is, IMO, very poor for more advanced use cases. REST/HTTP/1.1 is well understood, easy to debug, and its performance is more than enough if you’re not Google/Facebook/Twitter. Furthermore, it’s very well supported by lots of tooling. E.g. k8s doesn’t have a gRPC health check without downloading some binary and putting it in every single one of your containers. I think the main issue I have with gRPC is the insistence on HTTP/2 when HTTP/1.1 would have worked fine. I have more issues with gRPC as well; I need to write a blog post.
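
                      For the record, the health-check workaround is to register the stock health service and point that external probe binary at it. A sketch (assuming google.golang.org/grpc and its health package; the port is made up):

                          package main

                          import (
                              "log"
                              "net"

                              "google.golang.org/grpc"
                              "google.golang.org/grpc/health"
                              healthpb "google.golang.org/grpc/health/grpc_health_v1"
                          )

                          func main() {
                              lis, err := net.Listen("tcp", ":50051")
                              if err != nil {
                                  log.Fatal(err)
                              }
                              s := grpc.NewServer()
                              // Serve the standard health service so an external probe
                              // binary (e.g. grpc_health_probe) can check liveness.
                              healthpb.RegisterHealthServer(s, health.NewServer())
                              log.Fatal(s.Serve(lis))
                          }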

                      1. 2

                        I have more issues with gRPC as well; I need to write a blog post.

                        Please do! I’ve avoided gRPC itself, but for example I’m a fan of capnproto.

                        1. 1

                          Shouldn’t the comparison be between protobuf and capnproto instead? AFAIK, capnproto provides a serialization/deserialization framework rather than an RPC framework…

                          1. 1

                            Capnproto has some libraries that only provide serialization, but it’s mostly that plus awesome RPC.

                      2. 2

                        I haven’t thought too much about how necessary this would be with gRPC, but we had a fairly similar binary protocol at work for service-to-service communications that ALSO exposed an HTTP+JSON bridge for curlability, which worked really well!
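
                        Something in this spirit, presumably (a toy sketch of the bridge idea; all names are made up): the internal method stays the source of truth, and a thin handler exposes it as JSON over HTTP so you can curl it.

                            package main

                            import (
                                "encoding/json"
                                "net/http"
                            )

                            type AddRequest struct{ A, B int }
                            type AddResponse struct{ Sum int }

                            // add is the internal service method; the binary protocol would call it too.
                            func add(req AddRequest) AddResponse { return AddResponse{Sum: req.A + req.B} }

                            func main() {
                                // The bridge: decode JSON, call the internal method, encode JSON.
                                http.HandleFunc("/add", func(w http.ResponseWriter, r *http.Request) {
                                    var req AddRequest
                                    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
                                        http.Error(w, err.Error(), http.StatusBadRequest)
                                        return
                                    }
                                    json.NewEncoder(w).Encode(add(req))
                                })
                                http.ListenAndServe(":8080", nil)
                            }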

                        1. 2

                          It never even actually talked about gRPC, just Protobufs. I built a microservice using Protobufs without gRPC; they are not equivalent.

                          1. 1

                            Conflating these two was a huge source of pain on the project I’m on at work; protobufs have been a nightmare while gRPC itself has been fine. (Not helpful at all, but not actively slowing us down the way protobuf did.)

                            1. 1

                              Wild – I had the inverse experience, where protobufs have been useful and neat but gRPC has been hell every step of the way. We swapped out gRPC for Twirp (which still uses protobufs), and things are happy.

                              1. 2

                                YMMV as always; contributing factors in this case included:

                                • we’re on the JVM
                                • we already had been using a much more thorough and descriptive way of declaring the shape of our data (clojure.spec)
                                • encoding efficiency was very far from being a performance bottleneck
                                • these were all internal APIs that were always being called from other Clojure codebases rather than being a public endpoint called by who-knows-what clients
                                1. 1

                                  I don’t think protobufs are a good choice outside of a multi-language environment. If everything is using the same language, then just share a service library.

                                  In my (limited) experience with it, we were adding a microservice layer in Go that talks to a couple Ruby services. Being able to spec out data types and generate code for each platform is really nice.

                          2. 2

                            Twitch apparently had issues with gRPC, and made their own thing instead.

                            1. 3

                              We were also using it at work, and switched off of it. The complexity wasn’t worth it.

                              We replaced it with a combination of things, mostly a JSON schema/code generator that I wrote in ~500 lines of python, and plain HTTP or Unix domain socket requests.

                              It doesn’t do the same things as GRPC, but it covers the same pain points (documentation and syncing APIs across languages), and the fact that it’s both tiny and maintained in-house makes it malleable – if it doesn’t do what we want, we fix the framework.

                              1. 2

                                That article is 18 months old now, although it links to grpc-go issues that are two years older than the article. I wonder if anything has improved in gRPC / grpc-go in those two years, or in the 18 months since.

                              2. 2

                                What about streaming?

                              1. 2

                                https://sail.dev is another take on the same problem.

                                1. 3

                                  I’m writing my blog with create-react-app. So far I’m having a lot of fun working out the subtleties of the design and what I want my blog to be like. Also brainstormed a bunch of ideas for blog posts.

                                  I’m also working on putting out a stable release of https://nhooyr.io/websocket

                                  And some other top secret stuff :)

                                  1. 3

                                    I looked through the readme of the project and I just wanted to say I really appreciate that you had an entire section dedicated to justifying why the library is being written and a comparison to existing libraries.

                                  1. 8

                                    I’ve pretty much finished my GoCon Canada talk slides, working on my notes for what I’m gonna say next. Sneak Preview

                                    This weekend I was kind of active on Twitter. I posted a visualization of how I synesthetically experience language.

                                    I also wrote out a dream that really stuck with me.

                                    I’m working more on the theory for control streams/descriptors in Olin, as well as starting on a code generator for the syscall stubs. I hope to submit an Olin OS patch to Zig.

                                    1. 4

                                      I just wanted to say: you are interesting.

                                      1. 1

                                        Thanks. My life is an experience. It’s great.

                                      2. 2

                                        Wow, I had no idea we had a Go conference in Toronto. I’ll see you there :)

                                        edit: Noooo, tickets are sold out :(

                                        1. 2

                                          That dream is a good story!

                                        1. 1

                                          Recently I’ve found coffee helps a lot. I didn’t drink it at all my whole life, but now that I do, I find myself very focused after it. I know it wears off after a while, which is unfortunate.

                                          1. 1

                                            TLS ALPN is the elegant way to do this.

                                            1. 1

                                              How does TLS ALPN help? SSH is already a secure transport, it doesn’t need TLS.

                                              1. 1

                                                If you tunnel SSH over TLS, you can switch on ALPN to figure out what protocol is going to be spoken over the connection and handle it appropriately. He’s doing the same thing without ALPN, just sniffing the connection instead, which works but is less reliable.

                                                Furthermore, SSH is usually filtered at the protocol level whereas TLS is not.
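
                                                A sketch of the ALPN switch using Go’s crypto/tls (the SSH handoff is left as a comment; the cert paths are placeholders):

                                                    package main

                                                    import (
                                                        "crypto/tls"
                                                        "log"
                                                        "net"
                                                    )

                                                    func main() {
                                                        cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
                                                        if err != nil {
                                                            log.Fatal(err)
                                                        }
                                                        ln, err := tls.Listen("tcp", ":443", &tls.Config{
                                                            Certificates: []tls.Certificate{cert},
                                                            // Offer both protocols; the client picks during the handshake.
                                                            NextProtos: []string{"ssh", "http/1.1"},
                                                        })
                                                        if err != nil {
                                                            log.Fatal(err)
                                                        }
                                                        for {
                                                            conn, err := ln.Accept()
                                                            if err != nil {
                                                                continue
                                                            }
                                                            go func(c net.Conn) {
                                                                tc := c.(*tls.Conn)
                                                                if err := tc.Handshake(); err != nil {
                                                                    c.Close()
                                                                    return
                                                                }
                                                                if tc.ConnectionState().NegotiatedProtocol == "ssh" {
                                                                    // hand the decrypted conn to an SSH backend here
                                                                    return
                                                                }
                                                                // otherwise serve HTTP/1.1 on c
                                                            }(conn)
                                                        }
                                                    }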

                                                1. 1

                                                  I don’t see what’s unreliable about it. It’s 100% reliable.

                                                  Tunneling SSH over TLS sounds awful. That’s needless double encryption.

                                                  1. 1

                                                    It’s not needless if SSH is DPI filtered.

                                                  I did misinterpret the blog post, though: I thought he was doing SSH over TLS, but it’s just SSH over HTTP.

                                            1. 14

                                              🚲🏚

                                              The repetition and length in threads like this is disheartening. There’s no particularly problematic post, which means that there’s no “ban this jerk and the problem will go away” solution. There’s no trolling here. Just a bunch of people that all want to be heard, creating a thread that is too long to read all of it, and the result is that none of them read it and they post the same thing ten times. It’s a maddeningly systemic problem.

                                              I’m really looking forward to hearing how the Meta WG plans to mitigate this kind of thing.

                                              1. 1

                                                Similar issue occurred in the Go tracker: https://github.com/golang/go/issues/29934

                                                1. 0

                                                  What’s the value in letting anyone post anything on these threads? It’s not like the working groups don’t do a thorough search of the design space.

                                                  1. 5

                                                    We regularly get good ideas from such threads. If anything, most discussions err on the side of too few voices.

                                                1. 2

                                                  Looks very promising. I was just working on a video sync service that basically sends messages from the client to a server and back out to all the other clients when a video is selected, paused, etc. I thought about switching to gobwas/ws, but the exposed API wasn’t that great, so I may just switch to this when I get Go 1.12 on my system. Do you think there will be any noticeable differences, especially considering that I don’t host 1,000,000+ connections, but more like 5-15?

                                                  1. 3

                                                    I gave your code a read, and it looks like you just want a WebSocket server. You don’t need to wait for Go 1.12; only the client side of my library requires it. I’ll clarify the docs.
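
                                                    Roughly, a minimal echo server looks like this (a sketch only; the exact API may still shift before the stable release):

                                                        package main

                                                        import (
                                                            "net/http"

                                                            "nhooyr.io/websocket"
                                                        )

                                                        func main() {
                                                            http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                                                                c, err := websocket.Accept(w, r, nil)
                                                                if err != nil {
                                                                    return
                                                                }
                                                                defer c.Close(websocket.StatusInternalError, "unexpected exit")

                                                                // Echo frames back until the peer goes away.
                                                                for {
                                                                    typ, data, err := c.Read(r.Context())
                                                                    if err != nil {
                                                                        return
                                                                    }
                                                                    if err := c.Write(r.Context(), typ, data); err != nil {
                                                                        return
                                                                    }
                                                                }
                                                            }))
                                                        }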

                                                    1. 1

                                                      Oops, forgot to mention that. I’ll try it out then, thanks for the notice.

                                                    2. 1

                                                      You should be good, the performance will be more than enough :)

                                                    1. 3

                                                      Finally feel comfortable with releasing https://nhooyr.io/websocket

                                                          1. 2

                                                            The site doesn’t work for me; do you mean github.com/nhooyr/ws? I am not a Go user, but it looks good. Having a relevant README is a huge plus.

                                                            1. 1

                                                              Yeah, it’s broken for some reason. Will fix, thanks.

                                                          1. 2

                                                            https://nhooyr.io/ws

                                                            An improvement over the gorilla/websocket and gobwas/ws WebSocket libraries for Go.

                                                            1. 10

                                                              in monorepos not everything depends on everything else. Thus, there will be jobs that only need parts of the repo, which will end up as a full checkout every time on the CI. This is just one instance, but in general, all operations will start taking more and more time, leading to people doing workarounds, leading to too much special-casing.

                                                              This leads me to the question of, why do you want a monorepo in the first place? Multiple repositories is exactly the intended solution for this problem from the Git perspective.

                                                              1. 7

                                                                Multiple repos usually implies submodules. In my opinion, submodules do not scale well to many developers. There is nothing that cannot be solved with additional scripts and other workarounds, but it adds friction. Submodules are annoying even in small teams, because a change in a submodule has to be committed twice.

                                                                The power to change lots of places with a single atomic commit is sometimes critical. At Amazon, if you change an API, you had better do it in a backwards-compatible way and maintain that for a while. You need version numbers and a release process. At Google, you just adapt all the API users in the same commit. OK, this over-simplifies: in both cases you need special tooling. But it is a feature that is simply not possible with multiple git repos, just as permission control is not really possible within a single git repo.

                                                                1. 6

                                                                  At Amazon, if you change an API, you had better do it in a backwards-compatible way and maintain that for a while. You need version numbers and a release process.

                                                                  Yes, and the idea is that a service run by one team is consumed by other teams. This is designed to remove the need for multi-repo/service/team lockstep changes. It encourages good ownership and reduces conflict between teams.

                                                                  Unfortunately, the hype around microservices led many companies to have more services than employees. API versioning and backward compatibility become very expensive… and in comes the monorepo model.

                                                                  With one big repo and lockstep changes, builds, and deployments, it’s the perfect “distributed monolith”: all the problems of the old monolith (complexity, team interactions, lack of modularity…) plus the difficult debugging and overhead of microservices.

                                                                  1. 3

                                                                    more services than employees

                                                                    I am sincerely sad I never thought of such a great, short, obvious, accurate, and vicious description of the problem I have seen at many client sites.

                                                                  2. 2

                                                                    Yes, exactly this. Working with multiple repositories leads to a lot of submodules to model code dependencies. The natural progression of avoiding submodules then leads to either storing compiled artifacts in something like Artifactory and having binary dependencies (which leads to horrible cascading releases) or bringing everything into a single repository (which breaks down if developers do not unify build systems).

                                                                  3. 9

                                                                     Cross-repo changes are annoying, especially when the company and codebase are young. It’s far easier to avoid technical debt with a monorepo.

                                                                    Personally, I think the best solution is a middle ground. Scale up with a monorepo until it becomes too clunky and then split things out where it makes sense.

                                                                    1. 3

                                                                       Particularly if the org has a standard like “only QA with RELEASE dependencies”. It can take f-cking months for all the planets to align to get new features into downstream libraries and applications. There’s also typically a lot of ceremony around releases in companies that build and sell turnkey software, or even host it for customers. That makes the entire process infuriating and slow (but this is in fact a feature).

                                                                      In this type of case I can see where a monorepo would be advantageous.

                                                                      1. 2

                                                                        Cross-repo changes are annoying, especially when the company and codebase are young

                                                                        Exactly, so why do you need multiple repositories? Because of the microservices madness? When you don’t even know the boundaries of your software, why try to split it?

                                                                      2. 4

                                                                        I’m not sure the concept of “monorepo” is well defined in the case of startups. It comes from the vocabulary of huge corporations with myriad different projects, but startups typically just work on a single thing, which may be divided into various components like server and client. If you previously had two repositories foo-server and foo-client which you now merge into one repository foo, do you now have a “monorepo”? I don’t really think so.

                                                                        From a startup perspective, I think there’s a real issue with premature multiplication of repositories. This has been the case at several startups I’ve worked on. Many people report increased productivity and happiness after merging their startup’s several repositories into one. When a team works on several repositories, it’s like your very source code is a complex distributed system. Working on developer tooling and build configuration becomes trickier.

                                                                        A recent contract of mine was with a small business with around a dozen developers working on literally hundreds of repositories, one for each “project.” GraphViz had trouble plotting the dependency graph. Adding a feature to the system would usually involve committing to at least two or three repositories, but some changes might require committing to dozens. Of course this also multiplies external dependencies since each project pins its own versions. I think merging all of their repositories into one would have been great for the developers.

                                                                        1. 2

                                                                          I’m not sure the concept of “monorepo” is well defined in the case of startups.

                                                                          Sure it is: When you make a commit, can you make it across all components at once, or do you need to break it up into one commit per component? That’s all a monorepo means.

                                                                          1. 1

                                                                            What if your startup doesn’t consist of many different “projects” but is just e.g. one Rails app developed by a single team? That doesn’t really seem like a special monorepo… just a repo! For example, Wikipedia says (I know it’s not an authority) “few build tools work well in a monorepo” and this indicates that the name refers to a repo that is an especially large amalgamation of different projects.

                                                                            1. 2

                                                                              The key here is that ‘monorepo’ is not a bad thing. Indeed, for many many companies and teams, it’s a good thing.

                                                                              One doesn’t necessarily need a single repo, but the number of repos should be relatively few, because the number of interactions tends to scale exponentially with the number of repos. Before too long, all any developer does is manage repo interactions, and no one has time to write any actual code.

                                                                          2. 2

                                                                            True, “monorepo” might be a stretch for some startups, but I’ve seen cases where it starts from exactly the case you mention and then moves on to having several components, not each needing all the others, as there might be several products, or at least several attempts at products. I can definitely confirm that after merging several repositories into a single one, happiness and productivity jump up, but only as long as the tooling can keep up.

                                                                            The comparison with complex distributed system is spot on :)

                                                                          3. 4

                                                                          Monorepos are one of those things that sound stupid until you try them, and then everyone I know who has been developing in a monorepo laments whenever they can’t do it anymore.

                                                                            1. 1

                                                                              Fair enough! I hope I end up in a position where I can try it, then.

                                                                              1. 1

                                                                              It seems to be necessary only in a professional setting, though. There are not that many monorepos in the open-source world, which leads to the situation where open-source version control systems do not support that use case very well. Allegedly, closed-source ones (Plastic, Perforce) do, but I have no experience with them.

                                                                                1. 1

                                                                                I use a personal monorepo for all my projects.

                                                                                  1. 1

                                                                                  The BSDs (FreeBSD, certainly) use a monorepo. It’s one of their defining differences.