1. 21

  2. 6

    I am considering switching to other languages for my next project or migrating slowly. Will have to wait and see I suppose.

    The main strengths of Go (from my point of view) are good libraries and no annoying sync/async code barrier.

    The main weaknesses are a runtime that makes using C libraries hard, and a certain feeling of kludginess because users can’t define things like try or whatever themselves. ‘Go 2’ doesn’t really change anything.

    1. 4

      I consider myself relatively neutral when it comes to Go as a language. What really keeps me from investing much time or attention in it is how its primary implementation isolates itself so completely as if to compel almost every library to be rewritten in Go. In the short term this means it will boost the growth of a vibrant ecosystem but I fear a longer term world where the only reasonable way to interoperate between new languages and systems which don’t fit into Go’s model is to open a socket.

      I don’t think we need to be alarmist about bloated Electron apps, but in general, we’re talking about many orders of magnitude of cost increase for language interoperation. This is not the direction we should be going, and I fear Go has set a bad precedent with its answer to this problem. Languages will evolve and systems will too, but if we have to climb higher and higher walls every time we want to try something new, we’ll eventually be stuck in some local optimum.

      I’d like to see more investment in languages, systems, and runtimes sitting between them that can respond to new ideas in the future w/o involving entirely new revisions of a language with specific features responding to specific problems. Perhaps some version of Go 2 will get there but at the moment it seems almost stuck on optimizing for today’s problems rather than looking at where things are going. Somewhere in there is a better balance and I hope they find it.

      1. 4

        Yeah - I really want to use either GNU Guile or Janet to write HTTP handlers for Go; with the current system it is not really possible to do it well.

        There are multiple implementations of Lua in Go for the same reason: poor interop if your code isn’t written in Go and you want two-way calls.
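        For context, this is roughly the kind of handler the parent would like to write in Guile or Janet instead of Go. A minimal pure-Go sketch (handler name and route are made up for the example; it uses httptest to exercise the handler in-process rather than binding a socket):

        ```go
        package main

        import (
        	"fmt"
        	"net/http"
        	"net/http/httptest"
        )

        // hello is the sort of handler you would want to implement in
        // another language and plug into Go's net/http stack.
        func hello(w http.ResponseWriter, r *http.Request) {
        	fmt.Fprintf(w, "hello, %s", r.URL.Path[1:])
        }

        func main() {
        	// Drive the handler directly instead of starting a server.
        	req := httptest.NewRequest("GET", "/world", nil)
        	rec := httptest.NewRecorder()
        	hello(rec, req)
        	fmt.Println(rec.Body.String())
        }
        ```

        The difficulty the parent describes is that http.HandlerFunc is a Go function type, so a Guile or Janet implementation would need two-way calls through cgo on every request.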

        1. 3

          A crucial part of this is that Go was explicitly, deliberately created as a language to write network servers in.

          In that context, of course the obvious way to interop with a go program is to open a socket.

          1. 2

            Sure. Priorities make RPC look like their main goal, but the cost of an RPC call is on an entirely different level than a function call, and it comes with a lot of complexity: an accidental distributed system is now required just to call some logic written in another language.

            At a company where everything is already big and complex, this may seem like a small price, but it’s becoming a real cost today: we see people opting to write pure Go libraries, passing on shareable libraries or duplicating effort. In many cases this becomes a driver that kills the diversity in technical choices I talk about in my original comment above.

            It’s an obvious problem, but the Go team would rather drive people away from systems-level interoperability for Go’s short-term gains. They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this, so it’s possible; and secondarily, Go supposedly isn’t a Google project but a community one, yet we see it clearly being managed from one side of this coin.

            1. 1

              In my experience, it’s the quality of the golang tools driving this.

              For instance: I found it easier to port a (smallish) library to Go and cross-compile the resulting Go code than to cross-compile the original C library.

              I initially considered porting to Rust, which is imo a delightful language, but even though cross-compilation is much easier in Rust than in C (thanks to rustup), it doesn’t compare to Go.

              The process for C:

              • For each target arch, research the available compiler implementations; packages are often unavailable or broken, so you’ll be trying to build at least a few from source, probably on an unsupported platform.

              The process for Rust:

              • For each target arch, ask rustup to fetch the toolchain. It’ll tell you to install a bunch of stuff yourself first, but at least it tends to work after you do that.

              The process for Go:

              • Set an environment variable before running the compiler.
              1. 1

                … so we see people opting to write pure Go libraries, passing on shareable libraries or duplicating effort. In many cases this becomes a driver that kills the diversity in technical choices I talk about in my original comment above.

                It’s unclear to me why having another implementation of something instead of reusing a library reduces diversity rather than increasing it.

                They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this, so it’s possible

                I’m personally a maintainer of one of the most used OpenSSL bindings for Go, and I’ve found the FFI to be a very real option. That said, every runtime has its own constraints and difficulties. Are you aware of any ways to do the FFI better that would work in the context of the Go runtime? If not, can you explain why not? If the answer to both of those is no, then your statements are just unfounded implications and fear-mongering.

                Go supposedly isn’t a Google project but a community one, yet we see it clearly being managed from one side of this coin.

                And yet, I’m able to have changes included in the compiler and standard library, provide feedback on proposals, and have stewards of the language directly engage with my suggestions. My perspective is that they do a great job of listening to the community. Of course they don’t always agree with everything I say, and sometimes I feel like that’s the unattainable bar that people hold them to in order to say that it’s a community project.

                1. 1

                  The specific issues with Go FFI interop are usually around dealing with structured data rather than buffers of bytes and integers. Data-layout ABI options would be a big plus. Pinning shared data would also help tremendously in avoiding the extra copying or marshaling that is required in many of these cases. On the other side, calling into Go could be made faster in a number of ways, particularly by being able to cache thread-local contexts for threads Go doesn’t manage (these are currently set up and torn down for every call in this direction).

                  There are also plenty of cases where construction of movable types could be supported with proper callbacks provided but instead Go opts to disallow sharing any of these data types entirely.
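                  As a concrete illustration of the buffers-versus-structured-data point above, here is a minimal cgo sketch (the C function sum_bytes is made up for the example). Passing a flat byte buffer works directly, because cgo lets C borrow a pointer to Go memory for the duration of a call; anything containing Go pointers would have to be copied or marshaled instead:

                  ```go
                  package main

                  /*
                  #include <stddef.h>
                  #include <stdint.h>

                  // Hypothetical C function that sums a flat buffer of bytes.
                  static uint64_t sum_bytes(const uint8_t *p, size_t n) {
                      uint64_t s = 0;
                      for (size_t i = 0; i < n; i++) s += p[i];
                      return s;
                  }
                  */
                  import "C"

                  import (
                  	"fmt"
                  	"unsafe"
                  )

                  func main() {
                  	buf := []byte{1, 2, 3, 4}
                  	// A pointer into Go memory may be passed for the duration of
                  	// the call; the C side must not retain it, and the runtime
                  	// checks that the pointed-to memory holds no Go pointers.
                  	s := C.sum_bytes((*C.uint8_t)(unsafe.Pointer(&buf[0])), C.size_t(len(buf)))
                  	fmt.Println(uint64(s))
                  }
                  ```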

                2. 1

                  They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this, so it’s possible; and secondarily,

                  It’s been a while since I actively used and followed Go, but isn’t the problem that they would have to forgo the ‘run a gazillion goroutines’ model if they wanted to support a real FFI? To support an extremely large number of goroutines, they need small but growable stacks, which means they have to do stack switching when calling C code. Plus Go doesn’t have the same calling conventions.
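                  The trade-off being described can be seen in a small sketch: spawning a hundred thousand goroutines is unremarkable in Go precisely because each one starts on a tiny growable stack, and that same design is what forces a switch to a large C-style stack on every cgo call.

                  ```go
                  package main

                  import (
                  	"fmt"
                  	"sync"
                  	"sync/atomic"
                  )

                  func main() {
                  	// Each goroutine starts on a small growable stack (a few KB),
                  	// so launching 100k of them is cheap. C code, by contrast,
                  	// expects a large fixed stack, hence the per-call switch.
                  	const n = 100000
                  	var wg sync.WaitGroup
                  	var total int64
                  	for i := 0; i < n; i++ {
                  		wg.Add(1)
                  		go func() {
                  			defer wg.Done()
                  			atomic.AddInt64(&total, 1)
                  		}()
                  	}
                  	wg.Wait()
                  	fmt.Println(total)
                  }
                  ```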

                  In many respects they have designed themselves into a corner that is hard to get out of without upsetting users and/or breaking backwards compatibility. Of course, they may be happy with the corner that they are in.

                  That said, Go is not alone here, e.g. native function calls in Java are also expensive. It seems that someone has made an FFI benchmark ;):

                  https://github.com/dyu/ffi-overhead

              2. 2

                I generally sympathize with your main argument (personally, I also miss easier C interop, especially given that it was advertised as one of the initial goals of the language) - but on the other hand, I don’t think you’re doing justice to the language in this regard.

                Specifically, AFAIK the situation with Go is not really much different from other languages with a garbage collector - e.g. Java, C#, OCaml, etc, etc. Every one of them has some kind of a (more or less tricky to use) FFI interface to C; in case of Go it’s just called cgo. Based on your claim, I would currently assume you don’t plan to invest much time in any other GCed language either, is that right?

                1. 2

                  I can’t speak to modern JVMs, but OCaml and C# (.NET Core and Mono) both have much better FFIs, both in support for passing data around and in terms of performance costs. It’s hard to overstate this: cgo is terribly slow compared to other managed-language interop systems, and it is getting slower, not faster, over time.

                  I’ll let folks draw their own conclusions on whether this is intentional or just a limitation of resources, but the outcome is a very serious problem for long-term investments in a language.

                  1. 1

                    I think it’s important to quantify what “terribly slow” is. It’s on the order of ~100ns. That is more than sufficient for a large variety of applications.
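                    The figure is easy to sanity-check with a micro-benchmark. This sketch times a no-op cgo call with the standard testing package; absolute numbers will vary by machine and Go version, so treat it as a way to measure, not as a claimed result:

                    ```go
                    package main

                    /*
                    // Trivial C function: the benchmark measures pure call overhead.
                    static void nop(void) {}
                    */
                    import "C"

                    import (
                    	"fmt"
                    	"testing"
                    )

                    func main() {
                    	// testing.Benchmark works outside a _test.go file too.
                    	res := testing.Benchmark(func(b *testing.B) {
                    		for i := 0; i < b.N; i++ {
                    			C.nop()
                    		}
                    	})
                    	fmt.Printf("cgo call overhead: %dns/op\n", res.NsPerOp())
                    }
                    ```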

                    You also seem to be implying that it’s intentionally slow. Do you have any engineering evidence that this is happening? In other words, are you aware of any ways to make the FFI go faster?

                    1. 1

                      Not in my experience. Other than a trivial call with no args that returns nothing, it is closer to 1 microsecond for Go calling out in many cases, because of how argument handling has to be done, and around 10 microseconds for non-Go code calling Go.

                      1. 1

                        It is indeed slower to call from C into Go for various reasons. Go to C calls can also be slower depending on how many arguments contain pointers because it has safety checks to ensure that you’re handling garbage collected memory correctly (these checks can be disabled). I don’t think I’ve ever seen any benchmarks place it at the microsecond level, though, and I’d be interested if you could provide one. There’s a lot of evidence on the issue tracker (here or here for example) that show that there is interest in making cgo faster, and that good benchmarks would be happily accepted.

                  2. 2

                    Every one of them has some kind of a (more or less tricky to use) FFI interface to C; in case of Go it’s just called cgo. Based on your claim, I would currently assume you don’t plan to invest much time in any other GCed language either, is that right?

                    LuaJIT C function calls are apparently as fast as from C (and under some circumstances faster):

                    https://nullprogram.com/blog/2018/05/27/

              3. 4

                500 comments on the try() proposal. Rust was lucky that it added the try!() macro while nobody was looking :D