1. 4

    …or just use UNIX sockets.

    1. 1

      But that only works for comms b/w procs on the same machine!

      1. 1

        Which is what the article is about (as well as ephemeral ports).

        1. 1

          I thought it was about using WebSockets. Did I miss something?

          1. 6

            No more than the article is about Ruby.

            Ephemeral port exhaustion only happens when using TCP. If you are proxying to localhost, UNIX or anonymous sockets are a far better option; they also have less overhead.

            1. 2

              I was wondering, is there any downside of binding to UNIX sockets instead of regular TCP ones?

              1. 4

                Other than it being a host-local-only socket, not really, though portability to Windows might be important to you. Maybe you are fond of running tcpdump to packet-capture the chit-chat between the front and back ends, which UNIX sockets would prevent; though if you are doing that, you are probably just as okay with using strace instead.

                From a developer perspective, instead of connecting to a TCP port you just connect to a file on your disk; the listener creates that file when it binds to the UNIX socket, and nothing else is different. The only confusing gotcha is that you cannot ‘re-bind’ if the UNIX socket file already exists on the filesystem; for example, when your code bombed out and was unable to mop up. Two ways to handle this:

                1. unlink() (delete) any previous stale UNIX socket file before bind()ing (or starting your code); most do this, as do I (see the sketch after this list)
                2. use abstract UNIX sockets (a Linux-specific feature), which work functionally identically but do not create files on the filesystem, so there is no need to unlink. You need to take care with the naming of the socket though, as all the bytes in sun_path contribute to the reference name, not just the bytes up to the NUL terminator
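
                For illustration, a minimal sketch of option 1 in Go (the socket path and server body are hypothetical; any language with a sockets API looks much the same):

                package main

                import (
                    "fmt"
                    "net"
                    "net/http"
                    "os"
                )

                func main() {
                    const sock = "/run/myapp.sock" // hypothetical path

                    // unlink() any stale socket file left over from a crash;
                    // ignore the error if the file simply does not exist
                    if err := os.Remove(sock); err != nil && !os.IsNotExist(err) {
                        panic(err)
                    }

                    ln, err := net.Listen("unix", sock) // bind() creates the file
                    if err != nil {
                        panic(err)
                    }
                    defer ln.Close()

                    http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                        fmt.Fprintln(w, "hello over a UNIX socket")
                    }))
                }

                You can then poke at it with curl --unix-socket /run/myapp.sock http://localhost/.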

                Personally, what I have found works with teams (for an HTTP service) is that in development the backend presents as a traditional HTTP server listening over TCP, so everyone can just use cURL, their browser directly, or whatever they like. In production, though, a flag is set (well, I just test whether STDIN is a network socket) to go into UNIX socket/FastCGI mode.

                As JavaScript/Node.js is effectively a lingua franca around here, this is what that looks like:

                $ cat src/server.js | grep --interesting-bits
                const http = require('http');
                const fcgi = require('node-fastcgi');
                
                const handler = function(req, res){
                  ...
                };
                
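                // Under systemd socket activation STDIN is the listening socket,
                // so serve FastCGI; otherwise listen on TCP for development.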
                const server = fcgi.isService()
                  ? fcgi.createServer(handler).listen()
                  : http.createServer(handler).listen(8000);
                
                server.on('...', function(){
                  ...
                });
                
                $ cat /etc/systemd/system/sockets.target.wants/myapp.socket 
                [Unit]
                Description=MyApp Server Socket
                
                [Socket]
                ListenStream=/run/myapp.sock
                SocketUser=www-data
                SocketGroup=www-data
                SocketMode=0660
                Accept=false
                
                [Install]
                WantedBy=sockets.target
                
                $ cat /etc/systemd/system/myapp.service
                [Unit]
                Description=MyApp Server
                Before=nginx.service
                
                [Service]
                WorkingDirectory=/opt/myorg/myapp
                ExecStartPre=/bin/sh -c '/usr/bin/touch npm-debug.log && /bin/chown myapp:myapp npm-debug.log'
                ExecStart=/usr/bin/multiwatch -f 3 -- /usr/bin/nodejs src/server.js
                User=myapp
                StandardInput=socket
                StandardOutput=null
                #StandardError=null
                Restart=on-failure
                ExecReload=/bin/kill -HUP $MAINPID
                ExecStop=/bin/kill -TERM $MAINPID
                
                [Install]
                WantedBy=multi-user.target
                

                The reason for multiwatch in production is that you get forking and high-availability reloads. Historically I would also have used runit and spawn-fcgi, but systemd has made this no longer necessary.

              2. 1

                Agreed.

            2. 1

              Local load balancing is the motivating example, but I wrote it to highlight the general problem of load balancing a large number of connections across a small number of backends (potentially external machines).

              UNIX sockets might be a reasonable solution to the particular problem in the post. It’s not something I’ve tried with HAProxy before though, so I’m not sure how practical it would be.

        1. 4

          I dislike these kinds of posts because instead of discussing effective uses of Go they discuss how to imitate language X in Go. That’s just not an appealing way to use a programming language.

          1. 6

            Many developers learning LISP and functional languages have said it changed how they think about certain problems, with their coding style picking up on that. Some people also imitate useful idioms to get their benefits. So, with no claim about this one in particular, I think it’s always worth considering how one might expand a language’s capabilities.

            Double true if it has clean metaprogramming. :)

            1. 2

              I don’t entirely disagree. Maybe it’s just the quality of most of these posts that leaves something to be desired.

            2. 6

              The goal of the post was to show how you would solve problems in Go that you would commonly use sum types for in other languages, not how to “get” sum types in Go.

              I agree that the first two approaches are trying to imitate sum types, and there are disadvantages to that. But I would argue that using a visitor pattern is quite different, and is the “Go way” (as in, it’s the only way that works harmoniously with the type system).
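
              A rough sketch of what that looks like, with illustrative names rather than the article’s:

              package shape

              // One Visit method per variant; implementing Visitor forces you to
              // handle every case, which is the exhaustiveness check sum types
              // normally give you for free.
              type Visitor interface {
                  VisitCircle(c Circle)
                  VisitRect(r Rect)
              }

              // The unexported method "seals" the interface: only types in this
              // package can implement Shape, so the set of variants is closed.
              type Shape interface {
                  Accept(v Visitor)
                  sealed()
              }

              type Circle struct{ Radius float64 }

              func (c Circle) Accept(v Visitor) { v.VisitCircle(c) }
              func (Circle) sealed()            {}

              type Rect struct{ W, H float64 }

              func (r Rect) Accept(v Visitor) { v.VisitRect(r) }
              func (Rect) sealed()            {}

              Adding a new variant then breaks every existing Visitor implementation at compile time, rather than falling through a type switch at runtime.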

            1. 7

              This only covers a small fraction of the use cases of sum types; namely, when there is a small set of standardized tasks that is shared across multiple types.

              You probably wouldn’t even use a sum type for this in Haskell or Rust; you would use a typeclass or a trait, which is basically what the author ended up doing in Go.

              By far the most useful feature of sum types (and further generalizations on multi-constructor types, like GADTs) is the exact representation of types with non-power-of-2 cardinalities. It’s hard to appreciate this if you’re used to working without it, but this single feature probably eliminates (conservatively) 60-70% of logic bugs I would make in languages like C or Java. I am not aware of any pattern or technique that satisfyingly reproduces this power in languages without native sum types.

              1. 3

                Could you give a simple example of that which a Go programmer might run into?

                1. 3

                  The classic example is the null pointer. You want to represent either your data structure D or some special case representing absence or whatever. This has cardinality |D| + 1. The null pointer is the traditional way to express this, and it’s bad for obvious reasons.

                  The second most straightforward example is when you have two different data structures depending on the situation. Let’s say an error description or a success result. This has size |D| + |E|.
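
                  To put that in Go terms (a hedged illustration, not something from the article): the conventional (value, error) pair is a product type, so it has roughly (|D| + 1) × (|E| + 1) states, and the illegal combinations remain representable:

                  package main

                  import (
                      "errors"
                      "fmt"
                  )

                  type Data struct{ N int }

                  // Four combinations of (data, err) are representable, but only
                  // two are meaningful; (nil, nil) and (non-nil, non-nil) are
                  // illegal states the type cannot rule out, and nothing forces
                  // the caller to check err before using the data.
                  func fetch(ok bool) (*Data, error) {
                      if ok {
                          return &Data{N: 42}, nil
                      }
                      return nil, errors.New("boom")
                  }

                  func main() {
                      d, err := fetch(true)
                      fmt.Println(d, err) // compiles even if we never branch on err
                  }

                  A sum type Result = Ok(D) | Err(E) has exactly |D| + |E| inhabitants: you can construct a value or an error, never both and never neither, and the compiler makes you handle both cases at the use site.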

                  Parsers are one of the most recognizable scenarios where you have types with weird sizes, corresponding to the various clauses of the grammar. This is, I believe, one of the primary things ADTs were invented for.

                  One I ran into recently was representing a bunch of instructions in an ISA and their respective arguments.

                2. 2

                  when there is a small set of standardized tasks that is shared across multiple types

                  Isn’t this what interfaces are for?

                  By far the most useful feature of sum types […] is the exact representation of types with non-power-of-2 cardinalities

                  It would be great if you could provide an example of how this is useful.

                1. 2

                  Java’s results are super surprising. I hold the JVM’s GC in extremely high regard, so I would love to see comments from someone who is more familiar with its implementation.

                  1. 10

                    Java is optimized for throughput, Go is optimized for latency. There is no free lunch.

                    1. 3

                      After reading into this more, it looks like the Java runtime has a number of GC algorithms available, and will use heuristics to pick one as the program runs. The goal of this is to allow it to perform well with either low latency or high throughput requirements.

                      In the Java benchmark results listed in the blog post, one version lets the runtime decide which algorithm to use, and the other explicitly uses the G1 collector. After reading the HotSpot docs, it looks like the concurrent mark and sweep (similar to Go’s) GC might perform well with low latency requirements.
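
                      For reference, the collector is chosen with startup flags in HotSpot; something like the following (the pause-time target and jar name here are illustrative):

                      $ java -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -jar benchmark.jar
                      $ java -XX:+UseConcMarkSweepGC -jar benchmark.jar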

                    2. 7

                      The Reddit user jcipar managed to get the max pause down to 22ms by tweaking parameters.

                      He also mentioned that the JVM GC does a lot of online tuning, so the max pause times may drop over a longer run of the program. This is similar to the Racket GC, where the maximum pauses are >100ms at the start of the run, but converge to around 20ms as the program continues to run.

                      It would be nice to run the benchmarks for a longer period of time, and only measure max pause times once this “ramp up” period is over.

                      1. 1

                        Ya - I was going to say. The magic of Java (and .NET, actually) is that they’re much better given long run times with their server GCs. I’d like to see the benchmarks over the course of a day or even a week.

                      2. 4

                        Gil Tene suggests part of this is the lack of compaction in Go:

                        .@jamie_allen Go’s (current) collectors don’t compact. Different problem space. Not compacting in Java mean not running very long.

                        1. 2

                          I wonder how they deal with heap fragmentation in that case?

                          1. 1

                            This makes sense at first blush. Java is pointer-mad.