1. 17

  2. 10

    I think an important direction for future programming language development is better support for writing single programs that span multiple nodes. It’s been done before (e.g. Erlang), but it would be nice to see tighter integration of network protocols into programming languages, or languages that can readily accommodate libraries that do this without a lot of fuss.

    There’s still some utility in IDLs like protobufs/capnproto in that realistically the whole world isn’t going to converge on one language any time soon, so having a nice way of describing an interface in a way that’s portable across languages is important for some use cases. But today we write a lot of plumbing code that we probably shouldn’t need to.

    1. 3

      I couldn’t agree more. Some sort of language feature or DSL or something would allow you to have your services architecture without paying quite so many of the costs for it.

      A few things come to mind:

      - Type-checking cross-node calls.
      - Service fusion: co-locating services that communicate with each other on the same node, to eliminate network traffic where possible.
      - RPC inlining: at my company we have RPC calls that amount to just CPU work, but they live in different repos and on different machines because they’re written by different teams; if the compiler had access to that information, it could eliminate that boundary.
      - Something like a query planner for complex RPCs that fan out into many backend RPC calls: we pass object IDs between services, but often many of those services need the data about the same underlying objects, so they all go out to the data access layer to look up the same objects.

      Some of that could be done by ops teams with implementation knowledge, but in our case those implementations change all the time, so they’d be out of date by the time the ops team figured out what’s going on under the hood. There’s a lot that a Sufficiently Smart Compiler(tm) could do given all of that information.

      1. 3

        There is also a view that it is a function of the underlying OS (not of a particular programming language) to seamlessly provide ‘resources’ (e.g. memory, CPU, scheduling) across networked nodes.

        This view is sometimes called a Single System Image OS (I briefly discussed that angle in that thread as well).

        Overall, I agree, of course, that creating safe, efficient, and horizontally scalable programs should be much easier.

        Hardware is going to continue to drive horizontal scalability capabilities (whether through multiple cores, multiple nodes, or multiple video/network cards).

        1. 2

          I was tempted to add some specifics about projects/ideas I thought were promising, but I’m kinda glad I didn’t, since everybody’s chimed in with stuff they’re excited about and there’s a pretty wide range. Some of these I knew about, others I didn’t, and this turned out to be way more interesting than if it had been about one thing!

          1. 2

            Yes, but: you need to avoid the mistakes of earlier attempts to do this, like CORBA, Java RMI, DistributedObjects, etc. A remote call is not the same as an in-process call, for all the reasons called out in the famous Fallacies Of Distributed Computing list. Earlier systems tried to shove that inconvenient truth under the rug, with the result that ugly things happened at runtime.

            On the other hand, Erlang has of course been doing this well for a while.

            I think we’re in better shape to deal with this now thanks to all the recent work languages have been doing to provide async calls, Erlang-style channels, actors, and better error handling through effect systems. (Shout out to Rust, Swift, and Pony!)

            1. 2

              Yep! I’m encouraged by signs that we as a field have learned our lesson. See also: https://capnproto.org/rpc.html#distributed-objects

              1. 1

                Cap’nProto is already on my long list of stuff to get into…

            2. 2

              Great comment, yes, I completely agree.

              This is linked from the article, but just in case you didn’t see it, http://catern.com/list_singledist.html lists a few attempts at exactly that, including my own http://catern.com/caternetes.html.

              1. 2

                This is what work like Spritely Goblins is hoping to push forward.

                1. 1

                  I think an important direction for future programming language development is better support for writing single programs that span multiple nodes.

                  I think the model that has the most potential is something near to tuple spaces. That is, leaning in to the constraints, rather than trying to paper over them, or to prop up anachronistic models of computation.

                  1. 1

                    better support for writing single programs that span multiple nodes.

                    That’s one of the goals of Objective-S. Well, not really a specific goal, but more a result of the overall goal of generalising to components and connectors. And components can certainly be whole programs, and connectors can certainly be various forms of IPC.

                    Having support for node-spanning programs also illustrates the need to go beyond the current call/return focus in our programming languages. As long as the only linguistically supported way for two components to talk to each other is a procedure call, the only way to do IPC is transparent RPCs. And we all know how well that turned out.

                    1. 1

                      Indeed! Stuff like https://www.unisonweb.org/ looks promising.

                    2. 2

                      Schemas and signatures have different purposes. Schemas are much more useful when you expect people to need/want to implement their own version of the protocol to better fit their requirements/needs/projects.

                      1. 1

                        Could you elaborate? Why do you say that?

                        1. 2

                          Not OP, but I think @glacambre is saying that schemas are less specific than signatures, which grants implementors more freedom when implementing/maintaining one. Server-client architectures don’t even have an ABI, so ABI compatibility isn’t a concern: I can switch to another runtime/compiler or even another language whenever I want, as long as the protocol doesn’t change.

                          Everything in an interface is a promise you shall keep indefinitely, so more expressiveness means more responsibility.

                      2. 1

                        Combining all of these in a project sounds much more complex, though. Especially in popular high level languages.

                        Maybe that’s just because we have lots of existing tooling for process management, RPC and service discovery.