1. 38

  2. 12

    On one hand, I’m sympathetic to the idea of bringing systems programming more in line with “systems thinking” and “systems theory” in other fields. On the other, I think under that definition of “systems programming” we don’t have any truly systems-oriented languages yet.

    1. 7

      That’s actually what Oil is supposed to be! (eventually) Maybe this connection isn’t obvious, but one way to think of it is:

      • A Unix shell is a language for describing processes on a single machine.
      • A distributed system is a set of Unix processes spread across multiple machines. [1]
      • So you can imagine extending shell to talk about processes on multiple machines. There are existing languages that do this, although sometimes they are only “config files”. I believe you need the expressiveness of a full language, and an extension of the shell is the natural candidate for that language. (A rough sketch of what this looks like with today’s tools follows this list.)
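
      To make that concrete, here is a rough, purely illustrative sketch of what “describing processes on multiple machines” degenerates into with today’s shell; the host names, the ./worker binary, and the port are all made up for the example:

        # Hypothetical cluster: host names, ./worker, and the port are invented.
        hosts='node1 node2 node3'

        for h in $hosts; do
          # Start one worker per machine and leave it running in the background.
          ssh "$h" 'nohup ./worker --port 8080 >worker.log 2>&1 &'
        done

      The idea above is that an extended shell would describe this wiring (processes, ports, placement) directly, rather than as an imperative loop over ssh.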

      I briefly mentioned this 18 months ago, in the parts about Borg/Kubernetes: Project Goals and Related Projects

      But I haven’t talked about it that much, because I want to keep things concrete, and none of this exists.

      I also mentioned Kubernetes in this blog post: Why Create a New Unix Shell?

      The build time and runtime descriptions of distributed systems are pretty disjoint now, but I think they could be moved closer together. Build time is a significant problem, not just a detail.

      I mentioned a couple books on the philosophy of systems here: Philosophy of Systems


      I like the OP’s framing of things. I agree that “systems programming” is overloaded and there should be another word for describing the architecture of distributed systems.

      Although I guess I totally disagree with the conclusion about OCaml and Haskell. I’m pretty sure we are talking about the same thing, but maybe not exactly.

      I guess he’s defining a “system” by the 5 qualities, which I agree with, but I’m picking out “distributed systems” as an important subset of systems that have those 5 qualities.

      My basic thesis is that Shell should be the language for describing the architecture of distributed systems. The architecture is essentially a set of processes and ports, and how they are wired together. And how they can be applied to a particular hardware/cluster configuration.

      Right now those two things are heavily entangled. We’re basically still in the era where you have to modify your (distributed) program to run it on a different computer (cluster).

      Concretely, I think a cleaner shell mostly needs Ruby-like blocks; with those, it can express a lot of things and do stuff like this:

      https://www.terraform.io/docs/configuration/syntax.html

      … which looks pretty similar to Google’s (internal) Borg configuration language. I described that language to someone as roughly “JSON with map/filter/cond/inheritance” :) It evaluates to protocol buffers that get sent to the Borg master, and then flags get sent to the Borg “slaves” on each machine.

      Kubernetes has almost exactly the same architecture as far as I can tell, but everybody seems to use Go templates to generate YAML. That is not a good description language! :-( Things seem to have gone backward in this respect when the tech made its way out of Google (where I used all this cluster / big data stuff for many years).


      Also, I’m not the only one who thinks this. I link to Why Next Generation Shell? in my FAQ. He uses the term “systems engineer” too, which is also overloaded.


      [1] And they really are Unix processes; it’s hard to think of any significant non-Unix distributed systems. I guess there are still many mainframes in similar roles, but I imagine the number of nodes is fairly small compared to Unix-based systems.

      1. 4

        Feature request for Oil: currying of commands…

        let targz = tar args… | gzip

        I think a proper ‘systems language’ would make composing OS processes trivial while looking a bit like OCaml or Reason ML. If I designed something, I would consider having a clean distinction between functions and processes, and then let processes be first-class things like functions: they could be curried, passed as arguments, or generated on the fly, just like closures in functional languages.

        The difference between ‘proc’ and ‘func’ would be that procs can interop with OS processes and can die or be cancelled.

        and a static type system…

        1. 3

          “I think a proper ‘systems language’ would make composing OS processes trivial while looking a bit like OCaml or Reason ML.”

          I’ve said this, too, but in the context of getting rid of pipes. I thought modularity and composition were good, but defaulting to processes and pipes wasn’t. At the least, we should have a choice. It seems we could specify what’s supposed to happen at a high level, with the implementation (e.g. pipes, function calls) generated later; the developer supplies the criteria for that. So, we start getting the benefits of high-level languages plus can still get close to specific functionality like how UNIX works. Might avoid maintenance and security issues, too.

          1. 3

            Oil will have proc and func, with exactly those keywords:

            http://www.oilshell.org/blog/2017/02/05.html

            proc is identical to current shell functions, which take argv and return an exit code, and can also be transparently put inside a pipeline or run in a subshell/command sub. It’s a cross between a procedure and a process.
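
            For example, a plain shell function today (this is ordinary bash, not Oil syntax) already behaves that way: it takes argv, returns an exit code, and can be dropped into a pipeline or a command sub transparently:

              # An ordinary shell function: takes argv, returns an exit code.
              count_matches() {
                grep -c "$1"
              }

              # Used inside a pipeline, like any external command...
              printf 'foo\nbar\nfoo\n' | count_matches foo

              # ...or inside a command substitution.
              n=$(printf 'foo\nbar\nfoo\n' | count_matches foo)
              echo "$n"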

            func is basically like Python or JavaScript functions.

            As for currying, I’d have to see some examples. I don’t see why a normal function syntax doesn’t solve the problem.

            By the way, a complex service at Google can have over 10K lines of the Borg config language, briefly mentioned here:

            https://research.google.com/pubs/pub43438.html?hl=es

            It is actually a functional language – it’s sort of like JSON with map/filter/cond/lambda/inheritance. And practically speaking, the functional style doesn’t really add anything. Most people seem to complain about masses of highly nested curly braces and the awkward lambdas for map and filter. Plenty of people at Google have a background in functional languages, and they don’t even like it as a “config” syntax.

            Some teams went as far as to develop an alternate Python-derived language for describing service configs instead. A coworker actually made a statically typed variant, which was abandoned.

            I basically think there is a confusion between functional-in-the-small and functional-in-the-large. I don’t care about functional-in-the-small – map/filter/cond etc. can be written in an imperative style. However, functional-in-the-large is very important in distributed systems: it lets you compose processes and reason about them.

            And you can write pure functions in an imperative style! In fact that is how most of my programs including Oil are written. They use zero mutable globals, and pure dependency injection of I/O and state, which is essentially equivalent to functional programming.

            More concretely, I want Oil to be familiar to both existing shell users, and users of common languages like Python, JavaScript, Go, etc. I’m trying not to invent any new syntax – it should all be borrowed from a popular language. I think Reason ML is great and they make some good critiques of the inconsistency of OCaml syntax, and they bring it closer to JavaScript. So Oil might look more like Reason ML than OCaml.

            1. 1

              The main advantage of functional vs imperative for me is that the former is easier to get correct the first time or during refactoring. This article has a good explanation of why functional programs are easier to verify than imperative ones.

            2. 2

              Whilst it’s a nice idea, for that example I would use a bash function:

              function targz {
                tar "$@" | gzip
              }
              

              Also, I’ve become less fond of command line arguments over time, and tend to prefer env vars instead for key/value things. That way we don’t have to care about their order, we don’t need to match up keys with values ourselves, they’re automatically propagated through wrapper scripts, etc. I tend to only use command lines for inherently sequential things, like a list of filenames to act on.

              I’m not sure if ‘currying environments’ makes sense, although something like Racket’s parameterize for env vars in the shell would be nice. I’ve actually written some Racket machinery which lets me set env vars this way when invoking subprocesses :)
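
              For what it’s worth, plain shell already gets part of the way to a parameterize-like form: an env var can be scoped to a single invocation, or to a subshell, without touching the parent environment. A small sketch (MODE and the wrapper scripts are made up):

                # Scoped to one command: only this child process sees MODE.
                MODE=debug ./run-wrapper.sh

                # Scoped to a subshell: everything inside sees MODE, nothing outside does.
                (
                  export MODE=debug
                  ./step-one.sh
                  ./step-two.sh
                )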

              1. 1

                I am interested in another example of a use case for this.

                1. 4

                  Try writing a Go program that does a lot of shelling out to commands like ssh, tar, gzip, gsutil or awscli, then try the same in bash. In bash you will have a crappy programming experience ‘in the large’; with Go you will have an overly verbose mess ‘in the small’.
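
                  For example, the shell version of “stream a directory to a remote host” is a one-liner, while the Go equivalent needs exec.Command plumbing, pipe wiring, and per-stage error handling (the commands and host name here are just an illustration):

                    # Archive, compress, and ship a directory in one pipeline.
                    tar -cf - ./data | gzip | ssh backup-host 'cat > data.tar.gz'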

                  1. 2

                    Yeah writing shell scripts in Go seems to be more and more common these days, because Go is a common “cloud service” language, and a lot of devs don’t know shell, or (understandably) want to avoid it.

                    My friend sent me a shell script rewritten in Go. I get why. It works, but it’s clunky.

                    Here’s another example:

                    https://jvns.ca/blog/2017/07/30/a-couple-useful-ideas-from-google/

                    Here’s a post about rewriting shell in Python that I link from my FAQ. IMO it inadvertently proves the opposite point: Python is clunky for this use case!

                    https://medium.com/capital-one-developers/bashing-the-bash-replacing-shell-scripts-with-python-d8d201bc0989

                    1. 1

                      TL;DR: I want to keep ad-hoc scripting and development separate.

                      Shell scripts are no good for programming, but I prefer all of the actual /code/ I use to be part of a well-identified project.

                      The pattern I strive for is to let scripts happen, then identify the actual task a script is doing, and replace the “horsework” the shell script was doing with a “workhorse” like Go / Python…

                      Otherwise, I might just settle for using the programming language for “everything in-between programs”, which will end up as “everything”. My point is to avoid a giant ocean of code that grows and grows without the patterns ever being identified.

                      It might be pretty much possible to do the same with full-fledged programming languages, but slipping from scripting into programming here and there is too easy.

              2. 1

                Personally, I’m keeping my fingers crossed hard for the Luna language to (eventually) become the language of distributed computing… and of scaling the ladder of abstraction in both directions…

              3. 4

                Agreed. But don’t expect the languages to come before the thinking and theories!

                1. 2

                  VDM with a code generator? ;) Also, ASMs have been used to model everything. One, Asmeta, is a programming language, too. So, it seems doable. I’m not going to say pragmatic, though.

                2. 5

                  While functional programming languages like Haskell are conducive to modularity and otherwise generally good software engineering practices, they are unfit as implementation languages for what I will call interactive systems. These are systems that are heavily IO bound and must provide some sort of guarantee with regard to response time after certain inputs. I would argue that the vast majority of software engineering is the engineering of interactive systems, be it operating systems, GUI applications, control systems, high-frequency trading, embedded applications, databases, or video games. Thus Haskell is unfit for these use cases. Haskell, on the other hand, is a fine implementation language for batch processing, i.e. non-interactive programs where completion-time requirements aren’t strict and there isn’t much IO.

                  This isn’t a dig at Haskell; it’s an intentional design decision. While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program. These are design trade-offs, not strict wins.

                  1. 5

                    While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program.

                    Haskell makes it necessary to explicitly mark code which must be performed in sequence, which, really, is a friendlier way of doing things than what C effectively mandates: In C, you have to second-guess the optimizer to ensure your sequential code stays sequential, and doesn’t get reordered or removed entirely in the name of optimization. When the IO monad is in play, the Haskell compiler knows a lot of its usual tricks are off-limits, and behaves itself. It’s been explicitly told as much.

                    Rust made ownership, previously a concept which got hand-waved away, explicit and language-level. Haskell does the same for “code which must not be optimized as aggressively”, which we really don’t have an accepted term for right now, even though we need one.

                    1. 8

                      The optimiser in a C implementation absolutely won’t change the order in which your statements execute unless you can’t observe the effect of such changes anyway. The definition of ‘observe’ is a little complex, but crucially ‘my program is faster’ isn’t an observation that counts. Your code will only be reordered or removed in the name of optimisation if such a change is unobservable. The only way you could observe an unobservable change is by doing things that have no defined behaviour. Undefined behaviour exists in Haskell and Rust too, in every language.

                      So I don’t really see what this has to do with the concept being discussed. Haskell really isn’t a good language for expressing imperative logic. You wouldn’t want to write a lot of imperative logic in Haskell. It’s very nice that you can do so expressively when you need to, but it’s not Haskell’s strength at all. And it has nothing to do with optimisation.

                      1. 3

                        What if you do it using a DSL in Haskell like Galois does with Ivory? Looks like Haskell made their job easier in some ways.

                        1. 1

                          Still part of Haskell and thus still uses Haskell’s awful syntax. Nobody wants to write a <- local (ival 0). or b' <- deref b; store a b' or n `times` \i -> do when they could write int a = 0;, a = *b; or for (int i = 0; i < n; i++).

                          1. 8

                            “Nobody wants to”

                            You’re projecting your wishes onto everybody else. There’s piles of Haskell code out there, many DSL’s, and some in production. Clearly, some people want to even if some or most of us don’t.

                            1. 1

                              There are not ‘piles of Haskell code out there’, at least not compared to any mainstream programming language. Don’t get confused by its popularity amongst people on lobsters, hackernews and proggit. It’s an experimental research language. It’s not a mainstream programming language. It has piles of code out there compared to Racket or Idris or Pony, but compared to Python or C or C++ or Ruby or Java or C# or god forbid Javascript? It might as well not exist at all.

                              1. 2

                                I’m not confused. Almost all languages fail, getting virtually no use past their authors. The next step up gets a few handfuls of code. Haskell has had piles of it in comparison, plus corporate backing and use at small scale. Then there’s larger-scale backing like Rust or Go. Then there’s companies with big market share throwing massive investments into things like .NET or Java. There are also FOSS languages that got lucky enough to get similarly high numbers.

                                So, yeah, piles of code is an understatement, given that most efforts didn’t go that far, and a literal pile of paper with the source printed out might not cover the Haskell that’s out there.

                                1. 1

                                  I don’t care how popular Haskell is compared to the vast majority of languages that are used only by their authors. That’s completely irrelevant to the discussion at hand.

                                  Haskell is not a good language for expressing imperative concepts. That’s plainly and obviously true. Defending it on the basis that it’s widely used ignores that firstly languages aren’t better simply because they’re widely used, secondly that languages can be widely used without necessarily being good at expressing imperative concepts, and thirdly that Haskell isn’t widely used.

                            2. 4

                              int a = 0 is okay, but not great. a = *b is complete gobbledygook that doesn’t look like anything unless you already know C, but at least it’s not needlessly verbose.

                              for (int i = 0; i < n; i++) is needlessly verbose and it looks like line noise to anyone who doesn’t already know C. It’s a very poor substitute for actual iteration support, whether it’s n.times |i| or for i in 0..n or something else to express your intent directly. It’s kind of ridiculous that C has special syntax for “increment variable by one and evaluate to the previous value”, but doesn’t have special syntax for “iterate from 0 to N”.

                              All of that is kind of a minor nit pick. The real point is that C’s syntax is not objectively good.

                              1. 2

                                How in the world are people unfamiliar with Ruby expected to intuit that n.times|i| means “replace i with iterative values up to n” and not “multiply n times i”?

                                1. 2

                                  A more explicit translation would be 0.upto(n) do |i|.

                                2. 0

                                  You do know C. I know C. Lots of people know C. C is well known, and its syntax is good for what it’s for. a = *b is not ‘gobbledygook’, it’s a terse way of expressing assignment and a terse way of expressing dereferencing. Both are very common in C, so they have short syntax. Incrementing a variable is common, so it has short syntax.

                                  That’s not ridiculous. What I am saying is that Haskell is monstrously verbose when you want to express simple imperative concepts that require a single character of syntax in a language actually designed around those concepts, so you should use C instead of Haskell’s weird, overly verbose and syntactically poor emulation of C.

                          2. 3

                            How does Haskell allow you to explicitly mark code that must be performed in sequence? Are you referring to seq? If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad. This sort of thing creates a burden when programming Haskell, at least for me. I don’t want to have to constantly wonder if I’ll need to port my elegant functional code into sequential IO Monad form in the future. C++/Rust address this sort of decision paralysis via “zero-cost abstractions,” which make them both more fit to be implementation languages, according to my line of reasoning above.

                            1. 5

                              Personally, I dislike discussions involving “the IO Monad”. The key point is that Haskell uses data flow for control flow (i.e. it’s lazy). We can sequence one thing after another by adding a data dependency (e.g. making bar depend on the result of foo will ensure that it runs afterwards).

                              Since Haskell is pure, compilers can understand and optimise expressions more thoroughly, which might remove ‘spurious’ data dependencies (and therefore sequencing). If we want to prevent that, we can use an abstract datatype, which is opaque to the compiler and hence can’t be altered by optimisations. There’s a built-in datatype called IO which works well for this (note: none of this depends at all on monads).

                              1. 3

                                The trouble is that oftentimes when you’re building time-sensitive software (which is almost always), it’s really inconvenient if the point at which a function is evaluated is not clear from the source code. Since values are lazy, it’s not uncommon to quickly build up an entire tree of lazy values, and then spend 1-2 seconds waiting for the evaluation to complete right before the value is printed out or displayed on the screen.

                                You could argue that it’s a matter of setting correct expectations, and you’d be right, but I think it defeats the spirit of the language to have to carefully annotate how values should be evaluated. Functional programming should be about functions and pure computation, and there is no implicit notion of time in function evaluation.

                                1. 4

                                  I agree that Haskell seems unsuitable for what is generally called “systems programming” (I’m currently debugging some Haskell code that’s been over-complicated in order to become streaming). It can support DSLs to generate suitable code, although I have no experience with that.

                                  I was just commenting on using phrases like “the IO Monad” w.r.t. evaluation order, etc. which is a common source of confusion and hand-waving for those new to Haskell, or reading about it in passing (since it seems like (a) there might be something special about IO and (b) that this might have something to do with Monads, neither of which are the case).

                                  1. 2

                                    building time-sensitive software (which is almost always)

                                    Much mission-critical software is running in GC’d languages whose non-determinism can kick in at any point. There are also companies using Haskell in production apps that can’t be slow. At least one was using it specifically due to its concurrency mechanisms. So, I don’t think your “almost always” argument holds. The slower, less-predictable languages have way too much deployment for that at this point.

                                    Even “time-sensitive” doesn’t mean what it seems to mean outside of real-time, since users and customers often tolerate occasional delays or downtime. The cases they don’t tolerate might also be fixed with some optimization of those modules. Letting things be a bit broken and fixing them later is the default in mainstream software. So, it’s not a surprise that it happens in lots of deployments that are supposedly time-critical as a necessity.

                                    In short, I don’t think the upper bounds you’ve established on usefulness match what most industry and FOSS projects are doing with software in general, or with timing-sensitive (but not real-time) software.

                                    1. 2

                                      Yeah, it’s a good point. There certainly are people building acceptably responsive apps with Haskell. It can be done (just like people are running Go deployments successfully). I was mostly speaking from personal experience on various Haskell projects across the gamut of applications. It depends on the cost/benefit, I suppose. For some, the state-of-the-art type system might be worth the extra cycles spent dealing with the occasional latency surprise.

                                      1. 2

                                        The finance people liked it because it was closer to their problem statements (math-heavy), the apps had fewer defects/surprises vs Java/.NET/C, and the concurrency was safer. That’s what I recall from a case study.

                                2. 1

                                  If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad

                                    Lmao what? You can define >>= for any data type, effectively allowing you to create a DSL in which you can very precisely specify how the elements of the sequence combine, with neat do notation.

                                  1. 2

                                    Yes, that’s exactly the problem to which I’m referring: Do notation considered harmful. Also, do notation isn’t enough to specify evaluation sequencing, since values are lazy. You must also carefully use seq.

                                    1. 1

                                      Ah well, I use a Haskell-like language that has strict-by-default evaluation and seems to be able to address a lot of those other concerns, at least by my cursory glance :)

                                      Either way, the benefits of do, in separating the logic and execution of procedures, look great to me. But I may be confusing them with the benefits of dependent typing; nevertheless, the former facilitates the latter when it comes to being able to express various constraints on a stateful system.

                              2. 3

                                For systems Haskell, you might like Habit from the people behind House, a Haskell OS. I just found some answers to the timing part that I’ll submit in the morning.

                                1. 1

                                  The House website seems incredibly out of date!

                                  1. 3

                                    Oh yeah. It’s mostly historical. They dropped the work for the next project, then dropped that for an even better one. We got some papers and demos out of it.

                                    1. 2

                                      But so damn cool.

                                      1. 2

                                        Exactly! Even more so, there’s a lot of discussion of how to balance the low-level access against Haskell’s high-level features. They did this using the H Layer they describe in some of their papers. It’s basically like unsafe in Rust: they do the lowest-level stuff in one way, wrap it where it can be called by higher-level Haskell, and then do what they can of the rest in Haskell. I figured the concepts in the H Layer might be reusable in other projects, especially safe, low-level ones. The concepts in Habit might be reusable in other Haskell or non-Haskell projects.

                                        It being old doesn’t change that. A good example is how linear logic came out of the 1980’s. That got used in ML first, I think, years later; then linear plus singleton types showed up in some safer C’s in the 2000’s; and an affine variant of one of them landed in Rust, which made a huge splash with its “no GC” claim. Now, linear and affine types are being adapted to many languages. The logic is twenty years old, with people talking about using it for language safety for 10-20 years. Then someone finds it useful in a modern project with major results.

                                        Lots of things work that way. It’s why I submit older, detailed works even if they have broken or no code.

                                  2. 1

                                    none of the examples of “interactive systems” you mention are normally io bound. sub-second response time guarantees, otoh, are only possible by giving up gc and using a real-time kernel. your conclusion that Haskell is unusable for “these use cases” seems entirely unfounded. of course, using Haskell for real-time programming is a bad idea, but no less bad than anything that’s, essentially, not C.

                                    1. 2

                                      I’ve had a few personal experiences writing large Haskell applications where it was more trouble than I thought it was worth. I regularly had to deal with memory leaks due to laziness, and 1-5 second stalls at IO points where large trees of lazy values were evaluated at the last minute. I said this in another thread: it can be done, it just requires a bit more effort and awareness. In any case, I think it violates the spirit of Haskell programming to have to carefully consider latency issues, GC times, or lazy value evaluation when crafting pure algorithms. Having to trade off abstraction for performance is wasteful IMO; I think Rust and C++ nail this with their “zero cost abstractions.”

                                      I would label most of those systems IO bound. My word processor is normally waiting on IO, so is my kernel, so is my web app, so is my database, so is my Raspberry Pi, etc.

                                      1. 1

                                        I guess I’m picking nits here, but using lots of working memory is not “memory leaks”, and a program that is idling due to having no work to perform is not “io bound”. Having “to carefully consider latency issues, GC times, [other tradeoffs]” is something you have to do in every language. I’d venture that the ability to do so on a subconscious level is what distinguishes a skilled developer from a noob. This also, I think, plays a large part in why it’s hard for innovative/weird languages to find adoption; they throw off your sense of how things should be done.

                                        1. 1

                                          Yes, you have to consider those things in all languages, which is precisely my point. Haskell seeks to abstract away those details, but if you want to use Haskell in any sort of “time-sensitive” way, you have to litter your pure, lazy functional code with annotations. That defeats the purpose of the language being pure and lazy.

                                          And yes, waiting on user input does make your program IO bound. If your program is spending more time waiting on IO and less time churning the CPU, it is IO bound. IO bound doesn’t simply mean churning the disk.

                                        2. 1

                                        I brought that up before as a counterpoint to using Haskell. A Haskeller gave me this link, which is a setting for making it strict by default. It might have helped you out. As a non-Haskeller, I can’t say if it makes the language harder to use or negates its benefits. It’s worth looking into, though, since it was specifically designed to address things like bang patterns that were cluttering code.

                                    2. 3

                                      Thank you for highlighting the origins of this concept.

                                      I want to move toward learning systems programming, but there isn’t a one-stop definition of what it is, so this helps.

                                      1. 1

                                        I think “systems programming” is a misnomer, at least as seen from today. I agree with the author that it should be the name for programming with systems. I guess we are stuck with the bad name for low-level programming, though, so I just try to avoid “systems programming” as a term: I say “low-level programming” when I mean low-level programming, and a more specific term when I mean the other thing (like distributed computing).

                                        When I first heard about systems engineering, I was a bit confused, until it occurred to me that at least systems engineering is named more appropriately.