1. 9

  2. 5

    The same kind of flexibility that leaves room for the unexpected behaviors mentioned in the article is also what makes these systems fit for creative problem solving. Developers nowadays take things like pipes for granted, but in 1968 their architectural description was a true paradigm shift for an industry that had been ailing since its early days. More than 50 years later, we still have two main approaches to software development: either produce it fast and full of failures, giving developers time to discover and fix those failures and hence deliver somewhat reliable products; or simply produce it full of failures. No amount of sanding down the rough edges of complexity could make things better.

    1. 3

      “Developers nowadays take things like pipes for granted but, in 1968, their architectural description served as a true paradigm shift to an industry that had been ailing since its early days.”

      We had function calls that could do the same thing, except there were languages emerging with type checks. Hoare and Dijkstra were doing pre/post-conditions and invariants. You get composition with stronger guarantees. Smalltalk and Lisp showed a better pace of development with easier maintenance, too.
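      Pre/post-conditions of the Hoare/Dijkstra kind are easy to sketch as plain executable assertions; a minimal illustration in Python (the function and its contracts are hypothetical, not from the thread):

```python
# Hoare-style pre/post-conditions written as executable assertions.
# sqrt_floor is a made-up example chosen only to show the pattern.

def sqrt_floor(n: int) -> int:
    """Integer square root (floor) via binary search."""
    assert n >= 0, "precondition: n must be non-negative"
    lo, hi = 0, n + 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    # postcondition: lo is exactly floor(sqrt(n))
    assert lo * lo <= n < (lo + 1) ** 2
    return lo

print(sqrt_floor(10))  # -> 3
```

      The point about composition with stronger guarantees: a caller that satisfies the precondition can rely on the postcondition, so chains of such functions compose without each link re-checking the others.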

      “we still have two main approaches to software development”

      Fagan Inspections (pdf), Cleanroom, and OpenVMS argued the opposite way back in the 1980’s, with Hansen’s RC 4000 showing it in the 1960’s. You can cost-effectively inject reliability into software processes in many ways. It’s even easier now with safer languages plus automated tooling for test generation, static analysis, etc. Even moving quickly, the majority of errors at this point should be in business/domain logic or in misunderstanding of the environment. Far as environments, OS’s such as OpenVMS and OpenBSD also show you can eliminate a lot of that by using platforms with better design, documentation, and resilience mechanisms.

      Example: FoundationDB was a startup using ultra-reliable engineering in an intensely-competitive space. They got acquired by Apple.

      So, the newest approach, if reliability-focused, is to prototype fast using modern tooling; document your intended behavior as contracts plus tests (if it’s hard to specify); use automated tooling to find the problems in that; make sure the components are composable; and integrate them in safe ways. You can do this very quickly. For 3rd-party code, use mature components, especially libraries and infrastructure, to reduce the number of time-wasting surprises. You get a small slow-down up-front and a possible speed-up from reduced debugging, and the speed-up may or may not cancel out the slow-down. The result should be reliable software coming out at a competitive pace, though.
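      A toy illustration of the contracts-plus-generated-tests step, using only the Python standard library (the function here is hypothetical, picked just to show the workflow):

```python
import random

def merge_sorted(a, b):
    """Merge two sorted lists; the asserts document intended behavior."""
    assert a == sorted(a) and b == sorted(b), "precondition: inputs sorted"
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out += a[i:] + b[j:]
    assert out == sorted(a + b), "postcondition: output is the sorted merge"
    return out

# Crude automated test generation: random inputs exercise the contracts,
# so violations surface during development rather than in production.
rng = random.Random(0)
for _ in range(200):
    a = sorted(rng.sample(range(50), rng.randint(0, 8)))
    b = sorted(rng.sample(range(50), rng.randint(0, 8)))
    merge_sorted(a, b)
```

      Real tooling (property-based testing, static analyzers) does this far more thoroughly, but even this much catches whole classes of bugs up-front.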

      1. 2

        Hi, Nick, I am pretty sure you have your historical facts in much better shape than I have mine, but I will give it a go and try to discuss.

        We had function calls that could do the same thing, except there were languages emerging with type checks. Hoare and Dijkstra were doing pre/post-conditions and invariants. (…) Smalltalk and Lisp showed a better pace of development with easier maintenance, too.

        Programming languages with support for function calls predate pipes by approximately 15 years: the first implementation of pipes dates from 1973, while FORTRAN II, for example, had support for subroutines in 1958; and I am not sure, but ALGOL introduced procedures either with its first iteration, from 1958, or with ALGOL 60. So even if their use was not widespread, they at least had implementations by 1968, the year of the NATO Software Engineering Conference at which Doug McIlroy’s ideas were presented. Subroutines were an important paradigm shift, but if they offered any substantial help to developers back in the day, it was in avoiding the aggravation of the software engineering crisis, not in preventing it.

        Needless to say, LISP predates any of those technologies, and Smalltalk got popular by the mid 80’s. And, IMHO, Unix systems were definitely a big influence on Smalltalk’s architectural decisions: at least one of them, LOCUS, is mentioned by Alan Kay as an inspiration for Smalltalk. IIRC, it was by watching that system in action that he thought that even natural numbers could be represented as processes and communicate via IPC. But the overhead of such a solution would have been too big back in the day, and he opted to represent message passing via ordinary procedure calls.

        Far as environments, OS’s such as OpenVMS and OpenBSD also show you can eliminate a lot of that by using platforms with better design, documentation, and resilience mechanisms.

        I chose to reply to this fragment because it was the easiest one to address with a platitude. :) OpenBSD also shows that you can keep the bulk of a Unix system and still provide above-par security.

    2. 2

      There’s something off here. Processes sending one another bytes is quite simple, but it has built up a lot of cruft: we also expect various kinds of delimiting of those bytes, complex terminal behavior, quoting, etc.

      The complexity here exists because the simple interface didn’t meet users’ needs, yet the promises of the simple interface are still being met.

      User needs are complex. Achieving simple solutions to them is incredibly hard. Simple technologies are great and lead to simple compositions, but… there are still a lot of simple legos that must be stacked to reach complex user-desired behaviors.
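      The bytes-plus-conventions point can be seen in miniature from Python: the pipe itself carries undifferentiated bytes, and the newline-delimited-records structure is purely a convention both ends agree on. This sketch assumes a POSIX `sort` is on the PATH:

```python
# An OS pipe carries plain bytes; record structure is convention.
# Here both ends merely agree that '\n' delimits records.
import subprocess

p = subprocess.run(
    ["sort"],                          # assumes POSIX sort is installed
    input=b"pear\napple\nbanana\n",    # newline framing is our convention
    capture_output=True,
    check=True,
)
print(p.stdout.decode().splitlines())  # -> ['apple', 'banana', 'pear']
```

      Everything beyond the raw byte stream — framing, quoting, terminal escapes — is one of those extra legos stacked on top of the simple interface.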
