      As far as matching features to objectives, the ones I know were deliberately designed are Ada, Wirth’s languages (especially Oberon’s language and system), and Noether. Ada was done by systematically analyzing what programmers needed and how they screwed up; the language’s features were there to solve the problems that analysis turned up. The Wirth languages aimed at the minimal set of language features that could express a program and compile fast. Cardelli et al. did a nice balancing job in Modula-3: a language that’s easy to analyze, easier to compile than C++, and that handles both small and large programs. Noether addresses design constraints more explicitly than anything else I’ve seen, by listing many of them and then balancing among them.

      http://www.adacore.com/knowledge/technical-papers/safe-and-secure-software-an-invitation-to-ada-2012/

      https://cr.yp.to/bib/1995/wirth.pdf

      https://en.wikipedia.org/wiki/Modula-3

      https://tahoe-lafs.org/~davidsarah/noether-friam4.pdf

      I don’t know about Smalltalk. It had a nice design from what I’ve seen, but I don’t know about the process that went into it; it could’ve been cleverly hacked together for all I know. Scheme and Standard ML seem to lean toward clean designs that balance underlying principles and foundations against practicality, with some arbitrary stuff thrown in. There are also variants of imperative and functional languages designed specifically for easy verification in a theorem prover; the designers have to subset and/or extend the base languages to achieve that.

        Smalltalk was (and is!) very much a designed language, with strong vision and principles. Alan Kay has plenty to say about this, but the best source is Dan Ingalls: http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html

        It also dates from an era before the language/system divide, so is unfortunately misunderstood by most contemporary “language” people. Richard Gabriel has a good essay about this: http://www.dreamsongs.com/Files/Incommensurability.pdf

          I love this part:

          A way to find out if a language is working well is to see if programs look like they are doing what they are doing. If they are sprinkled with statements that relate to the management of storage, then their internal model is not well matched to that of humans.

          Couldn’t agree more.
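
          A rough illustration of Ingalls’ point (my own sketch, not from the essay, using a hypothetical concat helper): in a language with manual storage management, even a trivial string concatenation is dominated by allocation bookkeeping rather than by what the program is actually doing.

          ```c
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Join two strings. Most of the body is storage management,
           * not the idea "join a and b". */
          char *concat(const char *a, const char *b) {
              size_t la = strlen(a), lb = strlen(b);
              char *out = malloc(la + lb + 1);   /* storage */
              if (out == NULL) return NULL;      /* storage */
              memcpy(out, a, la);
              memcpy(out + la, b, lb + 1);       /* +1 also copies b's terminator */
              return out;                        /* caller must remember to free(): storage again */
          }

          int main(void) {
              char *s = concat("hello, ", "world");
              if (s != NULL) {
                  puts(s);
                  free(s);                       /* storage */
              }
              return 0;
          }
          ```

          In Smalltalk the equivalent is roughly 'hello, ' , 'world'; the storage concerns disappear from the program text, which is exactly the criterion in the quote.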

          More generally: Programming languages are supposed to translate human concepts to machine concepts, in the most efficient way possible and without hand-holding.

          Of course, all current programming languages completely fail in this regard, but I believe we should still keep our eyes on it as the ultimate goal.

          It’s very hard for programming languages to do this at present because we don’t have a clean way to express human concepts to machines. Current language syntax and grammar are a very poor channel for communicating these things, since we’re using machine-level formalisms, not human formalisms, as the foundation for design.

          I believe that if we do more research into how to express and channel human concepts, future programming languages will have a much better chance of succeeding in this endeavor.

          It’s also very interesting how Alan Kay and Dan Ingalls thought back then. As @minimax pointed out, the essays were written “before the language/system divide”.

          People really did think about human-computer interaction in much higher-level ways back then. Somewhere along the line we forgot about philosophy and the human component. It would be nice to get back to that at some point.

          We’ve been making great strides with NLP over the decades, but NLP still doesn’t help us with “the bit in the middle”.

          Nothing against Rust, but, for example, I really don’t give a damn about the borrow checker, nor should anyone. We shouldn’t have to.