1. 19

  2. 6

    On the gripping hand, there seems to be a formal split between “level 0” implementations and “level 1” implementations, and the only difference between them is something about how arrays are passed as function args.

    A lot of people have trouble with this. I’ve seen some really bad answers as to what Level 1 Conformant Arrays are (not that TFA is bad, just on places like SO).

    Pascal has a nominal type system, like Go, so

    Foo = ARRAY [1..10] OF INTEGER
    Bar = ARRAY [1..10] OF INTEGER

    are two distinct types even though they are structurally identical. (A simple alias like Foo = INTEGER just renames the same type.) Moreover, the size of an array is part of its type, so

    String256 = ARRAY [1..256] OF CHAR

    specifies, well, an array of 256 characters.

    So when you define a procedure in Pascal, the formal parameters have to have a type. Array types must include their sizes.

    In other words, it’s not possible in Standard Pascal (Level 0) to have a procedure that operates on arrays of arbitrary size.
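    Concretely, a Level 0 procedure is pinned to one fixed array type (type and procedure names here are mine, just a sketch):

    ```pascal
    TYPE
      Ten    = ARRAY [1..10] OF INTEGER;
      Twenty = ARRAY [1..20] OF INTEGER;

    { Accepts only values of type Ten; passing a Twenty is a type
      error, and no array type covers both sizes. }
    PROCEDURE Clear(VAR a: Ten);
    VAR i: INTEGER;
    BEGIN
      FOR i := 1 TO 10 DO a[i] := 0
    END;
    ```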

    Level 1 introduces Conformant Arrays. That lets you pass an array’s bounds as additional parameters to a procedure, and relaxes the typing rules so that any type conformant to the formal array parameter can be used.

    For example:
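    Something like this, say (the procedure name and body are assumed; the schema syntax is from the Level 1 standard):

    ```pascal
    PROCEDURE Fill(VAR a: ARRAY [low..high: INTEGER] OF INTEGER);
    VAR i: INTEGER;
    BEGIN
      FOR i := low TO high DO a[i] := 0
    END;
    ```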


    defines a procedure that takes an array of integers with the additional parameters of low and high, which indicate the bounds of the array.

    (To date this remains my sole contribution to Stack Overflow…)

    1. 5

      The original Pascal implemented by Wirth was level 0. The absence of conformant arrays was a sufficiently big deal that it was fixed in the standard, but old Pascal implementations without the extension were grandfathered in via the level 0 / level 1 distinction.

      1. 3

        Oh snap. Thank you both! That makes perfect sense.

    2. 6

      Wow, the standard is dry as heck

      Peak dryness was achieved in the Algol 68 standard, which used a two-level van Wijngaarden grammar to formalize as much of the syntax and semantics as possible. (BNF grammars were invented for Algol 60, and they were such a hit that people wanted to take BNF to the next level.) But the Algol 68 Report was so dry that, rumour has it, the only person who could read and understand it was Adriaan van Wijngaarden. Wirth resigned from the Algol 68 committee in protest.

      All of Wirth’s languages are reactions to and attempts to improve on Algol. His first was Algol W, which was a refinement of Algol 60, and was originally presented to the Algol committee as the successor to Algol 60 (we got Algol 68 instead, which took ideas from Algol W).

      The next I remember is Euler, which was a dynamically typed dialect of Algol: it was simpler, more regular, more powerful. I really liked it (in the context of its time, I mean, since it hasn’t aged well). It had features missing from Wirth’s later languages, like lambda expressions and standalone array literals.

      1. 2

        Added to that: which Rust spec are you (@icefox) referring to in the same sentence? The Ferrocene Spec?

        1. 3

          I was just referring to The Reference. I know, I know, it’s not a spec, but it aims at the same space.

      2. 5

        This is really fascinating! I hope you’ll investigate Oberon as well.

        For what it’s worth, the book Semantics with Applications: An Appetizer is a short introduction to the kind of programming language semantics used in the Standard ML standard and (some) newer standards. You’ve read more of these standards than I have, so maybe it’s old hat for you.

        I think the old standards pretty much formalized the syntax with a grammar and then used nothing but piles of semi-formal English to explain how everything would work, which leaves a lot of room for differences between implementations. I think it’s pretty useful to have a brief, unambiguous mathematical description of the semantics using one of the formalisms described in that book. These descriptions tend to be pretty opaque, but at least they are brief and unambiguous.

        I do find the “mathish” I/O section to be rather obtuse, as you described.