
    What is it about inference rules, regular expressions, and BNF grammars that allows us to understand them perfectly well, even with the syntactic variation? If simple syntactic differences can cause confusion with overlines and substitution, shouldn’t their expression be re-thought entirely? For instance, would the example that mixes overlines and substitution not be clearer if two for-all quantifiers (∀) were used?


      Presumably, if you used a quantifier, you’d still have the problem of delimiting the extent of the quantified variables. As discussed in the talk, you would need to signal grouping with an overline, parens, binding dots, or similar.
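      Concretely (my own rendering, not from the talk; Γ, t_i, and τ_i are placeholder symbols), the overlined premise and its quantified counterpart both need something to mark where the repeated part ends:

      ```latex
      % Overline notation: the bar itself delimits the repeated premise.
      \overline{\Gamma \vdash t_i : \tau_i}
      % Quantifier notation: parentheses (or a binding dot) must
      % delimit the scope of i instead.
      \forall i \in \{1, \dots, n\}.\ (\Gamma \vdash t_i : \tau_i)
      ```

      Either way, some grouping device is doing the delimiting work; the quantifier only relocates it.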


        What is it about inference rules, regular expressions, and BNF grammars that allows us to understand them perfectly well, even with the syntactic variation?

        Regular expressions and BNF grammars are easy to understand because they’re just powerful enough for their intended purpose - specifying the syntax of programming languages, which is an easy task provided you don’t purposefully make it hard.

        On the other hand, I’m not sure we understand inference rules that well.

        For instance, would the example that mixes overlines and substitution not be clearer if two for-all quantifiers (∀) were used?

        Not really. The trouble with a substitution like t[xs -> ts] is that it doesn’t really mean what it literally says. You don’t actually have two separate lists, xs of variables and ts of terms to substitute them with. You have a single list ss of pairs of the form (x, t), where x is a variable and t is the term to substitute it with. (Alternatively, you may insist on using two separate lists, but then you need to maintain the invariant that the lists have the same length. IMO, this makes life unnecessarily hard.)
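A minimal sketch of the single-list-of-pairs reading (my own illustration; the Var/App term representation is hypothetical, and binders and capture-avoidance are ignored). The pairs are stored as a dict from variable names to terms:

```python
# Hypothetical term representation: variables and applications only
# (no binders, so no capture-avoidance is needed in this sketch).
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fn: "Var | App"
    arg: "Var | App"

def subst(ss, t):
    """Apply the parallel substitution ss (a dict of variable-name ->
    term pairs) to t, leaving unmapped variables unchanged."""
    if isinstance(t, Var):
        return ss.get(t.name, t)
    return App(subst(ss, t.fn), subst(ss, t.arg))

print(subst({"x": Var("a"), "y": Var("b")},
            App(Var("x"), App(Var("y"), Var("z")))))
# → App(fn=Var(name='a'), arg=App(fn=Var(name='b'), arg=Var(name='z')))
```

Because the variable and its replacement travel together in one pair, the "two lists of equal length" invariant cannot be violated in the first place.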