1. 30

  2. 8

    Jonathan Edwards built a prototype visual programming language around decision tables, called “Subtext 2”. Here’s the OOPSLA ‘07 paper. Interesting work, but he abandoned it. And its successors… looks like he’s up to “Subtext 10” now.

    1. 1

      “built” seems like a strong statement, looks like an academic project? So it’s all papers and demos and blog posts, but nothing anyone could actually try out…

      1. 3

        No, there were (are) definitely some functioning prototypes of all those systems, but nothing he wanted to release or support. Research is like that. You probably could get some source code if you sent him a nice email… but if you’re genuinely interested in the ideas, you should probably just read the papers, understand them as best you can, and implement them yourself. Research!

        1. 2

          From the demo videos, “built a prototype” seems accurate enough, though it’s not “released”. Plenty of people and companies don’t release every prototype they build.

          1. 2

            Ah, I’ll take your parsing. I read it as ((built) (a prototype language)), but ((built a prototype) [of a] language) makes much more sense here.

      2. 6

        I’m surprised this doesn’t talk about Karnaugh maps as a technique for simplifying truth tables.

        1. 2

          I remember being taught K-maps in an EE class. It seemed cute but anachronistic and vaguely unprofessional, sort of like wasting a week out of a college accounting course to learn how to use an abacus. They don’t scale beyond 3 or 4 variables, and there are much better ways to go about simplifying formulae, especially if you have access to a computer.

          1. 2

          > and there are much better ways to go about simplifying formulae, especially if you have access to a computer

          Would you happen to know any? I recently had to simplify a lot of boolean functions (32 inputs, 75 outputs, many of which are “don’t care”s for certain inputs). All I could find in this area was espresso. Should I just use an SMT solver? I ended up doing the minimization by hand, but that also makes it difficult to modify later on.

            1. 2

              There are lots of methods out there. Finding usable implementations of each and evaluating them against your particular need is, um, left as an exercise for the reader. I do understand you’re asking for a specific tool, but yes I’d just use an SMT solver. Very handy things to be familiar with!
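
            For example, here’s a minimal sketch using the sbv Haskell bindings to Z3 (the formulas here are made up for illustration; in your case each of the 75 outputs would get its own check). It doesn’t do the minimization for you, but it proves a hand-simplified formula equivalent to the original, which helps with the “difficult to modify later” problem:

            ```haskell
            import Data.SBV

            -- hypothetical original and hand-simplified versions of one output bit
            original, simplified :: SBool -> SBool -> SBool -> SBool
            original   a b c = (a .&& b) .|| (a .&& sNot b .&& c)
            simplified a b c = a .&& (b .|| c)

            -- ask Z3 to prove the two agree on all inputs;
            -- a counterexample means the hand-simplification is wrong
            main :: IO ()
            main = print =<< prove (\a b c -> original a b c .== simplified a b c)
            ```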

        2. 4

        It’s kind of astonishing that there are apparently no well-known tools to check the soundness/completeness of tables (assuming Hillel would’ve mentioned one if it existed). I wonder if this is a case where it feels too trivial for anyone to implement?
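
        For what it’s worth, the brute-force version really is tiny. Here’s a sketch in Haskell (the cell encoding and names are mine, not any existing tool): enumerate every input combination and flag the ones matched by no row (incomplete) or by more than one row (overlapping), which also covers “any” (-) cells.

        ```haskell
        import Control.Monad (replicateM)

        -- a condition cell: must be True, must be False, or "any" (-)
        data Cell = T | F | Any

        matches :: Cell -> Bool -> Bool
        matches T   b = b
        matches F   b = not b
        matches Any _ = True

        rowMatches :: [Cell] -> [Bool] -> Bool
        rowMatches row input = and (zipWith matches row input)

        -- completeness: every input hits at least one row;
        -- soundness: no input hits more than one row
        check :: [[Cell]] -> ([[Bool]], [[Bool]])
        check table = (missed, overlapping)
          where
            inputs      = replicateM (length (head table)) [False, True]
            hits i      = length (filter (`rowMatches` i) table)
            missed      = [i | i <- inputs, hits i == 0]
            overlapping = [i | i <- inputs, hits i > 1]

        main :: IO ()
        main = print (check [[T, Any], [F, T]])  -- misses [False, False]
        ```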

          1. 2

          Hm, but what format would it read?

          Personally, I find the value of specs separate from code to be limited, because of the sheer size of many specs and the increased likelihood of divergence.

            I would rather have the spec be executable, or generate executable code, as Oil does in many places. The “source code” section at the bottom of every release links to a few examples of this: https://www.oilshell.org/release/0.8.0/

            The issue with Any (-) and compressing the table is one reason it’s nontrivial. I think it makes sense as a “bespoke” unit test for your particular program, rather than as a general tool.

          One little-known fact is that re2c performs exhaustiveness checks, much like, say, an ML or Haskell compiler. If you write a set of regular expressions that doesn’t cover the input space, or there is an unreachable rule, it gives you a compile-time error.

          One thing I probably haven’t explained very well is that the reason I use regular languages (“regexes”) everywhere is precisely so that every transition is defined! That is not the case with bash, and you can shake bugs out of it quite easily with this knowledge (it has dozens of different lexical syntaxes, e.g. for PS1, echo -e, and more).

            There seems to be a widespread misunderstanding about using regular expressions for lexing and parsing. They are a great tool, due to this property and several others. Related posts:

            http://www.oilshell.org/blog/2020/07/eggex-theory.html (this is mostly about backtracking; again, I should have emphasized the correctness benefits of regular languages)

            http://www.oilshell.org/blog/2020/07/ideas-questions.html

            1. 1

            If I were going to do it, I’d create a browser-based tool for interactive editing, and possibly design my own format for serialization.

              Remember this is for checking the soundness/completeness of a decision table you’re developing as part of a spec or design doc, not just for testing a program. I think the latter is a natural extension, but I’m not talking about that right now.

          2. 3

            What do you think about visual representations of the same data, in forms like decision trees or flowcharts?

          I have teaching experience with decision tables in an admittedly pretty specific domain: decision tables are one of the canonical ways of specifying a finite state machine (in that context they’re called state-transition tables). Typically there is a column for the current state, a column for the input, and a column for the successor state. Each row of the table represents one possible transition.

          The same information can be visualized as a state diagram, where each row of the decision table is instead represented by a directed edge in a graph. Students seem to almost invariably find the diagrammatic representation more intuitive to both read and write, to the point where they regard the table version as an arcane implementation detail at best (e.g. you can shove the table into a 2d array and write a table-driven FSM simulator, as in the sketch below), but not as a representation suited to humans. This might not generalize beyond FSMs, though, where I could imagine the diagrams getting hairier.
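
          For concreteness, here’s that table-driven simulator in Haskell, using an invented turnstile machine as the example (a list of rows standing in for the 2d array):

          ```haskell
          data State = Locked | Unlocked deriving (Eq, Show)
          data Input = Coin | Push       deriving (Eq, Show)

          -- the state-transition table: one row per transition
          table :: [(State, Input, State)]
          table =
            [ (Locked,   Coin, Unlocked)
            , (Locked,   Push, Locked)
            , (Unlocked, Coin, Unlocked)
            , (Unlocked, Push, Locked)
            ]

          -- look up the successor state for the current state and input
          step :: State -> Input -> State
          step s i = head [s' | (s0, i0, s') <- table, s0 == s, i0 == i]

          main :: IO ()
          main = print (foldl step Locked [Coin, Push, Push])  -- Locked
          ```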

            1. 2

              In my personal opinion, diagrams tend to scale much better for state machines than tables do. I like to use DTs for single point-in-time decisions, and the further you move away from that, the worse they become.

            2. 1

              Nice article. I never really think about “decision tables”, but I do think about completeness and soundness, especially in the context of state machines and regular languages.

              But this made me think of shell and language design. I’ve been meaning to write a post about the “else whatever anti-pattern”.

              I think a better name for it would be the “else-idgaf” pattern. In other words, you handle the 3 cases you happen to think about, and everything else just silently passes through, even though it’s erroneous.

              It’s a bit hard to explain, but bash is full of this incomplete case analysis. It’s the reason for many of the corner case differences here:

              https://www.oilshell.org/release/0.8.0/doc/known-differences.html

              Example: here’s a table with only 4 entries, but if you examine what bash actually does, you cannot conceive of it being intentional:

              https://www.oilshell.org/release/0.8.0/doc/known-differences.html#indexed-and-associative-arrays-are-distinct

              In other words, I think of it as a combination of inputs that nobody gave much thought to…

              1. 1

                About the “else” pattern: my reading was that even if the value is “any”, it’s still part of the set indicated by the column it resides in (i.e., in the example “x % 2 == 0”, there is a precondition: x must support the modulo operation). As in the pattern matching analogy, I would expect the “_” case to fail earlier if the type or some condition on that branch is a mismatch. Maybe I’m misreading your comment :)

                1. 1

                  Yeah I probably didn’t explain it very well, but I guess I’m saying you can’t tell from the code which of these is true:

                  1. the else is explicitly part of the decision table
                  2. the else is just the fallthrough for whatever the author didn’t think about.

                  I admit it’s a judgement call, and maybe impossible to prove, but I would say many programs, including language implementations like bash, are filled with #2. They just made the “happy path” work and didn’t care about other inputs. Hence the name “else-idgaf anti-pattern” :)

                  For the paths to be in #2 they have to be undocumented, and that is extremely common.

                  The example of arrays is a good one. Arrays can be indexed or associative. However, there is redundancy in the syntax: -a vs. -A, as well as (foo) vs. ([foo]=bar). If you try out the “wrong” combinations, I think you may agree they fall into “else-idgaf” territory. There is no way to explain the behavior other than through bash implementation details, which change from time to time. bash is full of these things.
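
                  A tiny hypothetical sketch of the difference (not bash’s actual array logic, just the shape of #1 vs. #2 in Haskell):

                  ```haskell
                  {-# OPTIONS_GHC -Wincomplete-patterns #-}

                  data Kind = Indexed | Assoc  -- stand-ins, not bash's real cases

                  -- #2, else-idgaf: the wildcard silently swallows anything
                  -- the author didn't think about, including cases added later
                  describe :: Kind -> String
                  describe Indexed = "indexed"
                  describe _       = "whatever"

                  -- #1, an explicit decision table: every case is written out,
                  -- and GHC warns at compile time if one is ever missing
                  describe' :: Kind -> String
                  describe' Indexed = "indexed"
                  describe' Assoc   = "associative"
                  ```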

              2. 1

                This is a good method and a great formal description of it. I like to do this informally in Haskell when there are many different cases to consider: just match over a tuple of all the inputs and write down all the different constructor combinations. You know you haven’t missed anything, and you usually spot simplifications pretty fast.
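
                Something like this (a made-up two-input example; with -Wincomplete-patterns, GHC flags any combination you leave out):

                ```haskell
                {-# OPTIONS_GHC -Wincomplete-patterns #-}

                -- one equation per row of the "decision table";
                -- delete any line and GHC warns about the missing combination
                decide :: (Bool, Bool) -> String
                decide (True,  True ) = "both"
                decide (True,  False) = "first only"
                decide (False, True ) = "second only"
                decide (False, False) = "neither"
                ```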