
For a few years now I’ve been sort of hoping that linear logic, other substructural logics, category theory, denotational semantics, and functional reactive programming would make their way into my brain via osmosis. So far this has failed, and this paper (which unfortunately seems to require a good understanding of substructural logics including linear logic, of category theory, and of FRP) shows why my state of ignorance makes me unhappy: it makes the paper really hard for me to grok.

(Just to be super clear, I am not blaming the authors for this.)

The paper, if I understand correctly, purports to support fairly general higher-order programming of reactive systems in bounded space. Another name for “reactive systems in bounded space” is “hardware” (I’d say “circuits” but in theoretical CS apparently a “circuit” is a purely combinational circuit), or “software that doesn’t crash”. That seems like a pretty huge breakthrough, since much of the reason our current computer systems suck so terribly is that their memory usage is unbounded and also large, so at any time they are prone to transitioning unexpectedly into Snail In Cold Molasses Mode and then crashing because they’re suddenly out of RAM.

We have existing methods for building bounded-space systems (COBOL, VHDL, Verilog, MISRA C) but you wouldn’t want to write a C compiler in them, let alone a web browser, because their support for eliminating duplication (through abstraction) is very limited. From what I have been able to make out so far of Krishnaswami et al.’s paper here, their system supports pretty general higher-order programming, and also puts tight bounds on memory allocation.

I’ll comment further if I can make heads or tails of sentences like “The cokleisli function lifts a function from streams S(A) to a type B to a function from streams of A to streams of B, giving a comonad structure [24] to the S(⋅) functor.” (The one thing I am sure of is that the algorithm they give as the Sieve of Eratosthenes is Not The True Sieve.) While I’d appreciate help, I kind of think the problem is not so much the absence of explanations as an abundance of them, which I haven’t yet taken the time to understand. Am I alone in this?
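In the meantime, here is my best guess at what that cokleisli sentence means, as a sketch in plain Haskell. To be clear about what’s mine and what’s theirs: the paper’s S(⋅) is a type constructor in a clocked substructural calculus, which this sketch ignores completely, and Stream, headS, tailsS, cokleisli, and the sum2 example are all names I made up for illustration.

    data Stream a = Cons a (Stream a)

    -- "extract": the value of the stream now
    headS :: Stream a -> a
    headS (Cons x _) = x

    -- "duplicate": the stream of all suffixes of the stream
    tailsS :: Stream a -> Stream (Stream a)
    tailsS s@(Cons _ rest) = Cons s (tailsS rest)

    mapS :: (a -> b) -> Stream a -> Stream b
    mapS f (Cons x xs) = Cons (f x) (mapS f xs)

    -- the cokleisli lift: a function from streams of A to a B becomes a
    -- function from streams of A to streams of B, by applying it to every
    -- suffix; headS and tailsS are exactly the comonad structure on S(.)
    cokleisli :: (Stream a -> b) -> (Stream a -> Stream b)
    cokleisli f = mapS f . tailsS

    -- a tiny example: each output is the sum of the input now and the
    -- input one step later
    sum2 :: Stream Int -> Int
    sum2 (Cons x (Cons y _)) = x + y

    nats :: Stream Int
    nats = go 0 where go n = Cons n (go (n + 1))

    takeS :: Int -> Stream a -> [a]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs

    -- takeS 5 (cokleisli sum2 nats) == [1,3,5,7,9]

If that’s right, the comonad structure just says “a stream processor is a function of the present and future of its input”, which is reassuringly concrete; presumably the paper’s clocked types then restrict what such a function can look at, so that everything stays causal and bounded-space.

As for the sieve aside: the stream algorithm that usually gets called the Sieve of Eratosthenes in lazy functional languages is the folklore one-liner below, which is really unbounded trial division; the true sieve crosses off the multiples of each prime by counting and never divides. (Melissa O’Neill’s paper “The Genuine Sieve of Eratosthenes” is the standard rant on this, and presumably where the capital letters come from.) I’m assuming the paper’s version is the stream-typed cousin of this one-liner:

    -- the folklore "sieve": each prime p spawns a filter that divides every
    -- later candidate by p, so this is trial division, not the true sieve
    folkloreSieve :: [Int] -> [Int]
    folkloreSieve []       = []
    folkloreSieve (p : xs) = p : folkloreSieve [x | x <- xs, x `mod` p /= 0]

    primes :: [Int]
    primes = folkloreSieve [2 ..]
    -- take 10 primes == [2,3,5,7,11,13,17,19,23,29]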

Also I think that if I somehow manage to understand this paper, it will take me a long way toward finally understanding something about linear logic! Something more than the small, completely non-rigorous amount I’ve gleaned from Henry Baker papers. And parts of it already almost parse: “We are nevertheless able to prove a normalization result for the within-step operational semantics since the fixed point substitutes for a variable at a future time.” actually mostly makes sense to me. Normalization results are things you prove about things resembling the lambda calculus to show that your specified evaluation rules always terminate in a “normalized” result instead of reducing forever (the simply-typed lambda calculus normalizes; the untyped one, with terms like (λx. x x)(λx. x x), does not), and the particular evaluation rules they’re talking about here are the “within-step operational semantics” they just specified.

One extremely tantalizing note toward the end:

Supporting other dynamic data structures (not necessarily streams) suggests looking at the work of Acar et al. [1], who have studied adding user-controlled incrementalization to self-adjusting computation, which shares many implementation issues with FRP. Conversely, it will be worth investigating whether our denotational semantics can be adapted to provide a useful categorical cost semantics for self-adjusting computation [16].
