1. 12

Bicicleta needs incremental recomputation, like a spreadsheet, to be reasonably efficient. And I just discovered that ten years ago this guy Umut Acar (and Guy Blelloch, et al.) found a super rigorous way to derive statically type-safe incremental versions of non-incremental (i.e. batch) algorithms, and wrote his dissertation on it. It seems like this work ought to be pretty broadly applicable right now, especially to incremental updates of DHTML user interfaces, but I haven’t heard of anybody doing that, which seems totally insane to me! Is something wrong with the work, or is it just that nobody has heard of it?
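To make the spreadsheet analogy concrete, here is a rough Python sketch of the kind of incremental recomputation I mean. It is only my own toy illustration, not Acar’s technique (the `Cell` class and its methods are names I made up): each cell remembers which cells read it, and changing one input re-runs only the downstream formulas instead of the whole computation.

```python
# Toy spreadsheet-style incremental recomputation (illustration only).
# Each cell tracks its readers; a change re-runs only the formulas
# downstream of it, not the whole computation.

class Cell:
    def __init__(self, value=None, formula=None):
        self.value = value
        self.formula = formula   # zero-argument function that reads other cells
        self.readers = set()     # cells whose formulas read this one

    def get(self, reader=None):
        if reader is not None:
            self.readers.add(reader)   # record the dependency as it is read
        return self.value

    def set(self, value):
        if value != self.value:
            self.value = value
            for r in list(self.readers):
                r.recompute()

    def recompute(self):
        new = self.formula()
        if new != self.value:          # stop propagating if nothing changed
            self.value = new
            for r in list(self.readers):
                r.recompute()

# c = a + b, as in a spreadsheet
a, b = Cell(1), Cell(2)
c = Cell()
c.formula = lambda: a.get(c) + b.get(c)
c.recompute()
print(c.value)  # 3
a.set(10)       # only c's formula re-runs
print(c.value)  # 12
```

This is deliberately naive (no cycle detection, no memoization of intermediate results); what Acar et al. did, as described above, is get this kind of behavior rigorously and with static type safety out of ordinary batch-style programs.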

  2. 4

    I stumbled on this a couple months ago myself. It seems like an exciting approach with a wide array of possible applications, including artificial intelligence.

    I haven’t dived into the details of the article. I’m pretty sure it’s quite a difficult problem even in the abstract, and that any self-adjusting scheme would hinge on what kind of computation one wants to make self-adjusting, at least if it’s to be time-efficient at all. But even with that significant limitation, I still believe it’s an approach with massive potential.

    And all that said, I don’t see much evidence that the concept has gained traction: the first article I find on “self-adjusting computation” is Acar’s, and it has only ~80 citations, which is very few if this is supposed to be defining a field of study.

    Being an amateur computer science “buff”, I regularly use Google Scholar to follow the trajectory of CS ideas. In academia, I get the impression that “broad, exciting visions rife with possibilities” may get the attention of us amateurs and the general public, but they are generally dismissed by academics unless they have massive prestige and backing behind them. When I attended a top university years ago, a professor of mine mentioned over a beer that “the goal is not to be famous but anonymous in the right way”. Provide a tweak that improves the performance and reputation of something a hundred people are already working on, and you’ll have a hundred friends. Provide a clearly superior alternative that’s completely different from what those hundred people are working on, and you might have a hundred enemies.

    Douglas Hofstadter’s analysis of other AI projects in Fluid Concepts and Creative Analogies is instructive.

    What if creativity, fluidity, and multi-directional reasoning were the key to human intelligence, but are inherently undervalued in an academic or corporate setting? Remember that academics want to demonstrate and explore things that are personally unique to them, and if creativity is universal, it can’t be unique to any given academic approach. Consider how skill at chess was once considered the height of intelligence, until the development of computers showed it relied on raw processing power.

    1. 2

      You actually make this sound more interesting than I thought it would be, from reading the abstract. Maybe I should read the rest of it. :)

    2. 4

      I was trying to understand the basics of these ideas. I found that Matthew Hammer’s thesis slides present them at a pace I was able to follow, in case they are useful to others: http://www.cs.umd.edu/~hammer/selfadjmachines/self-adjusting-machines--hammer2012-slides.pdf

      Edit: The above slides help because they have an illustrative example. As a description in words, the following opening sentence from Ruy Ley-Wild’s thesis was helpful:

      “Self-adjusting computation is a paradigm for programming incremental computations that efficiently respond to input changes by updating the output in time proportional to the changes in the structure of the computation.”

      From the descriptions on the linked page (“Change the computation to track the data”), I was thinking in terms of learning algorithms, like neural networks that are always on, i.e. whose training phase never stops, but it turns out that’s not the case (a toy sketch of the “proportional to the changes” idea is at the end of this comment).

      Edit 2: Another lobste.rs article, the one about CAD and Python, strikes me as the kind of application where this could be applied for efficiency gains.
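      Coming back to the Ley-Wild sentence quoted above, here is a toy sketch of the “time proportional to the changes in the structure of the computation” part. It is my own illustration, not the technique from either thesis: keeping partial sums in a tree means changing one input re-does only O(log n) additions instead of re-summing all n inputs.

      ```python
      # Toy "proportional to the change" update: a tree of partial sums.
      # Changing one input fixes only the sums on the path to the root.

      class SumTree:
          def __init__(self, xs):
              self.n = len(xs)
              self.tree = [0] * (2 * self.n)
              self.tree[self.n:] = xs
              for i in range(self.n - 1, 0, -1):
                  self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

          def total(self):
              return self.tree[1]

          def update(self, i, value):
              i += self.n
              self.tree[i] = value
              while i > 1:              # walk up: O(log n) recomputed sums
                  i //= 2
                  self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

      xs = list(range(8))
      t = SumTree(xs)
      print(t.total())   # 28
      t.update(3, 100)   # only ~log2(8) partial sums are recomputed
      print(t.total())   # 125
      ```

      A batch program would re-sum the whole list after every change; as the original post describes it, the self-adjusting approach is about deriving this kind of incremental behavior from the batch program rather than hand-writing the tree.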

      1. 5

        Very cool.

        There are two parts: Hammer’s formulation of the basic problem/paradigm, and Hammer’s particular solution to that problem (his instance of the new paradigm). My vague sense is that Hammer’s solution is too complex, but that his initial formulation is useful.

        I suspect “programming by example” could be connected to this. See: http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-177.html

        1. 2

          Yes! I agree about Antimony. (It seems like, in fact, for that kind of thing it might be useful to go the whole nine yards and do a truth maintenance system for multidirectional constraint propagation and conflict detection, instead of the unidirectional propagation that Hammer and Acar and these guys have done so much amazing work on.)
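          To show what I mean by multidirectional propagation, here is a toy constraint in the propagator style (my own sketch, nowhere near a full truth maintenance system): an adder that enforces a + b = c and can fill in whichever value is missing, rather than only computing c from a and b the way one-way dataflow does. The `Var` and `Adder` names are made up for the example.

          ```python
          # Toy multi-directional constraint: a + b = c, solvable in any
          # direction, with a crude conflict check when all three are known.

          class Var:
              def __init__(self, name):
                  self.name, self.value, self.constraints = name, None, []

              def set(self, value):
                  self.value = value
                  for c in self.constraints:
                      c.propagate()

          class Adder:
              def __init__(self, a, b, c):
                  self.a, self.b, self.c = a, b, c
                  for v in (a, b, c):
                      v.constraints.append(self)

              def propagate(self):
                  a, b, c = self.a, self.b, self.c
                  known = [v for v in (a, b, c) if v.value is not None]
                  if len(known) == 3:
                      assert a.value + b.value == c.value, "conflict detected"
                  elif a.value is not None and b.value is not None:
                      c.set(a.value + b.value)
                  elif a.value is not None and c.value is not None:
                      b.set(c.value - a.value)
                  elif b.value is not None and c.value is not None:
                      a.set(c.value - b.value)

          x, y, z = Var("x"), Var("y"), Var("z")
          Adder(x, y, z)
          z.set(10)
          x.set(3)
          print(y.value)  # 7, solved "backwards" from z and x
          ```

          A real truth maintenance system would also record justifications so it could retract conclusions when an input is withdrawn, which is the part that seems most useful for conflict detection in a CAD setting.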