1. 38

  1. 7

    Amazing article, so much to explore here.

    There is something I think I can add, having thought about this while working on the development of Max for Live (the technology that integrates Max/MSP into Ableton Live):

    There may be somewhat more freedom than normal text, with the ability to place boxes anywhere on a 2D canvas (…)

    On the contrary, I believe that visual programming has much less freedom than text-based code, and that is precisely what makes it so appealing to very early programmers and to artists who don’t want to be developers.

    In a visual language, you have the objects at your disposal and the ways you can connect them. In text, every caret position offers a choice from roughly 128 (ASCII) characters. Without the context that you only get after your first few tutorials, text-based code has vastly more combinations than a visual language.
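
    Here is a quick, hand-rolled illustration of that asymmetry (the experiment and its numbers are mine, not from this thread): almost no random string of printable ASCII even parses as a valid expression, whereas any type-compatible wiring of boxes yields a working patch.

    ```python
    import random
    import string

    # Estimate how rarely random printable-ASCII text is syntactically valid,
    # as a stand-in for the raw "freedom" of the text medium.
    random.seed(0)
    trials, valid = 100_000, 0
    for _ in range(trials):
        s = "".join(random.choices(string.printable[:95], k=10))
        try:
            compile(s, "<candidate>", "eval")  # does it parse as an expression?
            valid += 1
        except SyntaxError:
            pass
    print(f"{valid}/{trials} random 10-character strings parse as Python expressions")
    ```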

    I believe that this, plus the absence of jargon (no need for terms like ‘argument’, ‘property’, ‘instance’ or ‘data type’; it’s just connecting a line to a box), makes the initial ratio between primary purpose and required secondary information much higher for visual languages, at least for beginners.

    1. 5

      I love this article. I’m a big fan of this kind of loose exploration of a conceptual space.

      The section acknowledging “visual interactive features” that eventually just become “features” hints at the spaces where new and useful ways of representing the abstractions we’re trying to understand might hide. I call trying to re-envision the abstraction in my head “rotating the object”, because I liken it to viewing the same thing from a different angle to see the new conceptual shapes that hang off of it. I wish we had more of these things. As the article describes later on:

      Viewed through this lens there’s almost an argument that visual spaghetti is a feature not a bug — at least you can directly see that you’ve created a horrible mess, without having to be much of a programming expert.

      I pine for more ways to represent abstractions in less abstract ways. At an individual level, it makes it easier for me to reason about things (in the same way that humans are said to be good at remembering spaces), and at a group level I suspect it makes it simpler to communicate with other people about these patterns as well. I’ve spent a lot of time in my career learning the heuristics that seem to work well for identifying helpful and unhelpful code; how nice it would be if I could pull up a diagram, automatically generated from the code, and say, “See that little curly bit hanging off right there? That’s a problem. It happened because of X, and it means that your only way to respond is Y.” New programmers could be taught the cost of letting those little curly bits show up. How many of us might finally grok monads if we could demonstrate the shape of one when it is conceptually represented the right way?
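
      As a tiny proof of concept of that “automatically generated diagram” idea (my own sketch, not anything from the article), one could walk a module’s syntax tree and emit a Graphviz call graph whose shape can then be inspected visually:

      ```python
      import ast

      def call_graph_dot(source: str) -> str:
          """Emit a Graphviz DOT call graph for the functions in `source`."""
          tree = ast.parse(source)
          edges = set()
          for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
              for node in ast.walk(fn):
                  if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                      edges.add((fn.name, node.func.id))
          body = "\n".join(f'  "{a}" -> "{b}";' for a, b in sorted(edges))
          return "digraph calls {\n" + body + "\n}"

      print(call_graph_dot(
          "def helper(x):\n"
          "    return x + 1\n"
          "def main():\n"
          "    return helper(helper(0))\n"
      ))
      ```

      Render the output with `dot -Tpng`, and “that little curly bit hanging off right there” becomes something you can literally point at.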

      Moving on - this quote from within the article reminds me of an exchange I had:

      Code is always written with “indentation” and other things that demonstrate that the 2d canvas distribution of the glyphs you’re expressing actually does matter for the human element. You’re almost writing ASCII art. The ( ) and [ ] are even in there to evoke other visual types. - nikki93

      In a different field, I was speaking to a coworker about how I would visually scan documents to determine what was different about them (these are regulation documents or letters and the like). I described how I would unfocus my eyes just a little bit and focus on the physical arrangement of the words, like how far out a sentence would jut compared to the one above or below it. My coworker was confused, and admitted he had no idea what I was talking about. This was when I realized I had invented a mental tool for myself.

      I think there’s an opportunity to invent more of these tools for ourselves, and I find visual features to be extremely effective at evoking that process. I’m hoping this article and other musings like it inspire people to discover ways they can share with the rest of us, and make it easier to see the dark side of the conceptual moon.

      1. 3

        This was a great read!

        One interesting node-based approach is the Notch VFX tool. It’s been a while since I tried it, but I recall the graph serving at least four functions simultaneously (a rough sketch follows the list):

        • Representing a “scene graph”: parent-child relationships between 3D objects so that they move in sync.
        • The usual dataflow programming, for instance passing a texture to a material node.
        • Plugging polymorphic types into “holes” in larger manager nodes. Think of a big particle-system node that has an input socket for a “particle renderer” instance, so you can mix and match functionality visually. The data flow over that wire is obviously bidirectional.
        • Node positions themselves also matter. I recall camera nodes deriving their relative priorities from their X or Y positions; the priority was then used to pick which view to show when multiple cameras were active.
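
        A toy model of that multi-role graph (the names and structure are my guesses, not Notch’s actual API), just to show how one node type can serve hierarchy, dataflow “holes”, and position-as-data at once:

        ```python
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            x: float = 0.0                # canvas position, reused as data below
            y: float = 0.0
            parent: "Node | None" = None  # function 1: scene-graph hierarchy
            inputs: dict = field(default_factory=dict)  # functions 2-3: wires and "holes"

        # Function 1: parent-child transforms move in sync.
        rig = Node("rig")
        cam_a = Node("cam_a", y=10.0, parent=rig)
        cam_b = Node("cam_b", y=40.0, parent=rig)

        # Function 3: a manager node with a socket for a pluggable renderer.
        particles = Node("particle_system")
        particles.inputs["renderer"] = Node("sprite_renderer")

        # Function 4: node position doubles as data, e.g. camera priority by Y.
        active = max([cam_a, cam_b], key=lambda c: c.y)
        print(f"active camera: {active.name}")
        ```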

        As stated in the article, visual programming is common in audio synthesis tools. One extreme example of this is the demoscene synth 4klang, which puts the musician in charge of programming the x87 floating-point stack! It works great :)
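
        In that spirit, here is a toy stack machine (my own opcodes, not 4klang’s real instruction set) showing what “the patch is a program for a value stack” looks like:

        ```python
        import math

        def run(program, t):
            """Evaluate a stack program at time t (seconds); return the top value."""
            stack = []
            for op, *args in program:
                if op == "osc":        # push a sine oscillator sample
                    stack.append(math.sin(2 * math.pi * args[0] * t))
                elif op == "const":    # push a constant
                    stack.append(args[0])
                elif op == "mul":      # pop two values, push their product
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
            return stack.pop()

        # A 440 Hz sine, amplitude-modulated by a 2 Hz sine: osc, osc, mul.
        patch = [("osc", 440.0), ("osc", 2.0), ("mul",)]
        print([run(patch, n / 44100) for n in range(4)])
        ```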

        If you’re into this kind of work, check out the “demotool” gallery I’ve collected.