I’m not sure how I feel about the author’s thesis — software is very different from the feelings we project onto living agents, e.g. it doesn’t get bored — but this “Actor Network Theory” they talk about is really interesting. I’m thinking of it in terms of UX and psychology, though:
If taken to its logical conclusion, then, nearly any actor can be considered merely a sum of other, smaller actors. A car is an example of a complex system. It contains many electronic and mechanical components, all of which are essentially hidden from view to the driver, who simply deals with the car as a single object. This effect is known as punctualisation, and is similar to the idea of encapsulation in object-oriented programming.
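To make the encapsulation parallel concrete, here’s a minimal Python sketch (the Engine, Battery, and Car classes are hypothetical, just for illustration, not anything from the article): the caller only ever touches the single Car object, while the network of parts stays hidden behind it.

```python
# Minimal sketch of the punctualisation/encapsulation analogy.
# Engine, Battery, and Car are hypothetical stand-ins; the point is only
# that the driver-facing interface is a single object.

class Engine:
    def start(self) -> None:
        print("engine: running")

class Battery:
    def supply_power(self) -> None:
        print("battery: supplying power")

class Car:
    """The 'punctualised' actor: many hidden parts, one public surface."""

    def __init__(self) -> None:
        self._engine = Engine()      # hidden from the caller
        self._battery = Battery()    # hidden from the caller

    def drive(self) -> None:
        # Internally, the network of parts cooperates...
        self._battery.supply_power()
        self._engine.start()
        print("car: moving")

# The driver (caller) deals only with the single black-boxed actor.
car = Car()
car.drive()
```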
Humans seem to work this way too, see Minsky’s The Society of Mind, and the broad consensus in cognitive science that the ego or conscious mind is just a thin frosting over the much larger unconscious (and is much less in control than it thinks).
When an actor network breaks down, the punctualisation effect tends to cease as well. In the automobile example above, a non-working engine would cause the driver to become aware of the car as a collection of parts rather than just a vehicle capable of transporting him or her from place to place. This can also occur when elements of a network act contrarily to the network as a whole. In his book Pandora’s Hope, Latour likens depunctualization to the opening of a black box. When closed, the box is perceived simply as a box, although when it is opened all elements inside it become visible.
That depunctualization happens a lot when bugs surface in user-level software — the site you’re reading stops responding, and suddenly you’re aware of browser tabs, the process manager, your DNS settings, pinging the router… Ideally, a system would be designed so that the nested components are also understandable, not a seething mass of whirring gears. We don’t do well at that, generally.
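One way to picture “nested components that are also understandable”, purely my own illustration rather than anything the article proposes: let each hidden part describe itself, so that when the black box is forced open the failure reads as a legible chain of actors instead of bare gears.

```python
# Illustrative only: each hidden component can explain itself, so that
# when the punctualised facade breaks down the exposed internals are
# legible rather than a "seething mass of whirring gears".

class Component:
    def __init__(self, name: str, healthy: bool = True) -> None:
        self.name = name
        self.healthy = healthy

    def describe(self) -> str:
        status = "ok" if self.healthy else "FAILED"
        return f"{self.name}: {status}"

class Car:
    def __init__(self) -> None:
        # Hypothetical parts; in the happy path the caller never sees them.
        self._parts = [Component("battery"), Component("fuel pump"),
                       Component("engine", healthy=False)]

    def drive(self) -> None:
        broken = [p for p in self._parts if not p.healthy]
        if broken:
            # Depunctualization: the box opens, but each part explains itself.
            report = "; ".join(p.describe() for p in self._parts)
            raise RuntimeError(f"car won't drive -- {report}")
        print("car: moving")

try:
    Car().drive()
except RuntimeError as exc:
    print(exc)   # "car won't drive -- battery: ok; fuel pump: ok; engine: FAILED"
```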
I had an experience recently where I was thinking deeply about a particular design, and I spontaneously began to imagine one of the components as a sort of old clerk who periodically collected messages from, and dropped new ones off at, a physical paper-based message board. It felt reassuring: it put my mind at ease about how immediately the system would need to respond to subscriptions and notifications. After reading the article, I’m eager to try this again in a more intentional way.
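For what it’s worth, here is a rough sketch of that clerk image in Python (MessageBoard, Clerk, and the topic names are placeholders, not the real design): delivery only happens on the clerk’s periodic rounds, which is exactly what took the pressure off instantaneous notification.

```python
# Rough sketch of the "old clerk" image: a paper message board plus a
# clerk who makes periodic rounds. All names are placeholders, not the
# actual design; the point is that delivery happens on the clerk's
# schedule, so subscriptions/notifications never need to be instant.

from collections import defaultdict

class MessageBoard:
    """The physical board: notes are pinned here until collected."""
    def __init__(self) -> None:
        self._notes: list[tuple[str, str]] = []   # (topic, message)

    def pin(self, topic: str, message: str) -> None:
        self._notes.append((topic, message))

    def collect_all(self) -> list[tuple[str, str]]:
        notes, self._notes = self._notes, []
        return notes

class Clerk:
    """Periodically collects notes and drops them off with subscribers."""
    def __init__(self, board: MessageBoard) -> None:
        self._board = board
        self._subscribers = defaultdict(list)     # topic -> callbacks

    def subscribe(self, topic: str, callback) -> None:
        self._subscribers[topic].append(callback)

    def make_rounds(self) -> None:
        # Driven by a timer in a real system; invoked by hand here.
        for topic, message in self._board.collect_all():
            for deliver in self._subscribers[topic]:
                deliver(message)

board = MessageBoard()
clerk = Clerk(board)
clerk.subscribe("orders", lambda msg: print("got:", msg))
board.pin("orders", "order #42 shipped")
clerk.make_rounds()   # delivery happens only when the clerk comes by
```

The design choice the image encodes is just polling: producers pin notes without knowing who will read them, and latency is bounded by how often the clerk makes rounds rather than by anything the subscribers do.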