1. 10

  2. 4

    Xerox PARC had a whole group researching “ubiquitous computing” in the ’90s.

    There was a lot of interest in big multitouch display surfaces about 15 years ago — basically a horizontal piece of frosted glass with a projector and a camera below. The really interesting part is that the camera can detect patterns printed on the bottoms of objects you place on the glass, so those objects become interactive. Board games are an obvious use, but there was also an audio/music app where the objects represented audio effects that were modulated as you moved or rotated them. (IIRC Björk used this on one of her albums, and perhaps live on tour.)
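
    The pattern-detection part is easy to play with today: OpenCV ships a fiducial-marker module, and a rough sketch of the tabletop trick might look like this (the filename, marker dictionary, and effect mapping are my own assumptions for illustration, not how any particular table actually worked; the API is OpenCV 4.7+):

    ```python
    import cv2
    import numpy as np

    # Sketch: a camera under the glass sees fiducial markers printed on
    # the bottoms of objects. The marker id identifies the object; the
    # corner geometry gives its position and rotation on the table.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    frame = cv2.imread("table_view.png")  # hypothetical under-table camera frame
    corners, ids, _ = detector.detectMarkers(frame)

    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            pts = quad.reshape(4, 2)
            center = pts.mean(axis=0)  # where the object sits on the glass
            edge = pts[1] - pts[0]     # first marker edge
            angle = np.degrees(np.arctan2(edge[1], edge[0]))  # its rotation
            # A music app would map (marker_id, center, angle) to an
            # audio effect and its parameters, e.g. rotation -> cutoff.
            print(marker_id, center, angle)
    ```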

    The “U1” ultra-wideband chip Apple’s using in AirTags lets objects be located wirelessly with really high accuracy; maybe it could be used for interactive objects like these.
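
    For what it’s worth, ultra-wideband location mostly comes down to range measurements, and turning ranges into a position is a small least-squares problem. Here’s a toy sketch with invented anchor positions and distances (real systems like Apple’s also use angle-of-arrival, and a moving phone stands in for fixed anchors; this is just the ranging geometry):

    ```python
    import numpy as np

    # Toy sketch: recover a tag's 2-D position from measured distances
    # to fixed anchors, the basic geometry behind UWB locating.
    anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # metres
    ranges = np.array([2.5, 3.20, 1.80])  # invented tag-anchor distances

    # |x - a_i|^2 = r_i^2; subtracting the first equation linearizes it:
    #   2 (a_i - a_0) . x = |a_i|^2 - |a_0|^2 - (r_i^2 - r_0^2)
    A = 2 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - (ranges[1:] ** 2 - ranges[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(position)  # ~[1.5, 2.0]
    ```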

    1. 2

      It’s a fun idea - I love the answering machine concept.

      But how do we handle waste in such a scenario?

      Will we someday just be walking around with millions of uniform little boxes that serve as containers for all of our information?

      … wait…

      1. 1

        Call me boring and unimaginative, but I think there’s a reason UIs are the way they are: they just work well.

        Take that jacket from Poupyrev’s talk, for example, where he skips to the next slide by brushing his sleeve. Neat, but how do you distinguish between normal human movements and a UI gesture? What if you have an itch? Or just want to wipe something off? Or someone brushes up against you on a busy street? There are all sorts of ordinary things you do with a jacket that could be accidentally interpreted as a UI gesture. Of course, you could have some way to enable/disable the gesture interface, but if you have that UI … why bother with the gesture interface? At that point it just seems like an unnecessary step.
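
        To make that concrete, here’s a toy sketch of the “arm first, then gesture” guard I mean; every name and event here is hypothetical, not any real wearable API:

        ```python
        import time

        class GestureGuard:
            """Only accept a sleeve swipe if the wearer explicitly armed
            the interface first, and only within a short window."""

            def __init__(self, window_s=3.0):
                self.window_s = window_s
                self.armed_at = None

            def on_double_tap(self):
                # An explicit, unlikely-by-accident action arms the UI.
                self.armed_at = time.monotonic()

            def on_swipe(self):
                # An itch or a crowd brushing past produces the same raw
                # event; it only counts as input while armed.
                if (self.armed_at is not None
                        and time.monotonic() - self.armed_at < self.window_s):
                    self.armed_at = None
                    return "next_slide"
                return None  # ignored as incidental motion

        guard = GestureGuard()
        guard.on_swipe()       # brushed in a crowd -> None, ignored
        guard.on_double_tap()  # deliberate arming action
        guard.on_swipe()       # -> "next_slide"
        ```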

        If I look at science-fiction shows like The Expanse, all the tech there looks neat on screen, but in the real world it would be quite impractical IMO. I can’t find a good video of it, but there are scenes of people zooming in and highlighting things just by pointing at a 3D display, or “grabbing” items in the display to send them to someone. Looks neat, but it never struck me as especially practical, with huge potential for accidental triggers.

        In short, UIs should be explicit.

        There’s also a security aspect here: how do you control who is allowed to do what? Never mind that a lot of the software will be of dubious quality. We also need to be careful that we don’t lose core functionality just because some piece of technology breaks (which is already a problem to some degree).

        The Dynamicland website and Twitter feed are surprisingly light on details about the actual practical problems this solves. Quite frankly, it just comes off as a collection of buzzwords and feel-goods.

        Undoubtedly some useful things will come rolling out of this, and I encourage that kind of research. But I’d be surprised if it fundamentally changes the way we interact with computers.