I developed this visual PostScript programming interface for the NeWS window system:
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines. Written by Don Hopkins, October 1989.
The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands-on way to learn about programming in PostScript and NeWS.
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
I also had a great time working with Bounce, aka Body Electric (developed at VPL for VR/DataGlove/MIDI/device-control real time data flow programming).
Bounce Stuff: Bounce is a real time visual data flow programming language, designed to create interactive graphical simulations, and to filter and control MIDI, serial, ethernet, and other devices.
https://medium.com/@donhopkins/bounce-stuff-8310551a96e3
Definitely check out “Snap!”, which is a wonderful visual block programming language like Scratch, implemented in JavaScript so it runs in the browser, with everything that’s important about Scheme, including first class functions, visual lexical closures, user defined blocks and control structures, special forms and macros, and even continuations!
Also, Alan Kay just sent me the Fabrik paper, which I had never seen before, and pointed out some influential work that was a precursor to drag and drop programming:
http://donhopkins.com/home/Fabrik%20PE%20paper.pdf
Thank you! I remember hearing the name Fabrik mentioned somewhere, but never found much to read about it so I don’t know much about it.
I’ll read up on it and integrate it into my articles!
Pie Menus: A 30 Year Retrospective
https://medium.com/@donhopkins/pie-menus-936fed383ff1
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS)): HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-goodnews-99f411e58ce4
I really enjoyed this paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite systems.
http://donhopkins.com/home/documents/taxonomy.pdf
It reminds me of the much more modern and comprehensive “Gadget Background Survey” that Chaim did at HARC, which includes your favorites Rocky’s Boots and Robot Odyssey, and his amazing SimCity Reverse Diagrams, and lots of great stuff I’d never seen before:
http://chaim.io/download/Gingold%20(2017)%20Gadget%20(1)%20Survey.pdf
I’ve also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C. Shu, and “Watch What I Do: Programming by Demonstration” edited by Allen Cypher.
Brad Myers wrote several articles in that book about his work, like Peridot and Garnet (which I briefly worked on with him at CMU, and which was very cool, but needed a bit more right-brain graphic design, if you know what I mean ;). To paraphrase Rumsfeld, “As you know, you go to screen with the graphics API you have, not the graphics API you might want or wish to have at a later time.”
https://medium.com/@donhopkins/constraints-and-prototypes-in-garnet-and-laszlo-84533c49c548
Here is another great comprehensive collection from 2014 by Eric Hosick of visual programming language screen shots and links:
http://blog.interfacevision.com/design/design-visual-progarmming-languages-snapshots/
And Alan Kay mentioned that this cool “Let’s Build a Halftime Show” parade programming scene from Thinkin’ Things was a precursor for drag and drop programming:
“Two precursors for DnD programming were in my grad student’s – Mike Travers – MIT thesis (not quite the same idea), and in the “Thinking Things” parade programming system (again, just individual symbol blocks rather than expressions).”
There’s a great demo of Thinkin’ Things on YouTube!
Thinkin’ Things Collection 3 Gameplay
https://youtu.be/gCFNUc10Vu8?t=24m58s
Here’s info about Mike Travers’ thesis:
http://alumni.media.mit.edu/~mt/
http://worrydream.com/refs/Travers%20-%20Recursive%20Interfaces%20for%20Reactive%20Objects.pdf
Oh my, I popped up a level and discovered that there’s a whole lot of cool stuff in Bret Victor’s worrydream refs directory!
That directory is a treasure trove! Lots of amazing historical stuff I’ve never seen before, like this:
Interaction at Lincoln Laboratory in the 1960’s: Looking Forward – Looking Back
Down the rabbit hole!
Note: My article under discussion is still a rough draft! Arthur insisted I take the focus off him and focus it on HyperLook itself, and I decided to widen it.
Now I’m describing how three exemplary apps, SimCity, Cellular Automata Machine, and Happy Tool, plug together and interact with each other in the HyperLook environment. How it uses message passing, delegation, prototypes, interface editing, property sheets and controls. And how structured PostScript graphics, data and code (PostScript data is polymorphic like JSON, and homoiconic code like Lisp) are the keystone that everything pivots around.
Here’s some more stuff about the NeWS programming environment, the PSIBER Space Deck, a visual PostScript programming and debugging environment for NeWS that I made early on at UMD HCIL:
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
There’s also a big part missing from the HyperLook article that I haven’t put in yet:
A videotape demo on YouTube of HyperLook, SimCity, the Cellular Automata Machine, PizzaTool, and RasterRap, which I’ve transcribed and made screenshots of, and will make into another article.
Unfortunately the video itself is terribly compressed, and I haven’t had a chance to set up the equipment to re-capture it from video tape.
But the illustrated transcript I’ll post soon will be easier to follow than the video, because it was recorded in the Exploratorium so there are kids screaming maniacally and people laughing uproariously in the background! ;) It was a pretty insane demo and the camerawoman almost dropped the camera, but I’m glad I got it all on tape! (Including the HappyTool back-story!)
HyperLook Demo
https://www.youtube.com/watch?v=avJnpDKHxPY
Demonstration of SimCity running under the HyperLook user interface development system, based on NeWS PostScript, running on a SPARCstation 2. Includes a demonstration of editing HyperLook graphics and user interfaces, the HyperLook Cellular Automata Machine, and the HyperLook Happy Tool. Also shows The NeWS Toolkit applications PizzaTool and RasterRap. HyperLook developed by Arthur van Hoff and Don Hopkins at the Turing Institute. SimCity ported to Unix and HyperLook by Don Hopkins. HyperLook Cellular Automata Machine, Happy Tool, The NeWS Toolkit, PizzaTool and Raster Rap developed by Don Hopkins. Demonstration, transcript and close captioning by Don Hopkins. Camera and interview by Abbe Don. Taped at the San Francisco Exploratorium.
EDIT: Just finished a big round of editing, please try again if you like! Next I do the transcript.
Here’s a recurring theme explained:
The Three Axis of AJAX, Which NeWS Also Has To Grind!!!
NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently unified code, graphics, and data:
We will return to these three important dimensions as a recurring theme throughout this article:
The Axis of Eval: Code, Graphics and Data.
How was PostScript as a fit for this kind of application? For instance, was it easy to add object encapsulation & message passing? Was it straightforward for users to modify running code?
PSIBER looks like it contains interesting ideas with regard to making internal state visible, but I worry about the learning curve of stack languages for this purpose. I see the killer application for a composable UI system like this as the dissolution of the user/programmer distinction, easing people into casual programming the way that the Unix command line eases people into casual shell scripting. A key part of that is having the language be something that a user can fit entirely into their head, but also something a fairly non-technical user can absorb through context. Stack languages, when the stack is not visible & unambiguous during iterative development, present something that must be understood and mentally simulated during development – which means homework before coding even begins. (For this reason, in Kaukatcr, the data stack, call stack, and dictionary are all part of the visible structure. But, I don’t think Kaukatcr really achieves the kind of ease-of-adoption I’m looking for, either.)
I don’t suppose you did any user studies of non-programmers learning to code via interacting with PSIBER & running applications?
I found it great! And adding message passing and encapsulation was super easy! Owen Densmore’s “Object Oriented Programming in NeWS” he presented at Monterey 86 Usenix Graphics Workshop showed how to do that in two pages of code!
Object Oriented Programming in NeWS, by Owen M. Densmore, Sun Microsystems. 1986 Monterey Usenix Computer Graphics Workshop. 1986. http://donhopkins.com/home/monterey86.pdf
This led us to look for a formalization of this style. A Smalltalk-like class mechanism seemed to fill this need. Just before our beta release, therefore, we decided to look for the extensions we would need to make to PostScript to support classes. Much to our surprise, PostScript could implement classes with no modifications! The secret is PostScript dictionaries.
NeWS’s object oriented PostScript dialect is based on Smalltalk’s model, and implemented with PostScript’s dictionary stack (which is purely dynamic scoping: just look up names in the dictionaries on the dictionary stack in order of precedence).
It was very straightforward to inspect and modify running code! PostScript is totally homoiconic (“not that there’s anything wrong with that” ;) – code is simply normal data like arrays (whose executable bit is set). And objects and classes are just normal dictionaries, pushed onto the dictionary stack in search order.
It supported multiple inheritance, which we used a lot.
Sidebar: But we weren’t as multiple-inheritance-happy as ScriptX was, whose collections (arrays, dictionaries, sets, etc.) were classes that many parts of the system inherited from, like the containment and clock hierarchies and timelines (you call the array append method to add children to them), while in PostScript (as in JavaScript), arrays and dictionaries are built-in, primitive non-object types that you can use to build classes and objects, but that aren’t inherited from directly themselves (so you call addChild to append children to them). What I mean is that in PostScript (JavaScript), the dictionary (object) is not a class that other objects INHERIT from, but the primitive data type that classes and objects are CONSTRUCTED from.
But to contrast with PostScript and JavaScript, ScriptX was designed from day 1 to be an object oriented system with multiple inheritance, so collections are first class objects that you can easily subclass and mix in to your own classes that like to contain things or act like maps, so all the normal iterating and filtering constructs and collection methods work on them. So in ScriptX you don’t loop over a container’s children, you loop over the container itself, since it doesn’t HAVE an array of children, it IS an array of children.
Flounder: I can’t believe I threw up in front of Dean Wormer. Pinto: Face it, Kent. You threw up ON Dean Wormer. -Animal House
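To make the HAS/IS contrast concrete, here is a hypothetical JavaScript sketch (the class names are mine, not from ScriptX or NeWS): one container holds a primitive array of children, the other subclasses the collection itself, so inherited collection methods work directly on the container.

```javascript
// PostScript/JavaScript style: the container HAS an array of children,
// and you go through an addChild method to append to it.
class HasChildren {
  constructor() { this.children = []; }      // primitive array, not inherited from
  addChild(child) { this.children.push(child); }
}

// ScriptX style: the container IS a collection, so generic iteration
// and collection methods work on the container itself.
class IsChildren extends Array {}

const a = new HasChildren();
a.addChild("x");
a.addChild("y");

const b = new IsChildren();
b.push("x");
b.push("y");

// With HasChildren you loop over a.children; with IsChildren you loop
// over b directly, and inherited Array methods just work on it:
console.log(a.children.length);                              // 2
console.log(b.map(c => c.toUpperCase()).join(","));          // X,Y
```

JavaScript happens to allow subclassing Array, so it can model the ScriptX style too, but its plain object literals behave like PostScript dictionaries: construction material, not superclasses.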
The syntax for sending a message to an object was “/foo obj send”, where “/foo” was like (QUOTE FOO) in Lisp, a literal name. Executable names are looked up on the dict stack, and their values executed (pushed onto the execution stack). Literal names are just pushed onto the operand stack.
“obj” resolved to some object (and could be pushed on the stack any way, you didn’t have to give it a name like “obj”, it just had to be in the right place at the right time).
And “send” took the name of a message (or an executable array), and an object. It first established the context by pushing all the target object’s dictionaries on the stack (the old version didn’t bother to pop the old object’s dicts off, but later versions did). Then it looked up the name on the new dictionary stack (or just used the executable array you passed in), and executed it. Then it restored the previous context.
Passing in an executable array to send instead of a message name was kind of an optimization shortcut, and formally it should be methodcompiled, but if you knew what you were doing (like you just wanted to send a sequence of consecutive messages to the same object and didn’t want to pay the cost of ping-ponging back and forth between contexts), you could just methodcompile it by hand and coalesce the messages into an executable array of executable names (and other parameters and operators if you wanted).
In this way PostScript was a lot like a dynamically scoped Lisp with macros!
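As a rough sketch of the send mechanism in JavaScript (all names here are hypothetical; NeWS did this with the real PostScript dictionary stack): send pushes the target object’s dictionaries onto a search stack, looks the message name up through that stack in precedence order, executes it, then restores the previous context.

```javascript
// Toy model of NeWS-style "send": an object is a stack of dictionaries,
// and message lookup is dynamic scoping through that stack.
const dictStack = [];  // global dictionary stack, most specific dict last

function lookup(name) {
  // Search the dictionary stack top (most specific) to bottom.
  for (let i = dictStack.length - 1; i >= 0; i--) {
    if (name in dictStack[i]) return dictStack[i][name];
  }
  throw new Error("undefined name: " + name);
}

// send: push the object's class dicts and instance dict, look up and
// run the method in that context, then restore the previous context.
function send(message, obj, ...args) {
  const depth = dictStack.length;
  dictStack.push(...obj.parentDictArray, obj.instance);
  try {
    return lookup(message)(...args);
  } finally {
    dictStack.length = depth;  // pop everything we pushed
  }
}

// Example: a class dict, and an instance dict that the method
// resolves names against dynamically (like /text here).
const labelClass = { paint: () => "painting " + lookup("text") };
const myLabel = {
  parentDictArray: [labelClass],
  instance: { text: "hello" },
};

console.log(send("paint", myLabel));  // painting hello
```

The key point the sketch shows is the dynamic scoping: paint finds “text” not through a self pointer, but because the instance dictionary happens to be on the stack while the method runs.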
You could also send the /promote message to an object with a name and a value as an argument, to dynamically define an instance variable. Or send an /installmethod message to an object with a name and a method. It would dynamically methodcompile the method into the object’s scope and install the method on the instance as a local method.
That is one of the important properties of a prototype based object system, that you can dynamically attach methods and properties directly to instances, and later undefine them when you want!
The layout and painting code would often lazily promote cached measurements into the instance on demand. There would be a “backstop” method in the class that computed an expensive value and used /promote to cache it in the instance, overriding the backstop method. Then later, in methods like /invalidate or /resize, other code could clear out any invalidated cached methods or properties when something they depended on changed, so they’d be recomputed and cached again on demand.
NeWS objects didn’t contain “self” references to themselves, but there was a function called “self” that figured out what the current object was by searching the dictionary stack top to bottom for an instance dictionary with a “ParentDict” key (later optimizations put a shared “ParentDictArray” in every instance of a class, which was quicker for switching contexts and searching than following links).
So it wasn’t very optimal to call “self” all the time, but formally you were supposed to. There was a very simple “method compiler” (like a Lisp macro) that optimized sends to the same object like “/foo self send” (push literal /foo, figure out self, and send message /foo to it) into just “foo” (look up foo on the current dictionary stack and execute it without switching context), and it resolved “/foo super send” references into direct inline references in the code (like “/foo supersend”), so no dynamic resolution was necessary.
Here is all there was to the (original version of the) method compiler (later versions may have been a bit more complex but not much more).
% Crack open the methods and fix for "super send" and "self send"
/methodcompile { % method parentdict => newmethod
    10 dict begin
    /superpending false def
    /selfpending false def
    /parentDict exch def
    [ exch
        {
            dup /send eq superpending selfpending or and {
                pop pop
                superpending
                    {parentDict /className get cvx /supersend cvx}
                    {cvx} ifelse
            } if
            dup type /arraytype eq {parentDict methodcompile} if
            dup /super eq /superpending exch def
            dup /self eq /selfpending exch def
        } forall
    ] cvx
    end
} def
And here’s all there is to send, supersend and self (later, send was fixed to pop the previous instance off the stack before sending and restore it when done, the object’s parentDict key was changed to parentDictArray, and I think send was eventually implemented as a built-in operator since we used it so much):
% Generic Smalltalk-ish Primitives.

% Send a message to an object.
/send { % <args> message object => <results>
    dup /parentDictArray get {begin} forall
    begin
    cvx exec
    parentDictArray length 1 add {end} repeat
} def

% Send a message to super without popping myself.
/supersend { % <args> keywordmessage superclass => <results>
    exch { 2 copy known {exit} {exch /parentDict get exch} ifelse } loop
    get exec
} def

% Put me on the operand stack.
/self { /parentDict where pop } def
Here’s another great early resource about NeWS programming. But it doesn’t cover the later refinements and optimizations, such as multiple inheritance, or using dict-like canvases, events, and processes (threads) directly as objects (making the actual on-screen canvases, event interest templates, and event managers into objects themselves, instead of separate objects that just refer to them):
The NeWS Book. An Introduction to the Network/extensible Window System. James Gosling, David S. H. Rosenthal. Michelle J. Arden. David A. LaVallée. Sun Microsystems. 1989. http://donhopkins.com/home/The_NeWS_Book_1989.pdf
Fantastic! I hadn’t realized that PostScript had first-class associative array support – that makes any prototype-based object system a lot easier.
It looks like message passing is an immediate call, as opposed to adding to an object’s message queue to be handled at task switch time. Is this accurate? Was there any kind of task scheduling with regard to how objects handled their messages? If not, did this ever cause real problems (like accidental forward loops that prevent execution on other widgets from proceeding)?
I’ll have to read the NeWS documentation. I had been vaguely aware of PostScript-based windowing for a while but I didn’t realize it had Smalltalk-like features.
Yes that’s right. NeWS also had events and threads and monitor locks for synchronization, but those were a lot heavier weight than sending messages.
The wonderful thing about NeWS was its air-tight event synchronization, that users could actually feel in their sphincters: when the system got busy and slowed down like it always (still) does, you could be confident that mouse and keystroke events would be delivered perfectly, in the correct order, to the correct recipient, so you didn’t have to suspensefully freeze and hang on the edge of your seat after clicking or typing, then stressfully wait for the system to catch up with you, hovering with your mouse button held down while holding your breath and gritting your teeth, worrying it might miss an event or deliver it to the wrong window, making your asshole tighten up stressfully in trepidation, like how X-Windows or even modern Mac and Windows desktops still do.
I still experience this exhilarating stressful anal tightening effect every time I press Cmd-F in Medium to search while in edit mode, because if I immediately start typing my search string, the first several characters get inserted somewhere in the document, then it starts searching for the partial string and the cursor moves away so I can’t see where it dropped a turd, resulting in accidentally saving documents with titles like “uckingPie Menus” that I don’t realize until a day later.
NeWS event distribution was synchronous. A process would express “interests” in events it wanted to handle. An interest was just an event used as a template or pattern for specifying which events a process was interested in, and which handlers to use when they are received. Certain important interests like MouseDown, whose handling might change the event distribution policy (by expressing other interests in MouseMove and MouseUp, opening or closing or moving windows around, changing the input focus, etc.), could be marked as “synchronous”. That meant that when they were delivered from the device input queue to the NeWS global event queue, ALL distribution of other NeWS events was temporarily blocked, until the synchronous event handler (either the receiving process like the global event manager, or some other process like a client event manager that it directly dispatched the event to) explicitly unblocked the input queue, finally giving the system permission to continue distributing events after any changes had been made (like closing a window, or changing the input focus from the document to the search field, for example). So there was a hard guarantee that if you typed a character that closed a window, the next character you typed would be correctly delivered to the window under the cursor AFTER the window was closed, instead of accidentally being put on the input queue of the window about to close.
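A minimal sketch of that gating policy (hypothetical, in JavaScript; NeWS implemented this with real processes and queues): marking an interest synchronous blocks all further distribution until the handler explicitly unblocks the queue, so events that change focus or window state can’t race ahead of their consequences.

```javascript
// Toy model of NeWS-style synchronous event distribution:
// a synchronous event halts the queue until its handler calls unblock().
class EventQueue {
  constructor() {
    this.queue = [];
    this.blocked = false;
    this.handlers = {};  // event type -> { fn, synchronous }
  }
  expressInterest(type, fn, synchronous = false) {
    this.handlers[type] = { fn, synchronous };
  }
  post(event) {
    this.queue.push(event);
    this.distribute();
  }
  distribute() {
    while (!this.blocked && this.queue.length > 0) {
      const event = this.queue.shift();
      const interest = this.handlers[event.type];
      if (!interest) continue;
      if (interest.synchronous) this.blocked = true;  // gate the queue
      interest.fn(event);
    }
  }
  unblock() {
    // The handler gives permission to continue distribution, after any
    // changes (closed windows, moved focus, etc.) have been made.
    this.blocked = false;
    this.distribute();
  }
}

// Usage: a synchronous MouseDown retargets the focus, so the
// following KeyPress is guaranteed to land in the new focus.
const q = new EventQueue();
const log = [];
let focus = "document";
q.expressInterest("MouseDown", () => {
  focus = "searchField";   // change the input focus...
  q.unblock();             // ...then let distribution continue
}, true);
q.expressInterest("KeyPress", (e) => log.push(focus + ":" + e.key));
q.post({ type: "MouseDown" });
q.post({ type: "KeyPress", key: "a" });
console.log(log.join(","));  // searchField:a
```

Without the blocked flag, the KeyPress could be distributed before the MouseDown handler finished changing the focus, which is exactly the Cmd-F race described above.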
I really miss being able to relax confidently after I click the mouse or press a key, and not worry the event will get delivered to the wrong place at the right time. I’m sure it’s taken years off of my life by raising my blood pressure and blocking up my digestive system!
It seems useful from the perspective of guaranteeing that user input & focus is totally determinate. Did it ever make having totally independent windows update simultaneously a problem (say, if one of the two had a much heavier load)?
Somebody sent me the white paper of a cryptocurrency scheme he was promoting and asked for my comments. I will abstract the name to [Color][Fart] to protect the guilty. Their goal is to produce a zero emission world. (heh heh heh)
Hi Don,
I hope you’re doing well!
I wanted to get your thoughts on a project I have been working on with an international group called [Color][Fart], a blockchain solution for creating a zero-emission world.
We are about to release our [Color][Fart] white paper publicly, and I was hoping you could take a look first? It’s quite a long read, but I would already be thrilled if you could take a look at the summary on page 1. I would really like to hear your thoughts.
Please feel free to share this white paper with anyone that you think may be interested, perhaps specifically in the energy and real estate space, or in crypto currencies in general. We’re always trying to get better from feedback.
We’ve been having a number of exciting conversations with folks from around the world, most recently at the World Economic Forum in Davos, and we’re hoping to build on that strong momentum before launching publicly.
Best,
[FirstName] [LastName] chairman & founder [Color][Fart] The leading blockchain solution to a zero emission world
White Paper:
[Color][Fart] White Paper: [Color][Fart]: The leading blockchain solution to a zero emissions world.
Summary: Creating a zero emission world may be both the greatest environmental challenge & the greatest financial opportunity of our generation.
[Cows] cause 33% of global climate changing greenhouse gasses through the [bla bla bla bla bla bla bla bla bla] [Color][Fart] [Cow] [Fart] [bla bla bla bla bla bla bla bla bla bla bla] [Fart] [Fart] [Fart] [bla bla] [Cow] [bla bla bla] the final [Color][Fart] [Cow] solution. That’s the beauty of it! Cryptocurrency is basically one big pyramid scam. Those that get in on the top get rich. Those that get in on the bottom get fucked. So: do you want to get rich, or do you want to get fucked?
Now where have I heard that before???
https://youtu.be/jmaRTZpJgPA?t=2m10s
And where else have I seen that Brock Pierce guy??? Oh yeah:
https://youtu.be/g6iDZspbRMg?t=19m42s
So I replied:
Yes, I’m doing well, and [bla bla bla]! I’ve [bla bla bla], including [bla], and it’s quite [bla bla bla], enabling me to [bla bla bla]! You should have seen the look on his face.
Thanks for sending me the [Color][Fart] white paper. I’ve read the summary, but not the whole paper.
At first from the title I was quite excited because it sounded like you had come up with a zero emission block chain.
Isn’t the block chain itself (or at least Bitcoin as currently implemented) extremely and necessarily wasteful of energy?
That’s the huge terrible problem with the blockchain that really needs to be solved for the good of humanity.
And as far as I know it’s a difficult unsolved problem, and will be for a long time. I thought all of the proposed solutions so far have serious flaws. (Even Bram Cohen’s “proof of disk storage” still requires some “proof of work”, and a shitload of disk drives, which require power and create pollution themselves.)
So I don’t see how building a “zero emission” system on top of an extremely energy wasteful system could be itself very efficient.
If each transaction uses as much energy as it takes to run your house for a week, then how many transactions will it take to run a [Color][Fart] [Cow] for a week, and how many houses of energy will that have to somehow offset just to break even?
I also don’t understand what it is about using a blockchain that magically makes this plan work (and surmounts all the chicken and egg problems of coming out with a protocol that you want everyone to suddenly start using), that couldn’t be done in some less wasteful (albeit less trendy and headline catching) way.
What difficult and otherwise unsolvable problem does the blockchain solve in this context, that can’t be solved otherwise?
And how does that compare in difficulty to the other intractable problem of getting everyone to trust your company and invest the time and money in using your protocol somehow?
Or is the whole point of using the block chain that everyone who uses your protocol will not need to trust you, and get rich quick, so everyone will want to sign up?
-Don
Never heard back! ;(
This has a lot of overlap with ideas I’ve been working on independently (like small computing, composable GUIs, implicit over-the-network message passing on cluster computers, and stack based languages with visible state). Presumably Don & I have both been paying a lot of attention to Alan Kay :). Glad there are fellow-travelers around – for a long time it was easy to believe I was the only Xanadu/Smalltalk/ZUI guy around!
I really like the “tentative guidelines for composable uis” post. Going to save and reflect on that a bit.
I’ve been working on a live-codeable interactive environment inspired by Self/Smalltalk etc., but using a Lua prototype-inherited scopes system (where code is “eval’d” in reified environments that can inherit from each other). After some iteration on such things I’ve realized it helps to have a concrete use case (allows testing, motivation, and empathizable communication with other people), but at the same time the choice of good use cases is important: if I choose a “make boring CRUD apps” use case, it basically involves porting existing libs + concepts and limits experimentation – so I went with a ‘generative art tool’ use case. This seems to let folks that look at it challenge their current ways of thinking about “software development.” It can evolve from static to animated art, then simulations (thus allowing games) and eventually hoping for network collaboration etc. Here are some videos:
https://www.youtube.com/watch?v=zDGzEUJscYE (making an art sketch)
https://www.youtube.com/watch?v=5-mxbhHBFOw (making another sketch)
https://www.youtube.com/watch?v=rRMeOGc1JLQ (slightly older, using it on the phone, you can see the browsing / inheritance here)
As you may have noticed, here too there is a concept of ‘sending messages’ to the scopes – all a message is is some code to eval at the scope, and that’s how you program objects to begin with anyways.
Some of the ‘composable UI’ stuff I’ve explored here is that like the .__tostring(...) metatable function Lua datatypes can support, the ‘console’ window will call a .__toui(...) metatable function on values you try to print in it (if defined), so that objects can provide their own renderers, you can set custom renderers for slots that you add to scopes (going to explore this soon for color picker widgets), …
I’m trying to use terminology like ‘perform’ etc. and more other art/human oriented words to move this tool away from “software engineering as a career choice” style orientation, as I think some of your blog posts also touch on. ‘Mindstorms’ by Seymour Papert along with some other readings are fun to explore here…
This is fantastic!
Some of my earlier experiments in the composable-UI vein used Lua, but I found that it was easy to run up against both coroutine problems (lack of preemption) & limitations in the maximum number of identifiers in the global namespace. Your work here looks a lot more advanced than mine ever got.
(My current prototype is in Io, but I discovered that I would essentially need to rewrite Io to get a working system, because it stopped being maintained years ago & has problems with its speculative execution based thread planning.)
Yeah def. understand the global identifiers thing – in my prototype above globals are by default written to the ‘current object’ and scopes can inherit, so it sort of works like process environments in UNIX. You can really bend Lua to your will a lot – I do it by setting the metatable of the environment that code is eval’d in.
Thanks for the nice words. :) I really like your writings so will be digging in there more. Definitely feels like you’ve thought about this stuff a lot and there’s good overlap. Will update you as I make more progress on this. Let me know if / when you have any more sources for me to grok!
There’s a group of people interested in the subject of composable UIs, hypertext, and utopian attempts to fix what’s broken about computing as a whole, over on mastodon. Most of my discussion happens there, & a lot of what I write on Medium is a refined version of discussions I have there. You might find that stimulating – I don’t represent the views of the whole community, which overlaps with the generative art scene.
(My current prototype is in Io, but I discovered that I would essentially need to rewrite Io to get a working system, because it stopped being maintained years ago & has problems with its speculative execution based thread planning.)
I didn’t realize Io wasn’t maintained anymore! That makes me sad. By happenstance, I’d looked recently and found the repo itself to be quite active, but it does mostly look like keep-it-going maintenance, not heavy work. Ah well. I recall it having its own pretty cool UI toolkit back in the day, too.
That said, would you mind explaining a bit about the speculative thread planning? I was just an undergrad last I used it, and I’d thought Io had a pretty normal cooperative threading system; this makes it sound like I really misunderstood something pretty cool, but I can’t find much (any?) info about this on the language site. (All I could find was the note, “The Scheduler object is responsible for resuming coroutines that are yielding. The current scheduling system uses a simple first-in-first-out policy with no priorities.”)
There’s some kind of complicated heuristic for determining whether or not coroutines have already exited, when determining whether or not to transfer control to them. I ran into false positives with regard to that behavior, which were not entirely reproducible. I asked about the behavior in the irc channel, & was told that this was a known bug with the scheduler system, and one of the reasons active development was halted – the author didn’t think he could get the behavior right in C, if I understand the history correctly.
I started implementing a new version in Go (a language I don’t know, but one that has support for channels and real multithreading built-in). This should allow me to more easily make it support a smalltalk-style image-based format with a history & support for transactions & rolling back execution, too, so it’s a general win. (Plus, since I don’t need to keep full compatibility with Io, I can break that compatibility if it makes it easier to make my composable UI system – the important bits are message passing, multi-threading, a prototype-based object system, and a simple syntax with few keywords to memorize, all of which can be preserved.)
Hell yes on composable components! Great articles, thanks!
Here’s the money shot from an HN article I just posted about that:
Valerie Landau interviewed by Martin Wasserman
Q: Do you have any last minute comments or observations about him to finish up. Or a good anecdote?
A: I think – I wanted to say one thing that Doug told me many years ago. And this is really for the software developers out there. Once, this was in the 90’s. And I said, Doug, Doug, I’m just started to get involved with software development, and we have this really cool tool we’re working on. Do you have any advice, about … for a young software developer. He looked at me and said:
“Yes. Make sure that whatever you do is very modular. Make everything as modular as possible. Because you know that some of your ideas are going to endure, and some are not. The problem is you don’t know which one will, and which one won’t. So you want to be able to separate the pieces so that those pieces can carry on and move forward.”
And the source code is available from Don Hopkins’ webpage.
Alas, that’s only the binary distribution of the HyperLook runtime and SimCity. Unfortunately, the PostScript source is obscured by tokenizing it into binary and stripping the comments. And I deeply regret that the actual source code for HyperLook itself is lost to me in the sands of time (unless Dug or Arthur has it on a tape somewhere), although I still have the SimCity sources.
I scanned all the manuals for older versions of HyperNeWS, the entire HyperLook manual (it was pretty big), and the SimCity manual, which we made in the NeWS version of FrameMaker (aka PainMaker: it’s RIDDLED with FEATURES!) Links to those are at the end of the article.
I’d love to run it again in an emulator (I bet it’d run faster than ever) and make some screencasts of demos, but my SS2 is in storage in the US. If somebody could please give me some help getting X11/NeWS to run on a SparcStation emulator I’d buy them a lot of beer or whatever they needed to tolerate raw unshielded doses of early Solaris. I have tried but haven’t been able to find the right images and get them to work.
Best yet would be to run a SparcStation emulator in the browser, then anybody could actually run the original version of HyperLook and SimCity! Has anybody done that and configured it to boot up a version of Solaris that runs OpenWindows on a dumb color framebuffer? That would be fine!
It’s the 30 year anniversary of CHI’88 (May 15–19, 1988), where Jack Callahan, Ben Shneiderman, Mark Weiser and I (Don Hopkins) presented our paper “An Empirical Comparison of Pie vs. Linear Menus”. We found pie menus to be about 15% faster and with a significantly lower error rate than linear menus!
So I’ve written up a 30 year retrospective:
This article will discuss the history of what’s happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.
Here is the main article, and some other related articles:
Pie Menus: A 30 Year Retrospective. By Don Hopkins, Ground Up Software, May 15, 2018. Take a Look and Feel Free!
https://medium.com/@donhopkins/pie-menus-936fed383ff1
This is the paper we presented 30 years ago at CHI’88:
An Empirical Comparison of Pie vs. Linear Menus. Jack Callahan, Don Hopkins, Mark Weiser, and Ben Shneiderman. Computer Science Department, University of Maryland, College Park, Maryland 20742; Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303. Presented at ACM CHI’88 Conference, Washington DC, 1988.
https://medium.com/@donhopkins/an-empirical-comparison-of-pie-vs-linear-menus-466c6fdbba4b
Open Sourcing SimCity. Excerpt from pages 289–293 of “Play Design”, a dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science by Chaim Gingold.
https://medium.com/@donhopkins/open-sourcing-simcity-58470a275446
Recommendation Letter for Krystian Samp’s Thesis: The Design and Evaluation of Graphical Radial Menus. I am writing this letter to enthusiastically recommend that you consider Krystian Samp’s thesis, “The Design and Evaluation of Graphical Radial Menus”, for the ACM Doctoral Dissertation Award.
https://medium.com/@donhopkins/don-hopkins-october-31-2012-e0166ec3a26c
Constructionist Educational Open Source SimCity. Illustrated and edited transcript of the YouTube video playlist: HAR 2009: Lightning talks Friday. Videos of the talk at the end.
How to Choose with Pie Menus — March 1988.
https://medium.com/@donhopkins/how-to-choose-with-pie-menus-march-1988-2519c095ba59
BAYCHI October Meeting Report: Natural Selection: The Evolution of Pie Menus, October 13, 1998.
https://medium.com/@donhopkins/baychi-october-meeting-report-93b8e40aa600
The Sims Pie Menus. The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo.
https://medium.com/@donhopkins/the-sims-pie-menus-49ca02a74da3
The Design and Implementation of Pie Menus. They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991.
https://medium.com/@donhopkins/the-design-and-implementation-of-pie-menus-80db1e1b5293
Gesture Space.
https://medium.com/@donhopkins/gesture-space-842e3cdc7102
Empowered Pie Menu Performance at CHI’90, and Other Weird Stuff. A live performance of pie menus, the PSIBER Space Deck and the Pseudo Scientific Visualizer at the CHI’90 Empowered show. And other weird stuff inspired by Craig Hubley’s sound advice and vision that it’s possible to empower every user to play around and be an artist with their computer.
OLPC Sugar Pie Menu Discussion. Excerpts from the discussion on the OLPC Sugar developer discussion list about pie menus for PyGTK and OLPC Sugar.
https://medium.com/@donhopkins/olpc-sugar-pie-menu-discussion-738577e54516
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland.
Pie Menu FUD and Misconceptions. Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.
https://medium.com/@donhopkins/pie-menu-fud-and-misconceptions-be8afc49d870
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989. Written by Don Hopkins, October 1989. University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
The Amazing Shneiderman. Sung to the tune of “Spiderman”, with apologies to Paul Francis Webster and Robert “Bob” Harris, and with respect to Ben Shneiderman.
https://medium.com/@donhopkins/the-amazing-schneiderman-9df99def882f
And finally this has absolutely nothing to do with pie menus, except for the shape of a pizza pie:
The Story of Sun Microsystems PizzaTool. How I accidentally ordered my first pizza over the internet.
https://medium.com/@donhopkins/the-story-of-sun-microsystems-pizzatool-2a7992b4c797
Also, a more modern pie demo was tried within Quicksilver by Nick Jitkoff many years ago: https://www.youtube.com/watch?v=d4LkTstvUL4&feature=youtu.be&t=18m17s
(Shameless plug, and not a pie menu: I spent a chunk of time experimenting with a gesture/zooming palette menu of sorts with https://thimblemac.com – the idea was to find something that felt a bit more scalable than pie menu layouts, but I arguably went too far in the steps needed to trigger the gesture for it.)
Wow, that is really nice! Does it somehow magically integrate itself with existing unmodified unsuspecting tools like Photoshop and Sketchup? Could it be applied to many other applications too? I’ll read your blog post about making thimble.
But before I do, and speaking of magic, I’d love to introduce you to Morgan Dixon’s brilliant and important work on Prefab: The Pixel-Based Reverse Engineering Toolkit.
I wrote a summary and links to his work in relation to an idea I have called “aQuery – like jQuery for Accessibility” on Hacker News. I’d love to figure out a way to work on this stuff so many people can easily use it!
https://news.ycombinator.com/item?id=11520967
Here are some ideas and discussion about aQuery – like jQuery for accessibility – which would be useful for implementing this kind of stuff on top of existing window systems and desktop applications.
http://donhopkins.com/mediawiki/index.php/AQuery
Morgan Dixon’s work is truly breathtaking and eye opening, and I would love for that to be a core part of a scriptable hybrid Screen Scraping / Accessibility API approach.
Screen scraping techniques are very powerful, but have limitations. Accessibility APIs are very powerful, but have different limitations. But using both approaches together, screencasting and re-composing visual elements, and tightly integrating it with JavaScript, enables a much wider and interesting range of possibilities.
Think of it like augmented reality for virtualizing desktop user interfaces. The beauty of Morgan’s Prefab is how it works across different platforms and web browsers, over virtual desktops, and how it can control, sample, measure, modify, augment and recompose GUIs of existing unmodified applications – even perform dynamic language translation – so they’re much more accessible and easier to use!
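Prefab itself models widget features, content regions, and hierarchy, but the kernel of the pixel-based approach – matching a widget prototype against a screenshot – can be sketched naively (toy 2D arrays standing in for real bitmaps):

```python
def find_widget(screen, prototype):
    """Naive Prefab-style search: slide a pixel prototype over a screenshot
    and return every (row, col) position where it matches exactly."""
    H, W = len(screen), len(screen[0])
    h, w = len(prototype), len(prototype[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(screen[r + i][c + j] == prototype[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits

# A 5x4 "screenshot" containing one 2x2 "button" at row 1, column 1.
screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
    [0, 0, 0, 0, 0],
]
button = [[1, 2],
          [3, 4]]
print(find_widget(screen, button))   # → [(1, 1)]
```

The real system is vastly smarter than exact matching (it handles variable-size widgets, themes, and content), but once you can locate widgets from pixels alone, everything else – overlays, remapping, translation – can be layered on top.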
–
James Landay replies:
Don,
This is right up the alley of UW CSE grad student Morgan Dixon. You might want to also look at his papers.
–
Don emails Morgan Dixon:
Morgan, your work is brilliant, and it really impresses me how far you’ve gone with it, how well it works, and how many things you can do with it!
I checked out your web site and videos, and they provoked a lot of thought so I have lots of questions and comments.
I really like the UI Customization stuff, and also the sideviews!
Combining your work with everything you can do with native accessibility APIs, in an HTML/JavaScript based, user-customizable, scriptable, cross platform user interface builder like (but transcending) HyperCard would be awesome!
I would like to discuss how we could integrate Prefab with a Javascriptable, extensible API like aQuery, so you could write “selectors” that used prefab’s pattern recognition techniques, bind those to JavaScript event handlers, and write high level widgets on top of that in JavaScript, and implement the graphical overlays and gui enhancements in HTML/Canvas/etc like I’ve done with Slate and the WebView overlay.
Users could literally drag controls out of live applications, plug them together into their own “stacks”, configure and train and graphically customize them, and hook them together with other desktop apps, web apps and services!
For example, I’d like to make a direct manipulation pie menu editor, that let you just drag controls out of apps and drop them into your own pie menus, that you can inject into any application, or use in your own guis. If you dragged a slider out of an app into the slice of a pie menu, it could rotate it around to the slice direction, so that the distance you moved from the menu center controlled the slider!
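That slider-in-a-slice idea is just a projection: take the cursor’s displacement from the menu center, project it onto the slice’s direction, and map that distance onto the slider’s range. A minimal sketch, with my own coordinate conventions (0° = up, screen coordinates, y increasing downward) and a hypothetical 200-pixel full throw:

```python
import math

def slider_in_slice(dx, dy, slice_angle_deg, lo, hi, full_throw=200.0):
    """Project cursor displacement from the menu center onto a slice's
    direction, and map that distance onto the slider's value range."""
    rad = math.radians(slice_angle_deg)
    # Component of (dx, dy) along the slice direction (0 degrees = up).
    along = dx * math.sin(rad) - dy * math.cos(rad)
    along = max(0.0, min(along, full_throw))  # clamp to the slider's throw
    return lo + (hi - lo) * along / full_throw

# A slider in the "up" slice, cursor 100 px above center: halfway along.
print(slider_in_slice(0, -100, 0, 0, 100))   # → 50.0
```

Because only the component along the slice direction matters, the rotated slider behaves consistently no matter which slice it was dropped into.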
While I’m at it, here’s some stuff I’m writing about the jQuery Pie Menus. http://donhopkins.com/mediawiki/index.php/JQuery_Pie_Menus
Web Site: Morgan Dixon’s Home Page.
https://web.archive.org/web/20170616115503/http://morgandixon.net/
Web Site: Prefab: The Pixel-Based Reverse Engineering Toolkit.
https://web.archive.org/web/20130104165553/http://homes.cs.washington.edu/~mdixon/research/prefab/
Video: Prefab: What if We Could Modify Any Interface? Target aware pointing techniques, bubble cursor, sticky icons, adding advanced behaviors to existing interfaces, independent of the tools used to implement those interfaces, platform agnostic enhancements, same Prefab code works on Windows and Mac, and across remote desktops, widget state awareness, widget transition tracking, side views, parameter preview spectrums for multi-parameter space exploration, prefab implements parameter spectrum preview interfaces for both unmodified Gimp and Photoshop:
http://www.youtube.com/watch?v=lju6IIteg9Q
PDF: A General-Purpose Target-Aware Pointing Enhancement Using Pixel-Level Analysis of Graphical Interfaces. Morgan Dixon, James Fogarty, and Jacob O. Wobbrock. (2012). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’12. ACM, New York, NY, 3167-3176. 23%.
Video: Content and Hierarchy in Prefab: What if anybody could modify any interface? Reverse engineering guis from their pixels, addresses hierarchy and content, identifying hierarchical tree structure, recognizing text, stencil based tutorials, adaptive gui visualization, ephemeral adaptation technique for arbitrary desktop interfaces, dynamic interface language translation, UI customization, re-rendering widgets, Skype favorite widgets tab:
http://www.youtube.com/watch?v=w4S5ZtnaUKE
PDF: Content and Hierarchy in Pixel-Based Methods for Reverse-Engineering Interface Structure. Morgan Dixon, Daniel Leventhal, and James Fogarty. (2011). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’11. ACM, New York, NY, 969-978. 26%.
Video: Sliding Widgets, States, and Styles in Prefab. Adapting desktop interfaces for touch screen use, with sliding widgets, slow fine tuned pointing with magnification, simulating rollover to reveal tooltips: https://www.youtube.com/watch?v=8LMSYI4i7wk
Video: A General-Purpose Bubble Cursor. A general purpose target aware pointing enhancement, target editor:
http://www.youtube.com/watch?v=46EopD_2K_4
PDF: Prefab: Implementing Advanced Behaviors Using Pixel-Based Reverse Engineering of Interface Structure. Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’10. ACM, New York, NY, 1525-1534. 22%
PDF: Prefab: What if Every GUI Were Open-Source? Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’10. ACM, New York, NY, 851-854.
Morgan Dixon’s Research Statement:
Community-Driven Interface Tools
Today, most interfaces are designed by teams of people who are collocated and highly skilled. Moreover, any changes to an interface are implemented by the original developers and designers who own the source code. In contrast, I envision a future where distributed online communities rapidly construct and improve interfaces. Similar to the Wikipedia editing process, I hope to explore new interface design tools that fully democratize the design of interfaces. Wikipedia provides static content, and so people can collectively author articles using a very basic Wiki editor. However, community-driven interface tools will require a combination of sophisticated programming-by-demonstration techniques, crowdsourcing and social systems, interaction design, software engineering strategies, and interactive machine learning.
–
agumonkey writes:
Small temporary answer while I unwind all the linked content, Dixon’s target aware pointing is already missing so much. I wonder how on earth nobody in smartphone land thought to implement something similar. I’m already hooked :)
–
Don replies:
It’s missing from many contexts where it would be very useful, including mobile. It’s related in many ways to those mobile GUI, web browser and desktop app testing harnesses. It could be implemented as a smart scriptable “double buffered” VNC server (for maximum efficiency and native Accessibility API access) or client (for maximum flexibility, but less efficiency).
The way jQuery widgets can encapsulate native and browser specific widgets with a platform agnostic api, you could develop high level aQuery widgets like “video player” that knew how to control and adapt many different video player apps across different platforms (youtube or vimeo in browser, vlc on windows or mac desktop, quicktime on mac, windows media player on windows, etc). Then you can build much higher level apps out of widgets like that.
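That wrapper pattern can be sketched as a tiny adapter layer; the backends and the strings they return here are purely hypothetical stand-ins for driving the real apps via accessibility APIs or pixel matching:

```python
class VideoPlayer:
    """Platform-agnostic widget API, like a jQuery widget wrapping natives."""
    def play(self):
        raise NotImplementedError

class VLCAdapter(VideoPlayer):
    # Hypothetical: would drive the desktop VLC app via scripting/AX APIs.
    def play(self):
        return "vlc: sent play command"

class YouTubeAdapter(VideoPlayer):
    # Hypothetical: would click the in-page player button via DOM or pixels.
    def play(self):
        return "youtube: clicked play button"

def build_player(platform):
    """Higher-level code asks for 'a video player' and gets whatever's there."""
    return {"desktop": VLCAdapter, "browser": YouTubeAdapter}[platform]()

print(build_player("desktop").play())   # → vlc: sent play command
print(build_player("browser").play())   # → youtube: clicked play button
```

The high-level app only ever sees the VideoPlayer interface, so swapping VLC for QuickTime or YouTube for Vimeo is a registry change, not an app change.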
Target aware pointing is one of many great techniques he shows can be layered on top of existing interfaces, without modifying them.
I’d like to integrate all those capabilities plus the native Accessibility API of each platform into a JavaScript engine, and write jQuery-like selectors for recognizing patterns of pixels and widgets, creating aQuery widgets that tracked input, drew overlays, implemented text to speech and voice control interfaces, etc.
His research statement sums up where it’s leading: Imagine wikipedia for sharing gui mods!
Berkeley Systems (the flying toaster screen saver company) made one of the first screen readers for the Mac in 1989 and Windows in 1994.
https://en.wikipedia.org/wiki/OutSpoken
Richard Potter, Ben Shneiderman and Ben Bederson wrote a paper called Pixel Data Access for End-User Programming and Graphical Macros, that references a lot of earlier work.
The integration with Sketch/SketchUp was done via AXAPI, using https://github.com/robrix/Haxcessibility with some small extensions to scan the top menu. So yes, for those apps there’s no modification; the setup so far has been done by curated plist/icon assets contained within my app, but in theory this could be made user-editable and leverage icons contained within the app bundles.
Initially I started with Illustrator via an API (and dropped it due to API bugginess). And while the website still shows Photoshop support, I dropped it because it used another buggy network-based API. The lesson learned is that on the desktop, accessibility hacks proved to be much easier, but not entirely perfect – occasionally there are still times when AXAPI seemingly decides not to cooperate and rejects requests until you switch away from the window and back.
Off the top of my head, a couple other apps in the space of leveraging macOS AXAPI would be Shortcat ( https://shortcatapp.com/ ), BetterTouchTool (I think it does some AXAPI traversing: https://folivora.ai/ ), and Ghostnote ( https://www.ghostnoteapp.com/ )
Prefab: Wow, took a look at https://www.youtube.com/watch?v=w4S5ZtnaUKE and got enthused enough to make a quick montage of their work ( https://twitter.com/vivekgani/status/997727951583035393 ). Have definitely had some similar thoughts but never thought to dig through HCI papers to see existing work in the space.
aQuery: Will have to look into this idea more. It sounds doable, though I’d be curious whether it could also be structured to allow for subscribing to state changes (e.g. a button going from enabled to disabled). I find myself still debating whether to go down a more ‘pure’ path toward a whole environment where things are designed to be altered (Dynamicland, Smalltalk – https://youtu.be/AnrlSqtpOkw?t=10m41s )
Wow, this is wonderful stuff and pieces of the puzzle I never heard about!
Another puzzle piece that I’m interested in, that’s useful for live performance, video manipulation, screen scraping, image processing, pattern matching and vision based approaches like Prefab, is the ability to efficiently send images between apps running on the same system, via the GPU, without any performance penalty of copying between GPU and system memory (or even GPU and GPU memory unnecessarily)!
Integrating uncooperative apps and libraries with virtual webcams and virtual framebuffers is also very useful, and can bridge many gaps between existing monolithic apps you can’t otherwise modify or extend with plugins.
There is a library for the Mac called Syphon, and a similar one for Windows called Spout, that’s used by the VJ community and other people to efficiently integrate many programs like Max/MSP/Jitter, Pure Data, Quartz Composer, Processing, Unity3D, AfterEffects, VirtualDJ, FreeFrameGL, VLC, Java, Cinder, Vuo, Plask, Open Frameworks, Omnidome, Little Projection Mapping Tool, and even CEF (Chrome Embedding Framework), etc.
Syphon is an open source Mac OS X technology that allows applications to share frames - full frame rate video or stills - with one another in realtime. Now you can leverage the expressive power of a plethora of tools to mix, mash, edit, sample, texture-map, synthesize, and present your imagery using the best tool for each part of the job. Syphon gives you flexibility to break out of single-app solutions and mix creative applications to suit your needs.
http://syphon.v002.info/faq.php
https://github.com/Syphon/Syphon-Framework
Syphon is a Mac OS X technology to allow applications to share video and still images with one another in realtime, instantly.
It’s quite useful to have a web browser in the mix, so this is an essential ingredient:
https://github.com/vibber/CefWithSyphon
Syphon is built on top of the macOS “IOSurface” API:
https://developer.apple.com/documentation/iosurface
Web browsers like Chrome and Safari that split the browser into separate heavyweight processes – a user interface process and multiple rendering processes – also use IOSurface to efficiently share images in the GPU between processes.
iOS also supports IOSurface, and that’s how WKWebView works (which runs Safari in another hidden background process and embeds the browser in the foreground iOS app process). But you can’t multitask on iOS or run Max and AfterEffects, so it doesn’t make much sense for Syphon to support iOS. Syphon only makes sense on desktop computers.
IOSurface currently supports both OpenGL and Metal. But unfortunately Syphon is totally OpenGL based. So sadly you can’t use it with Unity3D if you need to use the Metal device drivers (which are vastly preferable to OpenGL).
https://shapeof.com/archives/2017/12/moving_to_metal_episode_iii.html
IOSurface is neat. A shared bitmap that can cross between programs, and it’s got a relatively easy API including two super critical functions named IOSurfaceLock and IOSurfaceUnlock. I mean, if you’re sharing the data across process boundaries then you’ll need to lock things so that the two apps don’t step on each other’s toes. But of course if you’re not sharing it across processes, then you can ignore those locks, right? Right?
And then a single file came back, named IOSurface2D.mm, which was some obscure sample code from Apple that I had received at one point a number of years ago.
As I said, the documentation is thin and examples are rare! But the Safari and Chrome web browser source code, bug reports and discussion group is a great source of cutting-edge information, because internal Apple developers are working on it who have access to secret information that’s not public, and know how to make it perform well.
I think it’s even possible for an OpenGL app and a Metal app to communicate with each other via IOSurface. (But Syphon doesn’t currently support that.)
The web browsers have moved on to using a newer higher level but mostly undocumented API, that I believe is built on top of IOSurface, called CARemoteLayerClient / CARemoteLayerServer.
https://developer.apple.com/documentation/quartzcore/caremotelayerclient
https://developer.apple.com/documentation/quartzcore/caremotelayerserver
Here are some comments on supporting it in Chrome:
https://bugs.chromium.org/p/chromium/issues/detail?id=102340
Safari must also be using something like IOSurfaces to make pixel data available across processes, so I’m guessing similar performance must be achievable in Chrome.
Safari uses private SPI to share surfaces, which has different behaviors than IOSurface, so things that Safari can do aren’t necessarily feasible in other browsers (which is why Invalidate Core Animation exists in the first place).
I’m not sure what advantages of using CARemoteLayerClient/Server are over IOSurface, or what “SPI” means exactly, but I think it has to do with this header file:
https://github.com/WebKit/webkit/blob/master/Source/WebCore/PAL/pal/spi/cocoa/IOSurfaceSPI.h
At any rate, there are currently two problems with Syphon, which is otherwise very wonderful but getting long in the tooth:
It’s not a cross platform API, so it doesn’t support Windows. (But there is Spout for that.)
It depends on OpenGL and doesn’t support Metal. Ideally it would support both.
Perhaps there is a good reason to do the same thing the web browsers are now doing and use CARemoteLayerClient/Server instead of IOSurface, but I don’t know the issues involved or why they switched, and the documentation is sparse.
On Windows, there’s a similar library called Spout that supports many different applications and libraries like Unity3D, OBS (Open Broadcaster Software), FreeframeGL, Java, Processing, Max/MSP/Jitter, Ableton Live, Virtual Webcam, OpenFrameworks, Cinder, etc:
https://cycling74.com/forums/syphon-recorder-equivalent-on-windows-or-other-tricks
Senders and receivers include FreeframeGL plugins, a Java interface for Processing, Jitter externals for Max/Msp, VIZZable modules for Ableton Live, and a Virtual Webcam as a universal receiver. There is also example code for creating your own applications with openFrameworks and Cinder. Now these applications running on Windows can share video with each other in a similar way to Syphon for OSX.
For compatible graphics hardware, OpenGL textures are shared by way of DirectX using the NVIDIA DirectX/Opengl interop extension. If hardware is not compatible, SPOUT provides a backup by way of CPU memory.
Ideally I’d love to have one glorious cross platform API that supports whatever transport (or transports) work best on each platform, and even transparently supports less efficient APIs like simply sending images over shared system memory, uncompressed or compressed over the network, WebRTC, etc., and also supports whatever graphics APIs you need to integrate any particular existing app.
Then the same library would let you integrate apps using different graphics APIs on the same system super-efficiently, and even Macs and Windows systems on the same network to communicate as efficiently as possible by streaming video.
Huh, I think I’m starting to get your vision - you want the hacks/piping work built to fling the existing surfaces of ‘personal computing’ to facilitate the other forms of computing (social, presentative, etc.) - one use i’ve also been thinking of has been ‘protected sharing’ - e.g. screencasts or recording where the user selects areas and text is magically blurred out.
FWIW, iOS is probably vastly different now but I remember that Adam Bell ( https://twitter.com/b3ll ) sorta played in the space of getting app framebuffers a while ago when jailbreaking was still easy - https://youtu.be/Ox09rWJTuCA?t=18m26s .
This kicks ass: The foot menu!
How does this compare to other menus when it’s keyboard driven (both menu opening and item selection)?
What about compared to some regexp search through the menu items?
One good thing that my horrible ActiveX pie menus did handle nicely was keyboard navigation, shown in the following demo.
Four and eight item pie menus work very well with the numeric keypad and arrow keys, as well as joysticks and d-pads. You can also move to the next/previous item around the circle with tab / shift-tab, and type the name of an item to jump to it directly, hit escape to cancel the whole menu tree, backspace to peel back one submenu layer, etc, just like normal menu keyboard navigation.
It’s great to be able to navigate the same memorable menu trees with many kinds of input devices!
Terrible Example: ActiveX Pie Menus
This demo of an old OLE/ActiveX implementation of pie menus in Internet Explorer shows the weakness and complexity of the traditional desktop GUI MFC based approach to creating and editing pie menus with typical property sheets, scrolling lists and tree editors. It also has a weird way of paging between large numbers of items in pages of eight or fewer items each, which I was never very happy with. And programming complex user interfaces with XML/C++/ActiveX/OLE/MFC/Win32 is hell on earth.
Did you do any experiments specifically with joystick input? Because mouse drivers are intended to produce actual cursor position deltas, even trackpoint-style mice produce more information than is used in a pie menu (and while warping can make up for overshooting, it can be confusing for the user unless done carefully). On the other hand, while Mass Effect never nested their pie menus, they essentially made use of one of the two controller sticks as a pure direction-input device with good radial resolution (better than a d-pad, which has only four directions & thus can defeat the point of a pie menu if there are more than 4 options).
Using a dual-stick video game controller for a pair of statically placed pie menus means simultaneous navigation of two menus can be fast. Using fans instead of pies, and orienting each fan so that its center is the center of the previous selection and its options are arrayed perpendicular to the swipe direction used to select it, could make it possible to navigate and back out of many layers of menu pretty easily without extra selection gestures (the one complaint I had about the pie menus in The Sims 1).
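Mapping a stick direction to a slice is essentially one atan2 call, plus a dead zone so a centered stick selects nothing. A sketch under my own conventions (0° = stick pushed up, slice 0 centered on up, hypothetical 0.3 dead-zone radius):

```python
import math

def stick_to_slice(x, y, n, dead_zone=0.3):
    """Map an analog stick vector to one of n pie slices,
    or None while the stick is inside the dead zone."""
    if math.hypot(x, y) < dead_zone:
        return None                      # stick centered: no selection
    angle = math.degrees(math.atan2(x, y)) % 360  # 0 degrees = pushed up
    # Offset by half a slice so slices are centered on their directions.
    return int(((angle + 180 / n) % 360) // (360 / n))

print(stick_to_slice(0.0, 1.0, 8))   # pushed up → slice 0
print(stick_to_slice(1.0, 0.0, 8))   # pushed right → slice 2
print(stick_to_slice(0.1, 0.1, 8))   # inside dead zone → None
```

A stick’s continuous angle gives far better radial resolution than a d-pad’s four or eight discrete directions, which is why the Mass Effect style works so well.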
No, I haven’t done any joystick experiments myself.
Before he started spreading FUD about marking menus, Bill Buxton wrote an excellent paper on input devices in 1983, called “Lexical and Pragmatic Considerations of Input Structures”, which categorizes input devices and how their unique properties map to the problem domain, and discusses a lot of important concepts like pragmatics, chunking and closure, device independence, taxonomy of devices, and the nulling problem:
https://www.billbuxton.com/lexical.html
lexical: issues having to do with spelling of tokens (i.e., the ordering of lexemes and the nature of the alphabet used - symbolic or iconic, for example).
pragmatic: issues of gesture, space and devices.
Figure 1: Taxonomy of Input Devices. https://www.billbuxton.com/lexical1.gif
Continuous manual input devices are categorized. The first order categorization is property sensed (rows) and number of dimensions (columns). Subrows distinguish between devices that have a mechanical intermediary (such as a stylus) between the hand and the sensing mechanism (indicated by “M”), and those which are touch sensitive (indicated by “T”). Subcolumns distinguish devices that use comparable motor control for their operation.
Since you mentioned fans, I should point out that Simon Schneegans, the same guy who did the excellent Gnome-Pie, also did these amazing spoke-like and fan-out pies for his bachelor’s thesis:
The Coral-Menu: https://vimeo.com/51072812
The Trace-Menu: https://vimeo.com/51073078
Those are great on so many levels!
You’re welcome! I’m glad you’re hungry for it. ;)
Enjoy some pizza: https://medium.com/@donhopkins/the-story-of-sun-microsystems-pizzatool-2a7992b4c797
And some more pie too: https://www.youtube.com/watch?v=Xj1LFYSO2Kw
Thanks!
Ignoring the thing it’s built on, that’s an interesting demo. I like that expert gesture-like mode. How well does it work in the end? Too bad this video doesn’t show keyboard or joystick use (or I missed it, having skipped most of the editor portion).
It was very snappy because it was directly calling the Win32 API to manipulate windows and handle events (well, going through some layers of OLE/MFC, but not as squishy as a web browser). So mouse-ahead was quite reliable since it wasn’t polling for events and letting some of them slip through the cracks when the computer slows down (i.e. always), like some systems do.
That was the great thing about NeWS: it had perfect, air tight, synchronous event handling, and never dropped one event on the floor, or sent a keystroke to the wrong window.
Even my modern MacBook Pro always gets overheated, slows down, and when I open a new tab I have to stand back, be patient, and wait for the vehicle to stop moving completely before embarking with the mouse and keyboard.
On Medium, I keep typing cmd-f to search before the page has settled down, and the first few characters of my search string that I type after cmd-f get inserted into the document somewhere at random, then the rest go into the search field! So I have to type cmd-f, sit back, take a breath, wait for a while, then type what I want to search for. Emacs on a 300 baud terminal was better than that!
But I really hurt for an easy to use drawing API like cairo or canvas (i.e. I’m an old PostScript programmer, and it’s a huge pain in the ass carefully selecting resources in and out of the win32 GDI context absolutely perfectly every time without making the slightest mistake or else GDI memory would overflow and Windows would freeze up and the reset button would not work so you had to remove the laptop’s battery to reboot).
But then again it was even worse programming on a Mac without any memory protection whatsoever! Those old Macs tended to freeze up instantly whenever you made the slightest programming mistake, instead of slowly grinding to a halt while glitching out the screen like Windows. To their credit, the PowerPC Macs could crash really really really quickly! You instantly knew when to reach for the “programmer’s switch”.
If these PIE menus are so awesome, why can’t I just use them?
In Android, there was a patch in the (now discontinued) Paranoid Android fork, later respawned as SlimPIE but retired due to maintenance problems on newer Android versions. There are some overlay-like applications for that, but they all suffer from minor problems and feel like extra additions that aren’t well integrated with the rest of the OS.
Using such a thing as a context menu in GTK or Qt would probably be possible (on X.org at least, with the XShape extension), but there are too many places where people assumed that menus come in the form of a rectangle (both in third-party apps and in the libraries themselves), so you would probably need to rewrite too many LOCs.
Not to mention WinNT, which moved scrollbar rendering code into kernel space because “it’s faster lol”.
I don’t know too much about MacOS GUI rendering mechanics, but they already made a great improvement over standard UIs with that global menu bar, which is also pretty hard to reimplement in any other environment without breaking stuff.
I’ll be really glad to see the software world migrate to more ergonomic UI/UX concepts instead of flattening the world for fun and profit (I mean, reducing the number of skilled graphic designers needed), even if they use some hybrid concepts instead of full PIE, for example “trees” or regular menus expanding from root PIE items. This particular concept was well presented in the Sword Art Online anime, which also kind of predicted the UX paradigms for holographic/floating interfaces:
Like most widget types that are poorly supported or unsupported in popular GUI toolkits, I see pie menus almost exclusively in games – since people writing games have the expectation that they’ll need to essentially roll their own GUI toolkit in order to make it sufficiently themable anyhow.
It’s an unfortunate state of affairs: widget functionality, once you’ve got past the initial learning curve, is proportional to the degree to which the widget behavior matches the user’s internal mental model of the task, and so expressive UI elements have the ability to make real work much more efficient; limiting the use of good GUIs to video game menus means UI design is limited in its capacity to make anything but our leisure time easier.
(As a side note: SAO is a bad example of UI design in anime – the titular game is, on many levels, not professional enough to make it to market, and the way the menus are laid out is no exception. Geoff Thew explains this in detail in one of his video essays. The way SAO uses pie menus is sort of a shallow copy of how pie menus have been used in games for the past decade, as understood by someone who has never played a video game. A good example of an interesting use of pie menus in a real game is the conversation system in Mass Effect – where segments have their size changed in order to make certain responses more likely, and where pie areas correspond to emotions or tactics.)
Great points!
I wrote about “Ersatz Pie Menus” in this additional article, “Pie Menu FUD and Misconceptions: Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.”
Ersatz Pie Menus
Richard Stallman likes to classify an Emacs-like text editor that totally misses the point of Emacs by not having an extension language as an “Ersatz Emacs”.
In the same sense, there are many “Ersatz Pie Menus” that may look like pie menus on the surface, but don’t actually track or feel like pie menus, or benefit from all of their advantages. That’s because they aren’t carefully designed and implemented to optimize for Fitts’s Law: basing selection purely on the direction between the stroke endpoints instead of the entire path, minimizing the distance to the targets, and maximizing the size of the targets.
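As a back-of-the-envelope illustration of why small distances and big targets matter, here is a rough Python sketch of the Shannon formulation of Fitts’s Law index of difficulty, ID = log2(D/W + 1). The pixel distances and widths below are made-up numbers for illustration, not measurements from any real menu:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts's Law index of difficulty, in bits.

    Lower ID means an easier, faster target to hit.
    """
    return math.log2(distance / width + 1)

# A pie slice: every item sits adjacent to the cursor, and the slice
# widens as you move outward, so D stays small and W stays large.
pie = fitts_id(distance=40, width=60)

# A linear menu: the 8th item is eight rows away and only one row tall.
linear = fitts_id(distance=8 * 25, width=25)

print(f"pie slice ID:   {pie:.2f} bits")
print(f"linear item ID: {linear:.2f} bits")
```

With these illustrative numbers the pie slice scores well under one bit of difficulty while the distant linear item scores over three, which is the quantitative version of “minimize the distance, maximize the size”.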
Microsoft Surface Dial: Someone on Hacker News asked me: Any thoughts on Microsoft’s Surface Dial radial menu?
Good question — glad you asked! (No, really! ;) Turning a dial is a totally different gesture than making directional strokes, so they are different beasts, and a dial lacks the advantages pie menus derive from exploiting Fitts’s Law. […]
Beautiful but Ersatz Pie Menu Example – the graphics are wonderful but the tracking is all wrong: http://pmg.softwaretailoring.net/
Turning Is Not Like Stroking: In terms of “Big O Notation”, pull down menus, click wheels, and carousel selection are all linear O(n), while with a pie menu you only have to perform one short directional gesture to select any item, so selection is constant O(1) (with a small constant: the inner inactive radius of the hole in the middle, which you can make larger if you’re a spaz).
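Here is a minimal Python sketch of what that constant-time, direction-only tracking could look like, assuming screen coordinates with y growing downward, item 0 at the top, and a hypothetical dead_radius parameter for the inactive hole in the middle. Only the stroke endpoints matter, never the path between them:

```python
import math

def pie_select(press, release, n_items, dead_radius=8.0):
    """Map a stroke to a pie menu item by direction alone.

    Only the two endpoints matter -- the path between them is ignored --
    so selection is O(1) no matter how many items the menu has.
    Returns None if the stroke ends inside the inactive center.
    """
    dx = release[0] - press[0]
    dy = release[1] - press[1]
    if math.hypot(dx, dy) < dead_radius:
        return None  # still inside the hole in the middle: no selection
    # Angle 0 points up (north), increasing clockwise in screen coords.
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    slice_width = 360.0 / n_items
    # Slices are centered on their directions, so shift by half a slice.
    return int((angle + slice_width / 2.0) // slice_width) % n_items

# An upward flick selects item 0 of 4; a rightward flick selects item 1.
print(pie_select((100, 100), (100, 40), 4))   # 0 (north)
print(pie_select((100, 100), (160, 100), 4))  # 1 (east)
print(pie_select((100, 100), (102, 101), 4))  # None (too short)
```

The dead zone is the “small constant” mentioned above: strokes shorter than dead_radius are treated as no selection, which is also what makes it safe to pop the menu up centered on the cursor.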
Yucky Pie Menus Recipes
Bedazzling and Confusing Graphics and Animations […]
Rectangular Label Targets Instead of Wedge Shaped Slice Targets […]
Triggering Items and Submenus on Cursor Motion Distance Instead of Clicking […]
Not Starting Pie Menus Centered on the Cursor […]
Improperly Handling Screen Edges […]
Improperly Handling Mouse-Ahead Display Preemption and Quick Gestures on Busy Computers […]
Yummy Pie Menu Recipes
I’m certainly not saying that pie menus should never be graphically slick or have lots of cool animations. Just that they should be thoughtfully designed and purposefully easy to use first, so they deeply benefit users from Fitts’s Law, instead of just trying to impress users with shallow useless surface features.
Spectacular Example: Simon Schneegans’ Gnome-Pie, the slick application launcher for Linux
I can’t overstate how much I like this. Not only is it slick, beautiful, and elegantly animated, but it’s properly well designed in all the important ways that make it Fitts’s Law Friendly and easy to use, and totally deeply customizable by normal users! It’s a spectacularly useful tour-de-force that Linux desktop users can personalize to their heart’s content.
Gnome-Pie — Simon Schneegans
Homepage of Gnome-Pie, the slick application launcher for Linux. simmesimme.github.io
Gnome-Pie is a slick application launcher which I’m creating for Linux. It’s eye candy and pretty fun to work with. It offers multiple ways to improve your desktop experience.
Check out the project’s homepage @ http://gnome-pie.simonschneegans.de
I saw. You had a lot of interesting stuff on your Medium account.
(Are you likely to post more on HyperTIES? As a Xanadu-er & someone interested in the history of pre-web hypertext systems, I find it interesting, since it has a pretty distinct look & feel and seems like it might have more interesting UI ideas to copy. The ‘pop-out’ mechanism for linked areas in images interested me when I saw it mentioned.)
One of the good ideas was that every article had a brief definition, which it would show to you the first time you clicked a link, without leaving where you were. That’s a feature I wish was universally supported by the web, so you didn’t have to leave your current page to find out something about the link before following it.
You could then click again (or click on the definition), or pop up a pie menu to open the link in the current or the other window. Also you could turn pages and navigate with the pie menus, swiping in obvious directions, like with an iPad, but having the pie menu to provide a visual affordance of which gestures are available (“self revealing” gestures).
Here are a couple of articles about HyperTIES that I haven’t moved to Medium yet:
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland. http://www.donhopkins.com/drupal/node/102
Abstract: Since browsing hypertext can present a formidable cognitive challenge, user interface design plays a major role in determining acceptability. In the Unix workstation version of Hyperties, a research-oriented prototype, we focussed on design features that facilitate browsing. We first give a general overview of Hyperties and its markup language. Customizable documents can be generated by the conditional text feature that enables dynamic and selective display of text and graphics. In addition we present:
an innovative solution to link identification: pop-out graphical buttons of arbitrary shape.
application of pie menus to permit low cognitive load actions that reduce the distraction of common actions, such as page turning or window selection.
multiple window selection strategies that reduce clutter and housekeeping effort. We preferred piles-of-tiles, in which standard-sized windows were arranged in a consistent pattern on the display and actions could be done rapidly, allowing users to concentrate on the contents.
Pie menus to permit low cognitive load actions: To avoid distraction of common operations such as page turning or window selection, pie menus were used to provide gestural input. This rapid technique avoids the annoyance of moving the mouse or the cursor to stationary menu items at the top or bottom of the screen.
HyperTIES Hypermedia Browser and Emacs Authoring Tool for NeWS. http://www.donhopkins.com/drupal/node/101
That has a screen dump, an architectural diagram, a list of interesting features, data structures, and links to the C, Forth, PostScript, Emacs MockLisp, and HyperTIES markup language source code!
Here’s a demo of HyperTIES and the pop-out embedded menus:
HCIL Demo - HyperTIES Browsing: Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
https://www.youtube.com/watch?v=fZi4gUjaGAM
A funny story about the demo that has the photo of the three Sun founders whose heads puff up when you point at them:
When you pointed at a head, it would swell up, and when you pressed the button, it would shrink back down until you released the button again.
HyperTIES had a feature where you could click or press and hold on the page background, and it would blink or highlight ALL of the links on the page, either by inverting the brightness of text buttons, or by popping up all the cookie-cut-out picture targets (we called them “embedded menus”) at the same time, which could be quite dramatic with the three Sun founders!
Kind of like what they call “Big Head Mode” these days! https://www.giantbomb.com/big-head-mode/3015-403/
I had a Sun workstation set up on the show floor at Educom in October 1988, and I was giving a rotating demo of NeWS, pie menus, Emacs, and HyperTIES to anyone who happened to walk by. (That was when Steve Jobs came by, saw the demo, and jumped up and down shouting “That sucks! That sucks! Wow, that’s neat. That sucks!”)
The best part of the demo was when I demonstrated popping up all the heads of the Sun founders at once, by holding the optical mouse up to my mouth, and blowing and sucking into the mouse while secretly pressing and releasing the button, so it looked like I was inflating their heads!
One other weird guy hung around through a couple of demos, and by the time I got back around to the Emacs demo, he finally said “Hey, I used to use Emacs on ITS!” I said “Wow cool! So did I! What was your user name?” and he said “WNJ”.
It turns out that I had been giving an Emacs demo to Bill Joy all that time, then popping his head up and down by blowing and sucking into a Sun optical mouse, without even recognizing him, because he had shaved his beard!
He really blindsided me with that comment about using Emacs, because I always thought he was more of a vi guy. ;)
Accidentally stumbled upon the Xanadu-esque side of this conversation. Just to throw in a thought, it’s been on my mind to attempt applying a high level of polish to the federated wiki project such that it facilitated a way to zoom in on reading one thing at a time. So... a ‘big head mode’ of sorts to let you focus on consuming (or editing) an article when you needed to, then zoom back out to see the connections.
The other crazy thought on my mind is seeing if there’s a way to bend existing oss text editors (particularly atom.io) to facilitate more freeform things like ‘code bubbles’ or liquidtext.
I just added this example to the article, which you may have missed (since it wasn’t there until a few minutes ago):
Spectacular Example: Simon Schneegans’ Gnome-Pie, the slick application launcher for Linux
I can’t overstate how much I like this. Not only is it slick, beautiful, and elegantly animated, but it’s properly well designed in all the important ways that make it Fitts’s Law Friendly and easy to use, and totally deeply customizable by normal users! It’s a spectacularly useful tour-de-force that Linux desktop users can personalize to their heart’s content.
Gnome-Pie - Simon Schneegans
Homepage of Gnome-Pie, the slick application launcher for Linux. simmesimme.github.io
Gnome-Pie is a slick application launcher which I’m creating for Linux. It’s eye candy and pretty fun to work with. It offers multiple ways to improve your desktop experience.
Check out the project’s homepage @ http://gnome-pie.simonschneegans.de
If that doesn’t blow your mind, check this out – there are so many great things about it:
I used a great pie interface on Android for a while, though I cannot remember what it was called and am failing to find screenshots online. It helped considerably with one-handed operation.
Imagine how safe democracy would be if only voting machines used pie menus:
https://medium.com/@donhopkins/dumbold-voting-machine-for-the-sims-1-3e76f394452c
No mention of Neverwinter Nights? As a game dev I am disappointed. That game had nested pie menus for almost all in-game interaction and it worked great. A lot of people complained and said they didn’t like them, but it was mostly just unfamiliarity. Once you got used to them they worked great.
I think an important goal is to not only give users the tools to create their own pie menus, but to design tools that support and motivate users to intuitively understand Fitts’s Law, and how to design good pie menus for themselves.
A recent game that lets players create their own radial menus, in a way that deeply improves game play, is Monster Hunter: World!
Awesome Example: Monster Hunter: World — Radial Menu Guide. Monster Hunter: World is a wonderful example of a game that enables and motivates players to create their own pie menus, and it shows how important customizable user defined pie menus are in games and tools.
Want access to all your items, ammo and gestures at your fingertips? Here’s a quick guide on the Radial Menu.
With a multitude of items available, it can be challenging to find the exact one you need in the heat of the battle. Thankfully, we’ve got you covered. Here’s a guide on radial menus, and how to use them: The radial menu allows you to use items at a flick of the right stick. There are four menus offering access to eight items each, and you can fully customize them, all to your heart’s content. Radial menus are not just limited to item use, however. You can use them to craft items, shoot SOS flares, and even use communication features such as stickers and gestures.
Somebody raised an interesting point on reddit about patent trolls:
BobTheSCV> Neverwinter Nights also implemented them, and they worked very well. I had just assumed it was patent trolls or something that kept them from being widely adopted.
I replied:
You are absolutely correct about the patent trolls!
Bill Buxton at Alias and his marketing team spread a bunch of inaccurate FUD about their “marking menu patent”, which I accidentally discovered and tried to correct and get him to stop doing decades ago, but he refused, and continued to spread FUD.
So Alias kept advertising their “patented marking menus” for DECADES, purposefully and successfully discouraging their competitor 3D Studio Max, and many other developers of free and proprietary apps as collateral damage, from adopting them.
When I asked Buxton about the “marking menu patent” before it was granted, he lied point blank to me that there was no “marking menu patent”, so I couldn’t prove to Kinetix that it was OK to use them, or contact the patent office and inform them about the mistakes in their claims about prior art, and the fact that the “overflow” technique they were claiming in the patent was obvious.
The whole story is here:
Pie Menu FUD and Misconceptions: Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.
https://medium.com/@donhopkins/pie-menu-fud-and-misconceptions-be8afc49d870
Some excerpts:
There is a financial and institutional incentive to be lazy about researching and less than honest in reporting and describing prior art, in the hopes that it will slip by the patent examiners, which it very often does.
Unfortunately they were able to successfully deceive the patent reviewers, even though the patent references the Dr. Dobb’s Journal article which clearly describes how pie menu selection and mouse ahead work, contradicting the incorrect claims in the patent. It’s sad that this kind of deception and patent trolling is all too common in the industry, and it causes so many problems.
Even today, long after the patent has expired, Autodesk marketing brochures continue to spread FUD to scare other people away from using marking menus, by bragging that “Patented marking menus let you use context-sensitive gestures to select commands.”
A snapshot of Alias’s claim about “Patented marking menus” from one of their brochures that they are still distributing, even years after their bad patent has expired:
https://cdn-images-1.medium.com/max/450/1*3C79dFnlhN__OJ3XmEjN9A.png
“Marking Menus: Quickly select commands without looking away from the design. Patented marking menus let you use context-sensitive gestures to select commands.”
http://images.autodesk.com/adsk/files/aliasdesign10_detail_bro_us.pdf
The Long Tail Consequences of Bad Patents and FUD
I attended the Computer Game Developers Conference in the late 90s, while I was working at Maxis on The Sims. Since we were using 3D Studio Max, I stopped by the Kinetix booth on the trade show floor and asked them for some advice on integrating my existing ActiveX pie menus into their 3D editing tool.
They told me that Alias had “marking menus” which were like pie menus, and that Kinetix’s customers had been requesting that feature, but since Alias had patented marking menus, they were afraid to use pie menus or anything resembling them for fear of being sued for patent infringement.
I told them that sounded like bullshit since there was plenty of prior art, so Alias couldn’t get a legitimate patent on “marking menus”.
The guy from Kinetix told me that if I didn’t believe him, I should walk across the aisle and ask the people at the Alias booth. So I did.
When I asked one of the Alias sales people if their “marking menus” were patented, he immediately blurted out “of course they are!” So I showed him the ActiveX pie menus on my laptop, and told him that I needed to get in touch with their legal department because they had patented something that I had been working on for many years, and had used in several published products, including The Sims, and I didn’t want them to sue me or EA for patent infringement.
When I tried to pin down the Alias marketing representative about what exactly it was that Alias had patented, he started weaseling and changing his story several times. He finally told me that Bill Buxton was the one who invented marking menus, that he was the one behind the patent, that he was the senior user interface researcher at SGI/Alias, and that I should talk to him. He never mentioned Gordon Kurtenbach, only Bill Buxton.
So I called Bill Buxton at Alias, who stonewalled and claimed that there was no patent on marking menus (which turned out to be false, because he was just being coy for legal reasons). He said he felt insulted that I would think he would patent something that we both knew very well was covered by prior art.
At the time I didn’t know the term, but that’s what we now call “gaslighting”: https://en.wikipedia.org/wiki/Gaslighting
Gee, who do we all know who lies and then tries to turn it all around to blame the person who they bullied, and then tries to play the victim themselves? https://en.wikipedia.org/wiki/Donald_Trump
Gordon Kurtenbach, who did the work and got the patent that Alias marketing people were bragging about in Bill Buxton’s name, agrees:
Gordon> Don, I read and understand your sequence of events. Thanks. It sounds like it was super frustrating, to put it mildly. Also, I know, having read dozens of patents, that patents are the most obtuse and maddening things to read. And yes, the patent lawyers will make the claims as broad as the patent office will allow. So you were right to be concerned. Clearly, marketing is marketing, and they love to say imprecise things like “patented marking menus”.
Gordon> At the time Bill or I could have said to you “off the record, it’s ok, just don’t use the radial/linear combo”. I think this was what Bill was trying to say when he said “there’s no patent on marking menus”. That was factually true. However, given that Max was the main rival, we didn’t want to do them any favors. So those were the circumstances that led to those events.
What’s ironic is that Autodesk now owns both Alias and 3D Studio Max. Gordon confirmed that Alias’s FUD did indeed discourage Kinetix from implementing marking menus or pie menus, which were not actually covered by the patent:
Gordon> After Autodesk acquired Alias, I talked to the manager who was interested in getting pie menus into Max. Yes, he said that the Alias patents discouraged them from implementing pie menus, but they didn’t understand the patents in any detail. Had you at the time said “as long as we don’t use the overflows we are not infringing”, that would have been fine. I remember at the time thinking “they never read the patent claims”.
Don> The 3D Studio Max developers heard about the Alias marking menu patent from Alias marketing long before I heard of it from them on the trade show floor.
Don> The reason I didn’t know the patent only covered overflows was that I had never seen the patent, of course. And when I asked Buxton about it, he lied to me that “there is no marking menu patent”. He was trying to be coy by pretending he didn’t understand which patent I was talking about, but his intent was to deceive and obfuscate in order to do as much harm to Kinetix 3D Studio Max users as possible, and unfortunately he succeeded at his unethical goal.
What’s even worse is that in Buxton’s zeal to attack 3D Studio Max users, he also attacked users of free software tools like Blender.
The Alias Marking Menu Patent Discouraged the Open Source Blender Community from Using Pie Menus for Decades
Here is another example of how that long-term marketing FUD succeeded in holding back progress: the Blender community was discussing when the marking menu patent would expire, in anticipation of when they might finally be able to use marking menus in Blender (even though it has always been fine to use pie menus).
https://blenderartists.org/t/when-will-marking-menu-patent-expire/618541
As the following discussion shows, there is a lot of purposefully sown confusion and misunderstanding about the difference between marking menus and pie menus, and about what exactly is patented, because of the inconsistent and inaccurate definitions and mistakes in the papers, the patents, and Alias’s marketing FUD:
“Hi. In a recently closed topic regarding pie menus, LiquidApe said that marking menus are a patent of Autodesk, a patent that would expire shortly. The question is: When ? When could marking menus be usable in Blender ? I couldn’t find any info on internet, mabie some of you know.”
The good news: Decades late due to patents and FUD, pie menus have finally come to 3D Studio Max just recently (January 2018)!
Radially - Pie menu editor for 3ds Max: https://www.youtube.com/watch?v=sjLYmobb8vI
Seriously? Where is emacs?
It’s still recovering from Kyle Machulis’s loving.
https://www.youtube.com/watch?v=D1sXuHnf_lo
Hold up! This is a gallery of IDEs, not OSs :)