I’d hoped to see them compared for day-to-day usage. Is there a reason I should choose one over the other?
I similarly skimmed the article hoping for that. I’ve been a fan of Middleman for ages, but made my personal blog use Wintersmith since it seemingly would let me be a bit more freeform in having different templates and organizing each post in its own folder.
Haven’t read the paper entirely, but enjoyed the walkthrough of design decision references and the list of other implementations. You may also want to share your work to https://twitter.com/clausatz who’s trying to make a museum of xanadu/zigzag implementations.
(fenfire was one of the projects that still makes me think of where zigzag could go, https://twitter.com/i/web/status/1012438773253263361 )
Thanks for pointing me to this guy! It looks like he might already be in touch with the team but I’ve sent him some info anyhow.
(Internal Xanadu documentation, particularly of the timeline of unreleased implementations, is messy and piecemeal, so even if he’s in touch with Ted that doesn’t mean he’s actually getting a clear picture of the timeline.)
I would love to use my smartphone less. I laze around on it in the morning and at night. I wish there was something that autodisabled my phone when I’m on my bed.
That all said, the smartness of a smartphone is invaluable for me. I use it to look up directions, change plans on the fly, learn about artists when I see artwork, and read in the subway. I need a humane smartphone, instead of one filled with apps that aim to colonize my mind.
I sometimes save time by eating out. Quick meals at home are cheaper and less time-consuming than going somewhere, ordering, waiting for the food, eating, and coming back. But anything more involved than oatmeal is hard for someone living alone. It takes lots of time to organize recipes, get the ingredients, and cook. You have to choose between eating lots of leftovers or spending even more time per meal. From a time point of view, it can be sensible to eat out.
Regarding smartphone usage, I agree I can’t see myself switching away from a device that has substantial daily utility. For starters you can play with blocking websites. I felt like my productivity tanked when switching from Android to iOS, yet only recently realized you can get the same effect as editing the hosts file via Settings > Restrictions > Websites > Limit Adult Content, then adding some ‘Never Allow’ sites.
Regarding cooking, I feel like that’s a mindset shift. I went from wanting food to be automated to having a process of cooking some routine dishes (tacos, smoothies, pizzas from scratch, rice/quinoa stir-fry) such that I pretty much know what ingredients to keep around. Time-wise, things can be mixed too: stretching while the eggs are cooking, doing a bodyweight workout while the oven is heating up or the sauce is reducing. I live alone and travel pretty regularly; I just try to plan ahead and, in the rare case, give veggies/fruit I can’t use to a neighbor.
Thanks! I didn’t know you could do hosts blocking on iOS. I hadn’t thought about mixing cooking with exercise like that.
I started making avocado toast as a quick foray into cooking. Then I learned a mouse has been eating my bread. So now I’m blocked on a mouse trap getting shipped over. What a life, man.
Cool to hear! FWIW, I find myself keeping sliced bread in the freezer then toasting it since I eat a loaf fairly slowly (over the course of a few weeks).
My big reasons for not going back to a feature phone are maps and particularly in-car maps (I use Android Auto, and don’t need a separate sat nav as a result). But other than that it just makes me waste time on places like twitter and lobsters. :)
I’m the same. By way of a halfway approach, I use a OnePlus 5T with LineageOS for microG. There are cheaper, compatible options out there, but I plan to keep this phone for a good few years.
Using LineageOS for microG means I’m not signed into Google services, but I still have access to Maps and Signal if I want them. The F-Droid app store is a lot lighter than Google Play. I haven’t tried Android Auto though. It might be worth a look, although it might not fully meet every use case.
My hopes are for Light Phone 2 now. https://www.indiegogo.com/projects/light-phone-2-design I pre-ordered one, and basic messaging + navigation would be perfect.
Kinesis Advantage. I’ve been using them for almost twenty years, and other than some basic remapping, I don’t customize.
Also Kinesis Advantage for over a decade. On the hardware side I’ve only mapped ESC to where Caps Lock would be. On the OS side I’ve got a customized version of US Dvorak with the Scandinavian alphabet.
I’d like to try a Maltron 3D keyboard with integrated trackball mouse. It’s got better function keys too, and a numpad in the middle where there’s nothing except LEDs on the Kinesis.
Me too. I remap a few keys like the largely useless caps-lock and otherwise I don’t program it at all. It made my wrist pain disappear within a couple weeks of usage though.
My only “problem” with the Kinesis, and it’s not even my problem, was that the office complained about the volume of the clicks while I was on a call taking notes.
So I switch between the Kinesis and an Apple or Logitech Bluetooth keyboard for those occasions.
Yeah, it’s not that click, it’s the other one from the switches :-)
I can be a heavy typist, and for whatever reason these keys stand out to others behind the microphone more than I expected.
I prefer the Kinesis Freestyle2. I like the ability to move the two halves farther apart (broad shoulders), and the tilt has done wonders for my RSI issues.
Similar; largely I like that I can put the Magic Trackpad between the two halves and have something that feels comparable to using the laptop keyboard. I got rid of my mouse years ago, but I’m fairly biased toward a trackpad’s potential.
I’ve sometimes thought about buying a microsoft folding keyboard and cutting/rewiring it to serve as a portable setup. Have also thought of making a modified version of the nyquist keyboard to be a bit less ‘minimal’ - https://twitter.com/vivekgani/status/939823701804982273
…and remember the pac-man rule when talking in the hallway: http://ericholscher.com/blog/2017/aug/2/pacman-rule-conferences/
That’s really neat. I haven’t ever thought about it that way. The default way my friends and I have done it in the past is standing side by side so people can walk up to us directly. Thinking on it now, the Pac-Man setup might work better by letting them ease in instead of walking directly toward us, which bothers some people who are still worth having around. Might try it some time.
Yeah, I’d guess try whatever works; it may vary depending on whether it’s a local meetup with people you mostly know versus a larger conference. I learned about the Pac-Man thing from Liz Marley, who gave a lightning talk about it at Seattle Xcoders a while back.
There was decent discussion on this from Jonathan Edwards’ initial tweet about it last year: https://twitter.com/jonathoda/status/859106006504177664
Swear there was also a related HN discussion but can’t find it.
It’s the 30 year anniversary of CHI’88 (May 15–19, 1988), where Jack Callahan, Ben Shneiderman, Mark Weiser and I (Don Hopkins) presented our paper “An Empirical Comparison of Pie vs. Linear Menus”. We found pie menus to be about 15% faster and with a significantly lower error rate than linear menus!
So I’ve written up a 30 year retrospective:
This article will discuss the history of what’s happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.
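The core mechanic behind those speed numbers is that selection depends only on the direction of movement from the menu center, so every item is the same short gesture away. As a rough illustration (my own sketch, not code from the paper), mapping a pointer offset to a slice index:

```python
import math

def pie_slice(dx, dy, n_slices):
    """Return the index of the pie slice containing the pointer.

    (dx, dy) is the pointer offset from the menu center, in screen
    coordinates (+y points down). Slice 0 is centered straight up and
    indices increase clockwise, a common pie menu convention.
    """
    angle = math.atan2(dx, -dy)           # 0 = straight up, clockwise positive
    width = 2 * math.pi / n_slices
    # Shift by half a slice so slice 0 straddles "straight up".
    return int((angle + width / 2) % (2 * math.pi) // width)
```

With four slices this gives up=0, right=1, down=2, left=3; note that distance from the center never matters, only direction, which is why the gesture can be done quickly and even blindly.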
Here is the main article, and some other related articles:
Pie Menus: A 30 Year Retrospective. By Don Hopkins, Ground Up Software, May 15, 2018. Take a Look and Feel Free!
https://medium.com/@donhopkins/pie-menus-936fed383ff1
This is the paper we presented 30 years ago at CHI’88:
An Empirical Comparison of Pie vs. Linear Menus. Jack Callahan, Don Hopkins, Mark Weiser and Ben Shneiderman. Computer Science Department, University of Maryland, College Park, Maryland 20742; Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303. Presented at ACM CHI’88 Conference, Washington DC, 1988.
https://medium.com/@donhopkins/an-empirical-comparison-of-pie-vs-linear-menus-466c6fdbba4b
Open Sourcing SimCity. Excerpt from pages 289–293 of “Play Design”, a dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science by Chaim Gingold.
https://medium.com/@donhopkins/open-sourcing-simcity-58470a275446
Recommendation Letter for Krystian Samp’s Thesis: The Design and Evaluation of Graphical Radial Menus. I am writing this letter to enthusiastically recommend that you consider Krystian Samp’s thesis, “The Design and Evaluation of Graphical Radial Menus”, for the ACM Doctoral Dissertation Award.
https://medium.com/@donhopkins/don-hopkins-october-31-2012-e0166ec3a26c
Constructionist Educational Open Source SimCity. Illustrated and edited transcript of the YouTube video playlist: HAR 2009: Lightning talks Friday. Videos of the talk at the end.
How to Choose with Pie Menus — March 1988.
https://medium.com/@donhopkins/how-to-choose-with-pie-menus-march-1988-2519c095ba59
BAYCHI October Meeting Report: Natural Selection: The Evolution of Pie Menus, October 13, 1998.
https://medium.com/@donhopkins/baychi-october-meeting-report-93b8e40aa600
The Sims Pie Menus. The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo.
https://medium.com/@donhopkins/the-sims-pie-menus-49ca02a74da3
The Design and Implementation of Pie Menus. They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991.
https://medium.com/@donhopkins/the-design-and-implementation-of-pie-menus-80db1e1b5293
Gesture Space.
https://medium.com/@donhopkins/gesture-space-842e3cdc7102
Empowered Pie Menu Performance at CHI’90, and Other Weird Stuff. A live performance of pie menus, the PSIBER Space Deck and the Pseudo Scientific Visualizer at the CHI’90 Empowered show. And other weird stuff inspired by Craig Hubley’s sound advice and vision that it’s possible to empower every user to play around and be an artist with their computer.
OLPC Sugar Pie Menu Discussion. Excerpts from the discussion on the OLPC Sugar developer discussion list about pie menus for PyGTK and OLPC Sugar.
https://medium.com/@donhopkins/olpc-sugar-pie-menu-discussion-738577e54516
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland.
Pie Menu FUD and Misconceptions. Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.
https://medium.com/@donhopkins/pie-menu-fud-and-misconceptions-be8afc49d870
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989. Written by Don Hopkins, October 1989. University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.
https://medium.com/@donhopkins/the-shape-of-psiber-space-october-1989-19e2dfa4d91e
The Amazing Shneiderman. Sung to the tune of “Spiderman”, with apologies to Paul Francis Webster and Robert “Bob” Harris, and with respect to Ben Shneiderman.
https://medium.com/@donhopkins/the-amazing-schneiderman-9df99def882f
And finally this has absolutely nothing to do with pie menus, except for the shape of a pizza pie:
The Story of Sun Microsystems PizzaTool. How I accidentally ordered my first pizza over the internet.
https://medium.com/@donhopkins/the-story-of-sun-microsystems-pizzatool-2a7992b4c797
Also, a more modern pie demo was tried within Quicksilver by Nick Jitkoff many years ago: https://www.youtube.com/watch?v=d4LkTstvUL4&feature=youtu.be&t=18m17s
(Shameless plug, and not a pie menu: I spent a chunk of time experimenting with a gesture/zooming palette menu of sorts with https://thimblemac.com - the idea was to find something that felt a bit more scalable than pie menu layouts, but I arguably went too far in the steps needed to trigger the gesture.)
Wow, that is really nice! Does it somehow magically integrate itself with existing unmodified, unsuspecting tools like Photoshop and SketchUp? Could it be applied to many other applications too? I’ll read your blog post about making Thimble.
But before I do, and speaking of magic, I’d love to introduce you to Morgan Dixon’s brilliant and important work on Prefab: The Pixel-Based Reverse Engineering Toolkit.
I wrote a summary and links to his work in relation to an idea I have called “aQuery – like jQuery for Accessibility” on Hacker News. I’d love to figure out a way to work on this stuff so many people can easily use it!
https://news.ycombinator.com/item?id=11520967
Here are some ideas and discussion about aQuery – like jQuery for accessibility – which would be useful for implementing this kind of stuff out of existing window systems and desktop applications.
http://donhopkins.com/mediawiki/index.php/AQuery
Morgan Dixon’s work is truly breathtaking and eye opening, and I would love for that to be a core part of a scriptable hybrid Screen Scraping / Accessibility API approach.
Screen scraping techniques are very powerful, but have limitations. Accessibility APIs are very powerful, but have different limitations. But using both approaches together, screencasting and re-composing visual elements, and tightly integrating it with JavaScript, enables a much wider and interesting range of possibilities.
Think of it like augmented reality for virtualizing desktop user interfaces. The beauty of Morgan’s Prefab is how it works across different platforms and web browsers, over virtual desktops, and how it can control, sample, measure, modify, augment and recompose the GUIs of existing unmodified applications, even performing dynamic language translation, so they’re much more accessible and easier to use!
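The hybrid screen-scraping-plus-accessibility idea can be sketched as a selector that tries the structured accessibility tree first and falls back to pixel matching. This is only a toy illustration of the architecture; every name here is hypothetical, since neither Prefab nor aQuery actually exposes this API:

```python
def find_widget(ax_tree, screen, role=None, template=None):
    """Hypothetical hybrid widget selector.

    1. Accessibility pass: structured and reliable, but misses
       custom-drawn widgets that never register with the AX API.
    2. Pixel pass: sees anything visible on screen, but only as pixels.
    """
    for node in ax_tree:
        if role is not None and node.get("role") == role:
            return node
    if template is not None:
        pos = screen.find(template)   # e.g. template matching on a screenshot
        if pos is not None:
            return {"role": "unknown", "pos": pos}
    return None
```

A custom-drawn button that is invisible to the accessibility API would still be found by the pixel pass, which is exactly the complementarity described above.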
–
James Landay replies:
Don,
This is right up the alley of UW CSE grad student Morgan Dixon. You might want to also look at his papers.
–
Don emails Morgan Dixon:
Morgan, your work is brilliant, and it really impresses me how far you’ve gone with it, how well it works, and how many things you can do with it!
I checked out your web site and videos, and they provoked a lot of thought so I have lots of questions and comments.
I really like the UI Customization stuff, and also the sideviews!
Combining your work with everything you can do with native accessibility APIs, in an HTML/JavaScript based, user-customizable, scriptable, cross platform user interface builder like (but transcending) HyperCard would be awesome!
I would like to discuss how we could integrate Prefab with a Javascriptable, extensible API like aQuery, so you could write “selectors” that used prefab’s pattern recognition techniques, bind those to JavaScript event handlers, and write high level widgets on top of that in JavaScript, and implement the graphical overlays and gui enhancements in HTML/Canvas/etc like I’ve done with Slate and the WebView overlay.
Users could literally drag controls out of live applications, plug them together into their own “stacks”, configure and train and graphically customize them, and hook them together with other desktop apps, web apps and services!
For example, I’d like to make a direct manipulation pie menu editor, that let you just drag controls out of apps and drop them into your own pie menus, that you can inject into any application, or use in your own guis. If you dragged a slider out of an app into the slice of a pie menu, it could rotate it around to the slice direction, so that the distance you moved from the menu center controlled the slider!
While I’m at it, here’s some stuff I’m writing about the jQuery Pie Menus. http://donhopkins.com/mediawiki/index.php/JQuery_Pie_Menus
Web Site: Morgan Dixon’s Home Page.
https://web.archive.org/web/20170616115503/http://morgandixon.net/
Web Site: Prefab: The Pixel-Based Reverse Engineering Toolkit.
https://web.archive.org/web/20130104165553/http://homes.cs.washington.edu/~mdixon/research/prefab/
Video: Prefab: What if We Could Modify Any Interface? Target aware pointing techniques, bubble cursor, sticky icons, adding advanced behaviors to existing interfaces, independent of the tools used to implement those interfaces, platform agnostic enhancements, same Prefab code works on Windows and Mac, and across remote desktops, widget state awareness, widget transition tracking, side views, parameter preview spectrums for multi-parameter space exploration, prefab implements parameter spectrum preview interfaces for both unmodified Gimp and Photoshop:
http://www.youtube.com/watch?v=lju6IIteg9Q
PDF: A General-Purpose Target-Aware Pointing Enhancement Using Pixel-Level Analysis of Graphical Interfaces. Morgan Dixon, James Fogarty, and Jacob O. Wobbrock. (2012). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’12. ACM, New York, NY, 3167-3176. 23%.
Video: Content and Hierarchy in Prefab: What if anybody could modify any interface? Reverse engineering guis from their pixels, addresses hierarchy and content, identifying hierarchical tree structure, recognizing text, stencil based tutorials, adaptive gui visualization, ephemeral adaptation technique for arbitrary desktop interfaces, dynamic interface language translation, UI customization, re-rendering widgets, Skype favorite widgets tab:
http://www.youtube.com/watch?v=w4S5ZtnaUKE
PDF: Content and Hierarchy in Pixel-Based Methods for Reverse-Engineering Interface Structure. Morgan Dixon, Daniel Leventhal, and James Fogarty. (2011). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’11. ACM, New York, NY, 969-978. 26%.
Video: Sliding Widgets, States, and Styles in Prefab. Adapting desktop interfaces for touch screen use, with sliding widgets, slow fine tuned pointing with magnification, simulating rollover to reveal tooltips: https://www.youtube.com/watch?v=8LMSYI4i7wk
Video: A General-Purpose Bubble Cursor. A general purpose target aware pointing enhancement, target editor:
http://www.youtube.com/watch?v=46EopD_2K_4
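The bubble cursor idea is simple to state: instead of requiring the cursor to sit exactly on a target, select whichever target is nearest, so every pixel of empty space effectively belongs to the closest widget and targets grow in effective size. A minimal point-based sketch of my own (Prefab’s version works on recognized widget bounding boxes, not points):

```python
import math

def bubble_target(cursor, targets):
    """Return the index of the target a bubble cursor would select:
    simply the target nearest to the cursor position.

    cursor: (x, y); targets: list of (x, y) target centers.
    """
    return min(range(len(targets)),
               key=lambda i: math.dist(cursor, targets[i]))
```

With targets at (0, 0) and (10, 0), a cursor anywhere left of x=5 selects the first target without needing to touch it, which is the whole point of the technique.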
PDF: Prefab: Implementing Advanced Behaviors Using Pixel-Based Reverse Engineering of Interface Structure. Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’10. ACM, New York, NY, 1525-1534. 22%
PDF: Prefab: What if Every GUI Were Open-Source? Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’10. ACM, New York, NY, 851-854.
Morgan Dixon’s Research Statement:
Community-Driven Interface Tools
Today, most interfaces are designed by teams of people who are collocated and highly skilled. Moreover, any changes to an interface are implemented by the original developers and designers who own the source code. In contrast, I envision a future where distributed online communities rapidly construct and improve interfaces. Similar to the Wikipedia editing process, I hope to explore new interface design tools that fully democratize the design of interfaces. Wikipedia provides static content, and so people can collectively author articles using a very basic Wiki editor. However, community-driven interface tools will require a combination of sophisticated programming-by-demonstration techniques, crowdsourcing and social systems, interaction design, software engineering strategies, and interactive machine learning.
–
agumonkey writes:
Small temporary answer while I unwind all the linked content: Dixon’s target-aware pointing is missing from so much. I wonder how on earth nobody in smartphone land thought to implement something similar. I’m already hooked :)
–
Don replies:
It’s missing from many contexts where it would be very useful, including mobile. It’s related in many ways to those mobile GUI, web browser and desktop app testing harnesses. It could be implemented as a smart scriptable “double buffered” VNC server (for maximum efficiency and native Accessibility API access) or client (for maximum flexibility, but less efficiently).
The way jQuery widgets can encapsulate native and browser specific widgets with a platform agnostic api, you could develop high level aQuery widgets like “video player” that knew how to control and adapt many different video player apps across different platforms (youtube or vimeo in browser, vlc on windows or mac desktop, quicktime on mac, windows media player on windows, etc). Then you can build much higher level apps out of widgets like that.
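The “video player” widget described above is essentially the adapter pattern applied to whole applications. A toy sketch of my own (all names hypothetical, not a real aQuery API):

```python
class VLCAdapter:
    """One concrete backend. A real adapter would drive the actual app
    via its scripting API, or pixel/accessibility automation."""
    def __init__(self):
        self.state = "stopped"
    def play(self):
        self.state = "playing"
    def pause(self):
        self.state = "paused"

class YouTubeAdapter(VLCAdapter):
    # Same high-level interface, different (hypothetical) backend;
    # here it only differs in how it would reach the real player.
    pass

ADAPTERS = {"vlc": VLCAdapter, "youtube": YouTubeAdapter}

def video_player(kind):
    """A high-level 'video player' widget that hides which concrete
    player it is driving, like a jQuery-style facade over native widgets."""
    return ADAPTERS[kind]()
```

Higher-level apps would then call `play()`/`pause()` without caring whether YouTube, VLC or QuickTime is underneath, which is the encapsulation the paragraph above describes.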
Target aware pointing is one of many great techniques he shows can be layered on top of existing interfaces, without modifying them.
I’d like to integrate all those capabilities, plus the native Accessibility API of each platform, into a JavaScript engine, and write jQuery-like selectors for recognizing patterns of pixels and widgets, creating aQuery widgets that track input, draw overlays, implement text-to-speech and voice control interfaces, etc.
His research statement sums up where it’s leading: imagine Wikipedia for sharing GUI mods!
Berkeley Systems (the flying toaster screen saver company) made one of the first screen readers for the Mac in 1989 and Windows in 1994.
https://en.wikipedia.org/wiki/OutSpoken
Richard Potter, Ben Shneiderman and Ben Bederson wrote a paper called “Pixel Data Access for End-User Programming and Graphical Macros” that references a lot of earlier work.
The integration with Sketch/SketchUp was done via the AXAPI, using https://github.com/robrix/Haxcessibility with some small extensions to scan the top menu. So yes, for those apps there’s no modification; the setup so far has been done via curated plist/icon assets contained within my app, but in theory this could be made user-editable and leverage icons contained within the app bundles.
Initially I started with Illustrator via an API (and dropped it due to API bugginess). And while the website still shows Photoshop support, I dropped it because it used another buggy network-based API. The lesson learned is that on the desktop, accessibility hacks proved to be much easier, though not entirely perfect - occasionally there are still times when the AXAPI seemingly decides not to cooperate and rejects requests until you switch away from the window and back.
Off the top of my head, a couple of other apps in the space of leveraging the macOS AXAPI would be Shortcat ( https://shortcatapp.com/ ), BetterTouchTool (I think it does some AXAPI traversing: https://folivora.ai/ ), and Ghostnote ( https://www.ghostnoteapp.com/ )
Prefab: Wow, took a look at https://www.youtube.com/watch?v=w4S5ZtnaUKE and got enthused enough to make a quick montage of their work ( https://twitter.com/vivekgani/status/997727951583035393 ). I’ve definitely had some similar thoughts but never thought to dig through HCI papers to see existing work in the space.
aQuery: Will have to look into this idea more. It sounds doable, though I’d be curious whether it could also be structured to allow subscribing to state changes (e.g. a button going from enabled to disabled). I find myself still debating whether to go down a more ‘pure’ path towards a whole environment where things are designed to be altered (Dynamicland, Smalltalk - https://youtu.be/AnrlSqtpOkw?t=10m41s )
Wow, this is wonderful stuff and pieces of the puzzle I never heard about!
Another puzzle piece that I’m interested in, that’s useful for live performance, video manipulation, screen scraping, image processing, pattern matching and vision based approaches like Prefab, is the ability to efficiently send images between apps running on the same system, via the GPU, without any performance penalty of copying between GPU and system memory (or even GPU and GPU memory unnecessarily)!
Integrating uncooperative apps and libraries with virtual webcams and virtual framebuffers is also very useful, and can bridge many gaps between existing monolithic apps you can’t otherwise modify or extend with plugins.
There is a library for the Mac called Syphon, and a similar one for Windows called Spout, that’s used by the VJ community and other people to efficiently integrate many programs like Max/MSP/Jitter, Pure Data, Quartz Composer, Processing, Unity3D, AfterEffects, VirtualDJ, FreeFrameGL, VLC, Java, Cinder, Vuo, Plask, Open Frameworks, Omnidome, Little Projection Mapping Tool, and even CEF (Chromium Embedded Framework), etc.
Syphon is an open source Mac OS X technology that allows applications to share frames - full frame rate video or stills - with one another in realtime. Now you can leverage the expressive power of a plethora of tools to mix, mash, edit, sample, texture-map, synthesize, and present your imagery using the best tool for each part of the job. Syphon gives you flexibility to break out of single-app solutions and mix creative applications to suit your needs.
http://syphon.v002.info/faq.php
https://github.com/Syphon/Syphon-Framework
Syphon is a Mac OS X technology to allow applications to share video and still images with one another in realtime, instantly.
It’s quite useful to have a web browser in the mix, so this is an essential ingredient:
https://github.com/vibber/CefWithSyphon
Syphon is built on top of the macOS “IOSurface” API:
https://developer.apple.com/documentation/iosurface
Web browsers like Chrome and Safari, which split the browser into separate heavyweight processes (a user interface process and multiple rendering processes), also use IOSurface to efficiently share images in the GPU between processes.
iOS also supports IOSurface, and that’s how WKWebView works (it runs Safari’s rendering in a hidden background process and embeds the browser in the foreground iOS app’s process). But you can’t multitask on iOS or run Max and AfterEffects, so it doesn’t make much sense for Syphon to support iOS. Syphon only makes sense on desktop computers.
IOSurface currently supports both OpenGL and Metal. But unfortunately Syphon is totally OpenGL based. So sadly you can’t use it with Unity3D if you need to use the Metal device drivers (which are vastly preferable to OpenGL).
https://shapeof.com/archives/2017/12/moving_to_metal_episode_iii.html
IOSurface is neat. A shared bitmap that can cross between programs, and it’s got a relatively easy API including two super critical functions named IOSurfaceLock and IOSurfaceUnlock. I mean, if you’re sharing the data across process boundaries then you’ll need to lock things so that the two apps don’t step on each other’s toes. But of course if you’re not sharing it across processes, then you can ignore those locks, right? Right?
And then a single file came back, named IOSurface2D.mm, which was some obscure sample code from Apple that I had received at one point a number of years ago.
As I said, the documentation is thin and examples are rare! But the Safari and Chrome web browser source code, bug reports and discussion group is a great source of cutting-edge information, because internal Apple developers are working on it who have access to secret information that’s not public, and know how to make it perform well.
I think it’s even possible for an OpenGL app and a Metal app to communicate with each other via IOSurface. (But Syphon doesn’t currently support that.)
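The locking point above is general: any framebuffer shared across a process boundary needs mutual exclusion so writer and reader don’t see torn frames. A platform-neutral toy analogy, with Python’s shared memory standing in for an IOSurface (an analogy only; IOSurface shares GPU-backed surfaces, not plain bytes):

```python
from multiprocessing import Lock, shared_memory

WIDTH, HEIGHT, BPP = 4, 4, 4  # a tiny 4x4 RGBA "surface"

lock = Lock()
surface = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP)

def write_frame(value):
    with lock:  # the moral equivalent of IOSurfaceLock/IOSurfaceUnlock
        for i in range(len(surface.buf)):
            surface.buf[i] = value

def read_pixel(i):
    with lock:  # readers need the lock too, or they may see a torn frame
        return surface.buf[i]

write_frame(255)
first_pixel = read_pixel(0)
surface.close()
surface.unlink()
```

Skipping the lock only works when one process has exclusive access, which is exactly the trap the “Right? Right?” quote above is poking at.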
The web browsers have moved on to using a newer higher level but mostly undocumented API, that I believe is built on top of IOSurface, called CARemoteLayerClient / CARemoteLayerServer.
https://developer.apple.com/documentation/quartzcore/caremotelayerclient
https://developer.apple.com/documentation/quartzcore/caremotelayerserver
Here are some comments on supporting it in Chrome:
https://bugs.chromium.org/p/chromium/issues/detail?id=102340
Safari must also be using something like IOSurfaces to make pixel data available across processes, so I’m guessing similar performance must be achievable in Chrome.
Safari uses private SPI to share surfaces, which has different behaviors than IOSurface, so things that Safari can do aren’t necessarily feasible in other browsers (which is why Invalidate Core Animation exists in the first place).
I’m not sure what advantages of using CARemoteLayerClient/Server are over IOSurface, or what “SPI” means exactly, but I think it has to do with this header file:
https://github.com/WebKit/webkit/blob/master/Source/WebCore/PAL/pal/spi/cocoa/IOSurfaceSPI.h
At any rate, there are currently two problems with Syphon, which is otherwise very wonderful but getting long in the tooth:
it’s not a cross platform API so it doesn’t support Windows. (But there is Spout for that.)
it depends on OpenGL and doesn’t support Metal. Ideally it would support both.
perhaps there is a good reason to do the same thing as the web browsers are now doing and use CARemoteLayerClient/Server instead of IOSurface, but I don’t know the issues involved or why they switched, and the documentation is sparse.
On Windows, there’s a similar library called Spout that supports many different applications and libraries like Unity3D, OBS (Open Broadcaster Software), FreeframeGL, Java, Processing, Max/MSP/Jitter, Ableton Live, Virtual Webcam, OpenFrameworks, Cinder, etc:
https://cycling74.com/forums/syphon-recorder-equivalent-on-windows-or-other-tricks
Senders and receivers include FreeframeGL plugins, a Java interface for Processing, Jitter externals for Max/Msp, VIZZable modules for Ableton Live, and a Virtual Webcam as a universal receiver. There is also example code for creating your own applications with openFrameworks and Cinder. Now these applications running on Windows can share video with each other in a similar way to Syphon for OSX.
For compatible graphics hardware, OpenGL textures are shared by way of DirectX using the NVIDIA DirectX/Opengl interop extension. If hardware is not compatible, SPOUT provides a backup by way of CPU memory.
Ideally I’d love to have one glorious cross platform API that supports whatever transport (or transports) work best on each platform, and even transparently supports less efficient APIs like simply sending images over shared system memory, uncompress or compressed over the network, WebRTC, etc, and also support whatever graphics APIs you need to integrate any particular existing app.
Then the same library would let you integrate apps using different graphics APIs on the same system super-efficiently, and even Macs and Windows systems on the same network to communicate as efficiently as possible by streaming video.
Huh, I think I’m starting to get your vision - you want the hacks/piping work built to fling the existing surfaces of ‘personal computing’ around, to facilitate the other forms of computing (social, presentational, etc.). One use I’ve also been thinking of is ‘protected sharing’ - e.g. screencasts or recordings where the user selects areas and text is magically blurred out.
FWIW, iOS is probably vastly different now but I remember that Adam Bell ( https://twitter.com/b3ll ) sorta played in the space of getting app framebuffers a while ago when jailbreaking was still easy - https://youtu.be/Ox09rWJTuCA?t=18m26s .
If these PIE menus are so awesome, why can’t I just use them?
In Android, there was a patch on the (now discontinued) Paranoid Android fork, later respawned as SlimPIE but retired due to problematic maintenance in newer Android. There are some overlay-like applications for this, but they all suffer from minor problems and feel like extraneous additions not integrated well with the rest of the OS.
Using such a thing as a context menu in GTK or Qt would probably be possible (on X.org at least, with the XShape extension), but there are so many places where people assumed menus are rectangular (both in third-party apps and in the libraries themselves) that you would probably need to rewrite too many LOCs.
Not even saying anything about WinNT, where they moved scrollbar rendering code into kernel space because “it’s faster lol”.
I don’t know much about macOS GUI rendering mechanics, but Apple already made a great improvement over standard UIs with the global menu bar, which is also pretty hard to reimplement in any other environment without breaking stuff.
I’ll be really glad to see the software world migrate to more ergonomic UI/UX concepts instead of flattening the world for fun and profit (that is, reducing the amount of skilled graphics work required), even if it adopts hybrid concepts instead of full pie menus, for example “trees” or regular menus expanding from root pie items. This particular concept was well presented in the Sword Art Online anime, which also kind of predicted the UX paradigms for holographic/floating interfaces:
Like most widget types that are poorly supported or unsupported in popular GUI toolkits, I see pie menus almost exclusively in games – since people writing games have the expectation that they’ll need to essentially roll their own GUI toolkit in order to make it sufficiently themable anyhow.
It’s an unfortunate state of affairs: once you’ve got past the initial learning curve, widget functionality is proportional to the degree to which the widget’s behavior matches your internal mental model of the task, so expressive UI elements can make real work much more efficient. Limiting the use of good GUIs to video game menus means UI design is limited in its capacity to make anything but our leisure time easier.
(As a side note: SAO is a bad example of UI design in anime – the titular game is, on many levels, not professional enough to make it to market, and the way the menus are laid out is no exception. Geoff Thew explains this in detail in one of his video essays. The way SAO uses pie menus is sort of a shallow copy of how pie menus have been used in games for the past decade, as understood by someone who has never played a video game. A good example of an interesting use of pie menus in a real game is the conversation system in Mass Effect – where segments have their size changed in order to make certain responses more likely, and where pie areas correspond to emotions or tactics.)
Great points!
I wrote about “Ersatz Pie Menus” in this additional article, “Pie Menu FUD and Misconceptions: Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.”
Ersatz Pie Menus
Richard Stallman likes to classify an Emacs-like text editor that totally misses the point of Emacs by not having an extension language as an “Ersatz Emacs”.
In the same sense, there are many “Ersatz Pie Menus” that may look like pie menus on the surface, but don’t actually track or feel like pie menus, or deliver all of their advantages, because they aren’t carefully designed and implemented to optimize for Fitts’s Law: basing selection purely on the direction between stroke endpoints instead of the entire path, minimizing the distance to the targets, and maximizing the size of the targets.
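For what it’s worth, the core of that kind of tracking is tiny: selection depends only on the direction and distance from the menu’s center to the stroke’s endpoint, never on the path in between. Here’s a minimal sketch in Python - the function name, the slice layout (slice 0 straight up, numbered clockwise), and the default dead radius are my own illustrative choices, not taken from any particular implementation:

```python
import math

def pick_slice(center, cursor, n_slices, dead_radius=10.0):
    """Return the index of the selected pie slice, or None if the
    cursor is still inside the inactive center region.

    Only the direction from the menu center to the current cursor
    position matters -- the path taken in between is ignored.
    Slice 0 is centered straight up; slices are numbered clockwise.
    """
    dx = cursor[0] - center[0]
    dy = cursor[1] - center[1]
    if math.hypot(dx, dy) < dead_radius:
        return None  # inside the inactive hole: no selection yet
    # atan2(dx, -dy) puts 0 degrees at "up" in screen coordinates
    # (where y grows downward), increasing clockwise
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    slice_width = 360.0 / n_slices
    # offset by half a slice so slice 0 straddles straight up
    return int(((angle + slice_width / 2.0) % 360.0) // slice_width)
```

Note how the dead radius gives you the inactive hole in the middle: inside it the menu stays open with nothing selected, so an accidental pop-up costs nothing.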
Microsoft Surface Dial: Someone on Hacker News asked me: Any thoughts on Microsoft’s Surface Dial radial menu?
Good question — glad you asked! (No, really! ;) Turning a dial is a totally different gesture than making directional strokes, so they are different beasts, and a dial lacks the advantages pie menus derive from exploiting Fitts’s Law. […]
Beautiful but Ersatz Pie Menu Example – the graphics are wonderful but the tracking is all wrong: http://pmg.softwaretailoring.net/
Turning Is Not Like Stroking: In terms of “Big O Notation”, pull down menus, click wheels, and carousel selection is linear O(n), while with a pie menu you only have to perform one short directional gesture to select any item, so selection is constant O(1) (with a small constant, the inner inactive radius of the hole in the middle, which you can make larger if you’re a spaz).
Yucky Pie Menus Recipes
Bedazzling and Confusing Graphics and Animations […]
Rectangular Label Targets Instead of Wedge Shaped Slice Targets […]
Triggering Items and Submenus on Cursor Motion Distance Instead of Clicking […]
Not Starting Pie Menus Centered on the Cursor […]
Improperly Handling Screen Edges […]
Improperly Handling Mouse-Ahead Display Preemption and Quick Gestures on Busy Computers […]
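On the screen-edge point: the usual fix, as I understand it, is to shift the menu’s center away from the edge so the whole circle fits on screen, rather than clipping slices off (implementations typically also warp the mouse cursor by the same offset, so the cursor’s position relative to the center, and thus the selection direction, is preserved). A rough sketch of the clamping step in Python - the function name and parameters are made up for illustration:

```python
def clamp_menu_center(x, y, radius, screen_w, screen_h):
    """Shift a pie menu's center so a circle of the given radius
    fits entirely on a screen_w x screen_h screen.

    Returns the adjusted (x, y); unchanged when the menu already fits.
    """
    cx = min(max(x, radius), screen_w - radius)
    cy = min(max(y, radius), screen_h - radius)
    return cx, cy
```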
Yummy Pie Menu Recipes
I’m certainly not saying that pie menus should never be graphically slick or have lots of cool animations. Just that they should be thoughtfully designed and purposefully easy to use first, so they deeply benefit users from Fitts’s Law, instead of just trying to impress users with shallow useless surface features.
Spectacular Example: Simon Schneegans’ Gnome-Pie, the slick application launcher for Linux
I can’t overstate how much I like this. Not only is it slick, beautiful, and elegantly animated, but it’s properly well designed in all the important ways that make it Fitts’s Law Friendly and easy to use, and totally deeply customizable by normal users! It’s a spectacularly useful tour-de-force that Linux desktop users can personalize to their heart’s content.
Gnome-Pie — Simon Schneegans
Homepage of Gnome-Pie, the slick application launcher for Linux. simmesimme.github.io
Gnome-Pie is a slick application launcher which I’m creating for Linux. It’s eye candy and pretty fun to work with. It offers multiple ways to improve your desktop experience.
Check out the project’s homepage @ http://gnome-pie.simonschneegans.de
I saw. You had a lot of interesting stuff on your Medium account.
(Are you likely to post more on HyperTIES? As a Xanadu-er & someone interested in the history of pre-web hypertext systems, I find it interesting, since it has a pretty distinct look & feel and seems like it might have more interesting UI ideas to copy. The ‘pop-out’ mechanism for linked areas in images interested me when I saw it mentioned.)
One of the good ideas was that every article had a brief definition, which it would show to you the first time you clicked a link, without leaving where you were. That’s a feature I wish was universally supported by the web, so you didn’t have to leave your current page to find out something about the link before following it.
You could then click again (or click on the definition), or pop up a pie menu to open the link in the current or the other window. You could also turn pages and navigate with the pie menus, swiping in obvious directions like on an iPad, but with the pie menu providing a visual affordance of which gestures are available (“self revealing” gestures).
Here are a couple of articles about HyperTIES that I haven’t moved to Medium yet:
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland. http://www.donhopkins.com/drupal/node/102
Abstract: Since browsing hypertext can present a formidable cognitive challenge, user interface design plays a major role in determining acceptability. In the Unix workstation version of Hyperties, a research-oriented prototype, we focussed on design features that facilitate browsing. We first give a general overview of Hyperties and its markup language. Customizable documents can be generated by the conditional text feature that enables dynamic and selective display of text and graphics. In addition we present:
an innovative solution to link identification: pop-out graphical buttons of arbitrary shape.
application of pie menus to permit low cognitive load actions that reduce the distraction of common actions, such as page turning or window selection.
multiple window selection strategies that reduce clutter and housekeeping effort. We preferred piles-of-tiles, in which standard-sized windows were arranged in a consistent pattern on the display and actions could be done rapidly, allowing users to concentrate on the contents.
Pie menus to permit low cognitive load actions: To avoid distraction of common operations such as page turning or window selection, pie menus were used to provide gestural input. This rapid technique avoids the annoyance of moving the mouse or the cursor to stationary menu items at the top or bottom of the screen.
HyperTIES Hypermedia Browser and Emacs Authoring Tool for NeWS. http://www.donhopkins.com/drupal/node/101
That has a screen dump, an architectural diagram, a list of interesting features, data structures, and links to the C, Forth, PostScript, Emacs MockLisp, and HyperTIES markup language source code!
Here’s a demo of HyperTIES and the pop-out embedded menus:
HCIL Demo - HyperTIES Browsing: Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
https://www.youtube.com/watch?v=fZi4gUjaGAM
A funny story about the demo that has the photo of the three Sun founders whose heads puff up when you point at them:
When you pointed at a head, it would swell up, and when you pressed the button, it would shrink back down until you released the button.
HyperTIES had a feature that you could click or press and hold on the page background, and it would blink or highlight ALL of the links on the page, either by inverting the brightness of text buttons, or by popping up all the cookie-cut-out picture targets (we called them “embedded menus”) at the same time, which could be quite dramatic with the three Sun founders!
Kind of like what they call “Big Head Mode” these days! https://www.giantbomb.com/big-head-mode/3015-403/
I had a Sun workstation set up on the show floor at Educom in October 1988, and I was giving a rotating demo of NeWS, pie menus, Emacs, and HyperTIES to anyone who happened to walk by. (That was when Steve Jobs came by, saw the demo, and jumped up and down shouting “That sucks! That sucks! Wow, that’s neat. That sucks!”)
The best part of the demo was when I demonstrated popping up all the heads of the Sun founders at once, by holding the optical mouse up to my mouth, and blowing and sucking into the mouse while secretly pressing and releasing the button, so it looked like I was inflating their heads!
One other weird guy hung around through a couple of demos, and by the time I got back around to the Emacs demo, he finally said “Hey, I used to use Emacs on ITS!” I said “Wow, cool! So did I! What was your user name?” and he said “WNJ”.
It turns out that I had been giving an Emacs demo to Bill Joy all that time, then popping his head up and down by blowing and sucking into a Sun optical mouse, without even recognizing him, because he had shaved his beard!
He really blindsided me with that comment about using Emacs, because I always thought he was more of a vi guy. ;)
Accidentally stumbled upon the Xanadu-esque side of this conversation. Just to throw in a thought: it’s been on my mind to attempt applying a high level of polish to the federated wiki project such that it facilitated zooming in on reading one thing at a time. So… a ‘big head mode’ of sorts to let you focus on consuming (or editing) an article when you needed to, then zoom back out to see the connections.
The other crazy thought on my mind is seeing if there’s a way to bend existing oss text editors (particularly atom.io) to facilitate more freeform things like ‘code bubbles’ or liquidtext.
I just added this example to the article, which you may have missed (since it wasn’t there until a few minutes ago):
Spectacular Example: Simon Schneegans’ Gnome-Pie, the slick application launcher for Linux
I can’t overstate how much I like this. Not only is it slick, beautiful, and elegantly animated, but it’s properly well designed in all the important ways that make it Fitts’s Law Friendly and easy to use, and totally deeply customizable by normal users! It’s a spectacularly useful tour-de-force that Linux desktop users can personalize to their heart’s content.
Gnome-Pie - Simon Schneegans
Homepage of Gnome-Pie, the slick application launcher for Linux. simmesimme.github.io
Gnome-Pie is a slick application launcher which I’m creating for Linux. It’s eye candy and pretty fun to work with. It offers multiple ways to improve your desktop experience.
Check out the project’s homepage @ http://gnome-pie.simonschneegans.de
If that doesn’t blow your mind, check this out – there are so many great things about it:
I used a great pie interface on Android for a while, though I cannot remember what it was called and am failing to find screenshots online. It helped considerably with one-handed operation.
Trying to restrain myself to just a couple:
Outdoors / Mapping
iNaturalist / Seek - apps for identifying plants; Seek is a newer one oriented toward kids. I believe both are open source.
Go Map!! - an OpenStreetMap editor tailored for iOS. I met the developer behind it recently - I was fairly impressed that he wrote his own Core Graphics map renderer for it.
Exercise
Start Stretching - it’s based on https://www.phraktured.net/starting-stretching.html . Yes, it’s a basic non-hardcore user app - but we should all be stretching more :)
Writing
Scrawl Notes - made by a local developer I know. Nothing hardcore about it, it’s just a note taker that’s accessible as a widget for quick notetaking. Also has a companion mac app.
Reading
V for wikipedia - I haven’t bought this app yet but it’s made by an expert at design/typography. The maker talks about the design decisions here: https://www.youtube.com/watch?v=YM2Nj691PMo
Interesting. At one point I was strongly interested in making a UPS that would use a minimal number of supercaps instead of a battery and just shut a server down within a minute or two. Thinking aloud, there are also some opportunities in making the alert connection wireless (perhaps a PoE passthrough that spoke IPMI back to the UPS wirelessly), or at least triggering shutdown via Bluetooth.
Laziness (and seeing a decent deal on a regular pure sine wave UPS) eventually got me. The big hurdle with avoiding an SLA battery is that they’re cheap compared to supercaps. There were some small companies that designed supercap UPSes years ago; my guess is they were jumping on patent filings and betting prices would eventually fall.
Oddly, my biggest takeaway from this is to consider the Eve V in the future - I particularly like how the tablet can be put up on an easel and the same keyboard will still work detached, off its own battery over Bluetooth.
Also, for the commenters seeking Linux graphics tools, I’d recommend checking out the VGC project ( https://www.vgc.io/ ) - basically the author was researching vector design tools (particularly for animation) with https://www.vpaint.org/ and is now trying to commercialize that work over the next couple of years.
I’m not giving up colorful text schemes or immersive experiences anytime soon, but one nice affordance of the acme scheme the author uses is that it could make coding on low-color outdoor displays (e-ink, transflective lcd, epaper) more usable.
Very surprising that the BSDs weren’t given a heads-up by the researchers. Feels like there would be a list at this point of people who could rely on this kind of heads-up.
The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.
Admittedly, macOS, Windows, and Linux cover almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.
Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.
Right.
And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any heads-up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip the vetting step. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed-off ops out there, not just OS devs.
To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…
Wasn’t there a post on here not long ago about Theo breaking embargos?
Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.
He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.
@stsp explained that in detail here on lobste.rs.
So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?
I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.
OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.
Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.
Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01
By late last year you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.
macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is vendors with tighter business relationships with Intel (Apple, MS) started getting info around October or November - possibly earlier, considering the bug was initially found by Google back in the summer.
Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000
A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?
Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.
there are/were some large scale deployments of BSDs/derived code. apple airport extreme, dell force10, junos, etc.
people don’t always keep track of them but sometimes a company shows up then uses it for a very large number of devices.
Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.
Oh, this article again. Alan comments on it being an old interview recontextualized on hn: https://news.ycombinator.com/item?id=15261691
Checked and he has a youtube channel: https://www.youtube.com/user/szeloof
Max also talked about this a year ago on the changelog: https://changelog.com/podcast/232 .
The relevant part - https://changelog.com/podcast/232#transcript-113
I’ve been relying on pinboard, but eventually want to use something with higher-fidelity archives. Googling revealed this ‘awesome list’: https://github.com/iipc/awesome-web-archiving
Other things I’ve archived in one-off scenarios:
My problem isn’t so much the act of archiving, it’s figuring out how to organize it all to consume it or aggregate it for searching later - I’d like to at least sit down for a few days to comb through all my pinboard tags and give it some better structure (which leads to another thing - I kinda wish pinboard supported tag autocomplete).
Thanks for sharing the awesome list and your backup achievements.
I perfectly understand your worry about old out-of-print books.
I also agree on how hard good organizing is, i.e. organizing that’s useful for further processing, like the consuming or indexing you mentioned.
One of my main problems with the videos I back up is that I have no good way of tracking what I’ve already watched. I watch stuff on various devices (mobile phone, laptop, etc.), and because a full channel backup can take a lot of space, I have to move most of it to a disk that isn’t online all the time, since my home server, from which I watch stuff, has limited storage.
Have you used plex? It sorta tries to track watching, at least to allow you to continue where you left off, and the mobile apps let you sync to watch offline. XBMC/kodi may also do this.
It’s definitely on the mainstream business/life advice side, but Principles by Ray Dalio.
I went through the first quarter, which is largely an autobiography - some of the advice is obvious, some runs against my personal goals, but it has some good reminders and has made me bookmark some history books to read.
Yeah. Bret Victor briefly reiterated this a few months ago: https://twitter.com/worrydream/status/881021457593057280 .
Next level: port electron to windows 95, run Slack on this.
Ironically, the maker of the win95-in-electron hack works at… slack. https://github.com/felixrieseberg
That would be quite a hack. I doubt Electron could even be made to run on Windows 95. Once Windows 98 came out, Win95 was all but forgotten by 99% of the computing world in short order. I would guess that most programs of the pre-Win7 era that are still actually useful have roughly this level of support:
Pre-Win7 would have been Windows Vista, and nearly all programs developed on Vista should have run on Windows XP. Typically you want to target the current release and at least the last major release. I think you’re correct about 98 and 95, though. Even today, compiling C++ with Visual Studio 2017, I can target Windows 7, although I think by default you only get Windows 10 and Windows 8.x.