I think the OpenDoc thinking lives on in attempts like Fluid[1], and also in block-based CMSs like WordPress, Arc, et al. Combine any area where the output is a document and there is a need for composable multi-media content sources with software development’s natural drive for encapsulation and abstraction, and you get back to something OpenDoc-like.
Absolutely the web makes this easier than it was before, but if a document publishing platform is a bad place to start when creating a generic application development platform, it’s arguably an even worse place to start building a generic editing platform – I’ve yet to see a public one with truly great accessibility support, for example. Which is a problem, because getting the editing UX right is crucial.
[1] https://www.theverge.com/2021/6/17/22538144/microsoft-fluid-components-documents-office-teams-onenote-outlook-whiteboard
This is unfair to 90s Apple, who spent a lot of time in that “wilderness” period trying to push UI in all sorts of new ways. Dylan. OpenDoc parts. CyberDog. MPW worksheets were the ancestors of the notebook interfaces that are lauded here (but why can I not get CommanDo in a modern shell?) The Newton. There is a long list of inventive approaches. It’s not that they didn’t try. It’s not that they thought users were stupid. It’s that none of it resonated with humans like the Mac-style WIMP did.
It is unfair to application makers, who opened up vistas with their “demo-tier” efforts. PageMaker and QuarkXpress revolutionised the worlds of print and journalism. Photoshop birthed an industry, as did Director. Avid, Final Cut Pro, AutoCAD, hell, PowerPoint and Excel. Mathematica. HyperCard. iTunes. The web browser. All of them “fragile, inflexible”. None of them composable or pipable. All of them opinionated. But powerful tools all the same. Some literally changed the world.
It is unfair to us, the users. It’s not Apple who treat us as stupid, it is the post, which acts as if, were we not so dumb, we would reject these interfaces in favour of something “better”. But better at what? The WIMP did just enough to be a platform for other things and got out of the way. If we ended up with JavaScript when we could have had Lisp, well, there are reasons that pattern repeats.
And finally, it’s just outdated. No, we don’t really have “the document” anymore. We barely have WIMP these days. The primary computing paradigm doesn’t have windows, a mouse or pointers. For better or worse we have touch interfaces. And there are a whole different set of rants to be written about those. But we don’t find solutions to their problems by “going back to the Alto”.
I criticized the Macintosh project specifically – for failing to introduce new ideas & for failing to reproduce interesting and useful parts of the thing they were copying. I do not extend this criticism to the Newton team, or to the NeXT project (which involved substantially the same set of people as the Macintosh did).
Lots of interesting UI ideas appeared in the late 70s and early 80s. It’s a shame to me that the only one that has heavily influenced modern design is the one that’s least novel & most obviously flawed.
The worst parts of the Macintosh interpretation of Alto ideas infest the web & touch interfaces.
The problem is that the interface doesn’t “get out of the way” unless you submit to someone else’s methods. The basic flexibility of the computer as a general purpose machine is broken by the application model, in which each application is even more opinionated than any hand tool can be. Users don’t, as a rule, drop into a flow state from comfort with their applications: they struggle with forcing applications (whose behavior they aren’t allowed to directly change) to produce some semblance of the desired behavior, and then blame themselves for ‘not being good with computers’ when they are defeated by the developer’s lack of foresight.
Ultimately, market success is not a good proxy for design quality. There are too many confounding factors. Sufficiently good marketing will allow a bad design to survive indefinitely (and sometimes even to succeed) while bad marketing will sink a good design; injections of cash from investors will allow even unprofitable companies to survive indefinitely, and frequent cash injections by people whose preferences have little to do with the market or the product are extremely common in this industry; a bad design by a big or successful company can be forced through & held afloat by other, more profitable products. These phenomena (familiar from recent situations with Twitter, Google, & Facebook) aren’t new: the Amiga was a victim of these circumstances too, famously.
Having programmed the Amiga in the 90s [1], there isn’t that much of a difference between the Amiga and Mac OS. They both had menus across the top of the screen [2] and they both had controls like scroll bars and buttons [3]. They both had a desktop metaphor. Had the Amiga survived the mid-90s, you might well have ranted against its GUI as much as the Mac’s.
Why did the Mac survive and the Amiga die? Many reasons. One, Commodore couldn’t sell ice to Africans. Two, while the hardware was impressive for 1985, the GUI was too tied to it, and thus by the mid-90s the hardware wasn’t special and in some ways lagged behind other systems. And its “killer app” was too niche to make it profitable (video production). The Mac was able to evolve with the times (more color, higher resolution), and coupled with its “killer apps” (desktop publishing, graphics imaging) it was able to survive (getting Jobs back certainly helped Apple [4]).
Your article might have been better with examples of a better UI that you think should be possible. As it stands, the article kind of reads as “it’s all crap! Start over!” but without any guidance. We can’t read your mind.
[1] It was a joy to program for. From the 68000 to the programmable hardware to the OS it was a complete joy to work with.
[2] The Amiga menu would only show up when you pressed the right mouse button, and you could include images as part of the menu, but those are the only real differences between the two.
[3] The Amiga had three primitive “gadget” types—the boolean gadget, the proportional gadget and the text gadget—from which all other controls (like a scroll bar) could be made. But that’s the issue—all you had were the “atoms”—the programmer was responsible for building up a scroll bar or a drop-down combo box.
[4] And with him now gone, Apple seems to have lost its way again.
When I compare the Amiga to the Macintosh, I’m generally comparing the Amiga 1000 to the first-generation Macintosh & the Macintosh Plus. The two machines existed at the same time, but the Amiga had high-resolution color graphics while the Macintosh had monochrome (not even greyscale) & the Amiga had double the horsepower at half the price. It’s not so surprising that Amigas weren’t selling great in the early 90s, after decades of mismanagement; it’s more surprising that the Amiga 1000 didn’t blow the Macintosh out of the water & totally murder the entire Apple brand in 1985.
I didn’t really expect such a big, general audience for this. I just pieced together bits of text I had already posted on Mastodon & SSB in order to create a companion piece for all the other stuff I’d written on the subject. Some of those cover historical systems that I think are underappreciated (though I’m planning to write a lot more about that), while others cover rules & principles that I think would produce better interfaces if followed.
I literally put this together, showed it to the folks on Mastodon who have been arguing with me about UI design for two years already, and went to bed. When I woke up it had four thousand views. So, I’ve been spending the day trying to reintroduce context for the folks who came to it without reading the previous material.
I didn’t mention market success, which is flawed as a metric just as you describe. I said that of all the attempts at interesting UIs (which by no means stopped in the early 80s), only the Macintosh-esque WIMP (and later the iOS-esque touch interfaces) resonated with users. That is, people were drawn to them because they enabled them to get the things done with computers that they wanted to get done. Some businesses transformed that into market success, but that was arguably a by-product.
People, it turns out, are willing to “submit to someone else’s methods” in exchange for getting their jobs done. You seriously beg the question when you argue they don’t drop into a flow state — having watched professionals use Excel, Quark, Photoshop, and 3d Studio Max, I’d argue that’s precisely what they do.
Are users giving up some ideological, hypothetical flexibility by submitting to the tyranny of the application model? Possibly. Should they care? Until there’s some compelling example of what new thing they could achieve by resisting the application model, no.
For every user I’ve seen drop into a flow state when using Excel, I’ve seen ten equally experienced users spend the entire time frustrated with it. And for every time I’ve seen a user drop into a flow state using Excel, I’ve seen ten users spend hours trying to re-articulate a problem into a convenient tabular model.
As programmers, we all know how nice it is to be able to completely rebuild our environment to suit the problem we’re trying to solve. This is most of what programmers do: make abstractions that narrow the gap between our mental model of a problem and the underlying infrastructure. And, when we try to do the opposite & solve a problem with a set of mental tools that don’t fit, we end up tired and miserable, banging on a buggy and unmaintainable piece of crap.
There isn’t really a gulf between non-technical user & programmer – there’s a continuum. Most people are perfectly capable of stepping further into the space between user & programmer than they are currently allowed to by the walled-garden structure of applications. For instance, inter-process piping is a lot easier to conceptualize than the kinds of hacks that self-identified non-programmers regularly invent to solve common problems using spreadsheets; inter-process message-passing by gesture (like with the Alto) is conceptually similar to how musicians chain effects pedals & the only place it’s commonly used is music programs.
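For the non-programmers reading, here is a concrete instance of the kind of piping I mean. The task itself is made up; the point is that each stage is a small single-purpose program, and any one of them can be swapped out without touching the others:

```sh
# A made-up task: find the ten most common file extensions in a tree.
# Five small programs composed with pipes; replace any stage freely.
find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn | head
```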
There’s a longer piece waiting to be written about the disappearance of WIMP. It’s a pretty ground-shaking change in how we use computers.
For those intrigued by HyperCard, there is a modern-day descendant called LiveCode that can produce standalone software for Mac, Windows, Linux, iOS and Android. It is a joy to use, especially when you’re just building tools for your personal use, more focused on solving your own problems than on building the next unicorn thing. Nothing is faster, IMHO, than dragging and dropping a bunch of controls, writing some glue script, and scratching some itch. It can be (and is) used to ship real products on all platforms. I think it shines in desktop cross-platform development and for internal tools; those are my preferred use-cases for it.
It looks great to me. Pricey though, and very clearly targeted at devs. The partially unrealised beauty of HyperCard was that anyone could use it, and making Stacks didn’t feel like you were making software.
You can actually have it for free. They have an open source GPL version at https://livecode.org/. It is targeted at professional developers, but I know a ton of non-developers using it (I used to be a quite active member of their community) and teachers in the K-12 space using it as well, so you can still have that feeling of using stacks in HyperCard.
I’m curious as to what you’ve built with this (just to get an idea of what sorts of itch-scratching things are easily done with it).
I always use it when I need to transform data for some other stuff I am doing, or to scrape stuff for personal use, or to do some form of batch processing.
A recent example: I was building a little API backend server for a project not related to LiveCode, just normal web stuff. It was easier for me to cook up a little stack with buttons, fields and some diagnostics, and debug my little server while I was building it, than to use generic tools such as Insomnia.
Those generic tools are awesome, pretty and powerful, but my own handmade tools, which I create specifically for a given project, have the advantage of being tailored to whatever I am building. They might be ugly, but they are my special-purpose tools that help me develop and debug my own projects. There is a lot of value in being able to quickly come up with GUI tools to debug whatever you’re doing; the developer ergonomics are great.
Another example is when I had to do some complex batch renaming of thousands of files. The process involved a little crawler walking the folders on disk, inspecting some configuration files in each folder, and renaming the files inside. I could do this with visual feedback, progress bars, etc.
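For a rough idea of the underlying logic (minus the GUI), here is the same kind of crawl-and-rename sketched as a shell loop. The .rename.conf file and its prefix-only format are invented for the example; my real stack previewed each rename and showed progress as it went:

```sh
# Hypothetical scheme: any folder containing a .rename.conf gets the
# prefix stored in that file prepended to every file inside it.
find . -name .rename.conf | while read -r conf; do
  dir=$(dirname "$conf")
  prefix=$(cat "$conf")
  for f in "$dir"/*; do
    [ -f "$f" ] || continue                   # skip sub-directories
    mv "$f" "$dir/$prefix-$(basename "$f")"   # the stack showed this live
  done
done
```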
I am quite bad at design but I am scratching my itches, not selling itch scratchers. My little stacks help me a lot.
So is LiveCode a project of your own then?
No, it is not. I am just one of its users.
What the hell, @rain1.
Find something kinder to do with your time.
Bringing other people down while adding nothing to the conversation isn’t something we need here.
I miss classic MacOS a lot. I feel like many of the better ideas of the classic OS were lost in the NeXT takeover. I use and endorse the use of OS X (or whatever they’re calling it), but it’s really the best of a bad lot in terms of user interaction.
I am the opposite. I liked NeXTSTEP a lot, and feel like a lot of ideas were lost when it merged with the Mac. Although, to be honest, even more ideas were discarded more recently, well after the merge.
Just curious: what do you see as those most recent losses?
I’ll start with features lost during the NeXTSTEP (OPENSTEP, actually) to Mac OS X transition.
The biggest loss is of course detachable menus. Detachable menus meant any user could create whatever GUI they required out of any application. A dynamic GUI controlled by the user! For free!
IIRC Rhapsody (the pre-release intermediate version between OPENSTEP and Mac OS X Server 1.0) had a menu bar, like the subsequent Mac OS X; however, it still provided detachable menus, giving you the best of both worlds. I wish we had retained that.
On NeXTSTEP you could also run applications on one machine, and display their interface on another machine, sort of like X forwarding.
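The X11 equivalent still works today, if you want a feel for it; NeXT did this with Display PostScript rather than the X protocol (the host name below is a placeholder):

```sh
# Run a program on a remote machine, with its window drawn locally:
ssh -X user@remote.example.com xterm
```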
Now, about more recent changes: it’s not so much about removing existing functionality as about a shift in the culture. The model of interaction has shifted dramatically. NeXTSTEP was document-oriented, and multi-window applications were the norm. It was very common for applications to interact with one another. Today, the world-view is application-centric, and most applications are single-window. Support for full-screen applications is deeply ingrained in the system.
This is particularly visible with Photos.app, an application that essentially manages multiple documents, but which offers a single-window, application-centric interface. They even hid the documents (files); you won’t find them in Finder.app unless you know where to look inside the library container.
This transition has happened gradually, and it continues.
Tear-off menus were also a feature of the classic Mac OS (though they were nowhere near as universal or useful as the NeXT ones). I miss them both in macOS.
I think that interaction shift is ultimately down to the web, not the merger with the Mac. The original Mac was document-oriented as well (within an overriding “active app” paradigm). But people have come to expect to interact within a single window, either a browser or a mobile device, and that’s utterly changed desktop apps as well.
The thing I miss most from OS 9 is the tab drawers in the Finder. Well, the whole Finder really. And the Chooser. For all the stick it got, discovering local services was never so simple again.
There was a big focus on documents across the board in the 90s - I remember all of the Microsoft hoopla about Cairo and later WinFS that were supposed to enable it. IMHO we’ve gone backwards over the past few years - a lot of modern productivity applications make it more difficult to work with multiple documents.
I know I sound like a bit of a stuck record, but my perception is that desktop UIs have stagnated over the past 10-15 years, perhaps longer. In the 80s and 90s we had a lot of different platforms pushing different paradigms, but today we have Windows 10, fundamentally the same as Windows 95, and macOS 10.12, fundamentally the same as 10.0. Unfortunately the mainstream open source desktop platforms do their best to adopt/clone/tweak their closed source counterparts. Yes, we have the web, but a lot of it is just making prettier versions of the same desktop UIs we had in 1994.
They have absolutely stagnated. I don’t think documents are a fruitful route to take, to be honest — they’re a concept rooted in a world where most output was destined for a printer — but we can surely, surely have an alternative to the current paradigm.
Yes, documents as envisioned in the ’90s are dead. But I would want a data-oriented workflow, rather than an application-centric universe where each application manages its own opaque data silo, or manages some data stored somewhere unknown in the cloud.
I wish the concepts of data, data representation, and data storage were clearly delimited and independently manageable.
Of course Plan 9 is one possible incarnation of these concepts, albeit not one that can provide what people expect in this day and age. But there are other possible incarnations, I’m sure.
In Plan 9 the user is in control of where data resides and where the computation happens; the namespace is the mapping between these resources.
Data can be on some remote file server and be processed by the local CPU, or it can be on some local file server and be processed by a remote CPU; it works just as well. Of course, you would preferably choose to run the computation close to the data (both the data and the computation being remote), but you are in no way required to do so.
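A rough sketch of how that reads at a Plan 9 shell; the exact arguments depend on the setup, and the host names are invented:

```sh
# Bind a remote machine's file tree into the local namespace,
# then compute on it with the local CPU:
import fs.example.com /data /n/data
grep pattern /n/data/logs/*      # local CPU, remote data

# Or carry your namespace to a remote CPU server instead:
cpu -h cpu.example.com           # remote CPU, your namespace follows
```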
Of course typeless bag-of-bytes files are not what the general population expects today, but it doesn’t take much imagination to conceive of rich data providers communicating with data consumers through open protocols. The user can pick and choose who provides these independent services, or provide them (or some subset of them) himself.
The protocols can even be application-specific, as long as they are open. They don’t even need to be restricted to network I/O; perhaps it is desirable to run the computation closer to the data than the internet allows. I mean protocol in the general sense of two systems talking together through whatever means, which could even mean some form of running two services under the same kernel instance, as long as there is a standard way to do this.
And then the user can choose which application to use to display this data, managed by one system and processed by another, through another protocol (or the same one).
Classic Mac OS had the beginnings of this, and the unquestioned regression in handling of files and data from Classic Mac OS to OS X (file extensions? Are you FOR SERIOUS?) was the worst sin of the transition, in my opinion.
Am I missing some amazing rich features in the world of pagers? All I want is for something to pause the terminal when a screenful is reached, let me page back sometimes and that’s about it. more(1) does seem to handle that, primitive or not.
A variant of that feature I use pretty often in less is pressing & to search instead of /. That hides all lines except the matching ones, like an interactive grep.
Yes. Less has a few tricks. “shift >” will take you to the end of the file. “shift <” back to the beginning. “/” allows you to search. Typing “10” will take you to line 10. “?” allows you to search backwards. I actually use all of these features every day.
less can switch into and out of ‘follow’ mode with F, which means you get the benefit of a pager, but also of e.g. tail -f.
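For anyone collecting these, the thread’s less tips in one place (note that the line jump is typed as a count followed by g):

```sh
less +F /var/log/syslog   # open in follow mode; Ctrl-C drops back to paging
# Inside less:
#   /text    search forward        ?text   search backward
#   &text    show only matching lines (the interactive grep above)
#   g or <   go to the start       G or >  go to the end
#   10g      go to line 10         F       follow again, like tail -f
```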
Interesting. I didn’t support the guy’s views on gay marriage, but I was excited to see how someone with his technical expertise could lead the company. If it were up to me, I think he should have stayed if he was truly the best person for the job, as long as his views didn’t affect the way he did his job.
EDIT: Especially given the statement in his personal blog post a few days ago, I think he should’ve been given a better chance.
That post said he would not “ask for trust free of context” and should be allowed to “show, not tell”, but then his next actions were to refuse to discuss his position except in a 1:1 setting, and to essentially double down on his intolerance with a weird justification about Indonesia.
The showing was all too telling. It was really that which disqualified him from the job.
CEO, even of a technical company, is not simply a technical position; it’s a position as a leader, a figurehead, a frontman, an ambassador. Mozilla faces multiple challenges in the coming months and years that will require someone skilled at these sorts of social politics and at navigating passionately held views (cf. DRM, MP4, etc.).
That their CEO couldn’t even manage to navigate this with his own position - that he in effect put his head in the sand and refused to discuss it or even attempt to justify it - boded poorly for his ability to do it for the entire organisation.
The “if you disagree with me you’re oppressing poor Indonesians who can’t speak for themselves” thing is what really pushed me over the edge. What an asshole. I can’t imagine this having played out any other way, his response was pathetic and deserving of contempt.
My question: Does this new age of moral purity scale, or will we have to know all the politics of everyone we associate with?
“New age of moral purity”? Give me a break.
People in high profile positions are expected to avoid controversy, because nobody likes controversy. News at eleven.