Damn, do people get militant with their preferences.
The author covered important details and put them in perspective in a very fair manner. Yet both here and on the orange site, lots of people are being very ferocious in their debunking tone.
I learned C in the 90s, both on Windows and Linux. The user experience with Visual Studio 6 was superior to what we have today. Creating different kinds of projects with a wizard and inspecting the generated contents was a great way to learn. And linking libraries and such things just worked, glitch free.
On the other hand, getting things to compile on Linux was anything but intuitive for the learner. Even editing files was challenging. I don’t know why people feel the need to deny what was obvious.
I had dipped my toe into BASIC and batch files years before using msedit, and I completely agree with the author. It was very intuitive and self-explanatory, with the right amount of functionality.
Full disclaimer: I am one of those people that lives in the terminal.
I believe Visual Studio Code is the spiritual grandchild of msedit, Turbo Pascal, etc. They kept the UI paradigm simple, they didn’t throw the keyboard-centric user base away just because it’s a GUI, and they put extra effort into keeping it snappy, given it is a webview.
I had the opposite experience. I used the wizards in Visual Studio (mostly Visual C++ 5) and found it to be part way between two local optima. I used VB5 and it hid a lot of things in magic files (if you ever looked at all of the files in a VB project, a load of them were simply lists of field values) but I didn’t need to understand them. When I started writing *NIX software, there were no magic files and I needed to understand everything. VS generated a bunch of magic files for precompiled headers and for things that went into the resource compiler, but I actually needed to understand (and, sometimes, modify) them.
NeXT’s interface builder and Borland’s Windows IDEs were much better at maintaining this separation between things that were code and edited by humans and things that were expected to be edited in a rich editor and treated as opaque outside that editor.
I also prefer to program on Linux nowadays, to the point that I don’t find a Windows computer usable for a developer. And if I’m honest, even a Mac is barely useful, with all its silly heavy-handed deviations from what’s proven to work.
But times are different. Back then, with very limited, or even nonexistent, internet access, things needed to be not only intuitive, but discoverable and usable with the very first increments of knowledge.
Different distros placed libs and all sorts of things in different places. If your code didn’t compile for some reason, you couldn’t just install a missing dependency with a package manager. Or even Google for such information. Neither Google nor package managers existed. Even reading from a CD-ROM required messing with mount points and whatnot. Imagine a curious kid being required to learn all this.
What did you use as a text editor on Linux? I think the author is on point with his comparison between msedit and Emacs… And if we talk about vi, kids would restart their computer if they tried to check it out.
Or go back a bit further to the Lisp machines of the 1980s, or Smalltalk. Turbo Pascal and company were getting a fraction of that power wedged into a microcomputer. Or to NeXT with its graphical development environment that is basically what macOS still provides today. The fact that we’re talking about TUI IDEs from the early 1990s, years after the Macintosh and Amiga were released, tells you just how backwards PCs were.
The Unix systems never had well-designed interfaces or excellent tools. They just let workstation companies start with BSD Lite to save on OS development costs, and later POSIX ended up being required by policy in so many places that selling a Unix in the workstation market was easier.
I have never seen GUI programs (that use common GUI toolkits, at least) that can be used with the keyboard only. Some programs (especially IDEs) have lots of keyboard shortcuts, but there are still constant troubles with input focus: sometimes I have to press Tab a thousand times, sometimes Tab doesn’t even work, sometimes focus is there but poorly indicated. Or I just don’t know how to use such programs correctly.
I’m not a fan of unix terminal TUIs, however. Unlike DOS, keyboard input is always broken in terminals, especially alt and esc keys. Programs that use GUI, but are actually more like TUI, like graphical emacs, however, work fine.
I remember seeing TUI database programs in the 90s at points of sale, manager workspaces and so on, and remember that operators used these at very high speed and low latency. Compare that to today, where they have to reposition the mouse after each typed word, especially when using web-based UIs.
The Unix philosophy, from which Linux inherited a lot, does not go along well with the concept of an IDE like the ones that the author “wants”: you can absolutely get the same experience, but not with a single program that does everything for you, rather by combining a few.
“Linux is my IDE”, as they say: put together a tiling window manager (or tmux), a good modern terminal editor like Helix (or a well-configured emacs/vim/neovim with language server support) and a couple more small utilities and you have a beautiful IDE experience that’s IMHO on par with what you can get with VSCode+plug-ins. But “the OS is the IDE”, not one specific program.
I think you get a better experience without an IDE. But that is because we are already in possession of the knowledge required to work efficiently. I don’t want UI elements cluttering my screen, I don’t need to perform repetitive sequential clicking tasks if I can automate anything I want with a shell script, and so on.
The point is that I needed to learn all these things, and while it certainly paid off for me, it did require a big initial investment.
Linux comes from an industrial/professional background; Windows was mostly for home users. Even MS-DOS, while much more limited than UNIX, did what it claimed and was easy to grasp.
You’re missing the main point of the article, which is about UI.
First, GUIs enforced standardisation of UIs. GUIs took off thanks to the Mac in 1984, and original classic 68000 MacOS. Then, a year later, this came to the PC thanks to Windows.
Windows 1 was rubbish. Windows 2 was still very poor. Windows 2 co-evolved with and shared DNA with OS/2, and OS/2 1.x was pretty much junk too.
Windows 3 (1990) was actually OK and was a big hit. Everyone started to see GUIs everywhere. Thanks to GUIs and GUI programming guidelines, from 1990 or so, standardised UIs with GUI-style design came to DOS and commercial DOS apps as well.
That UI design was defined and specified by IBM and it’s called CUA.
On DOS, MS Word 1, 2, 3, 4 and 5.0 had a weird MS proprietary UI. DOS Word 5.5 (now freeware, so you can download it and try it in DOSbox) had a standard UI with a menu bar at the top, pull-down menus, dialog boxes, etc.
WordPerfect up to 4.2 had a weird proprietary UI. WordPerfect 5 had a standard CUA UI.
CUA came to databases such as FoxPro and Clipper. CUA came to presentation programs. CUA came to outliners such as GrandView. It came to everything.
Early (meaning 1980s) DOS apps had a different UI for every different program.
Late (meaning 1990s) DOS apps had a standardised uniform UI, with a design inherited from Windows, which inherited it from MacOS and IBM CUA.
This continued even as Win95 made DOS invisible, and then WinME made it inaccessible, and then WinXP went mainstream and didn’t contain DOS at all any more. Still, at the command prompt, apps had a standard UI and a standard look and feel, and if you knew how to drive Windows from the keyboard, you could drive these DOS and command-line Win32 apps just the same way.
(And of course if you had a mouse and it was configured in DOS then they worked with point-and-click just the same way.)
What the blog post is about is that just around the time that Linux was evolving, the GUI era of standardised UIs came to text-mode apps as well, and that was a huge benefit and made them much easier to use.
But the community busily building Linux with 1970s tools like Emacs and Vi didn’t notice this was happening. It would have been relatively easy to bolt a CUA-compliant TUI onto Emacs and Vi and Dselect and LinuxConf and make menuconfig and all the other text-mode Linux tools, but the Unix folks didn’t notice this trend of standardisation and harmonisation, so it didn’t happen.
So while on native PC OSes, text-mode apps got hugely better in the 1990s and they ended up attractive and usable, even if you didn’t have a GUI or the resources to run one, this never happened on Linux (or the BSDs) and they remain trapped in the hell of weird proprietary UIs for every full-screen app.
Goodness me that’s a very GUI-oriented take! Perhaps I’m misinterpreting the tone or what’s meant, but I have to strongly disagree with the (to me) rather pejorative take on how “Unix folks didn’t notice [the] trend of standardisation and harmonisation”, for example.
That phrasing suggests that such a trend was:
(a) indeed towards standardisation and harmonisation. Having used GUIs for decades, I’d suggest that this trend’s destination is still a way away
(b) even a good thing and something to which said Unix folks might aspire. Far from it, as those, like me, even today, prefer a command line based interface, would argue.
To your other delightfully pejorative reference to “busily building Linux with 1970s tools like Emacs and Vi” - I guess this also applies to folks today building our cloud infrastructure with 1970s concepts such as environment variables, shells, pipes, and the like. Do you think we should tell them?
(a) indeed towards standardisation and harmonisation. Having used GUIs for decades, I’d suggest that this trend’s destination is still a way away
TBH I think that desktop/laptop computer GUI design, and indeed website design, has regressed considerably in the last 25Y or so.
There was a sweet spot of harmony, back before smartphones and around the time of WinXP and the early releases of Ubuntu.
(b) even a good thing and something to which said Unix folks might aspire. Far from it, as those, like me, even today, prefer a command line based interface, would argue.
But nothing in this in any way, shape or form contradicts the CLI approach. The tools the blog post, and I, are talking about are shell tools you use on the console, or over ssh or whatever, and no GUI is needed or present.
To your other delightfully pejorative reference to “busily building Linux with 1970s tools like Emacs and Vi”
It is. I am making a reference to one of my own articles: “Fans of original gangster editors, look away now: It’s Tilde, a text editor that doesn’t work like it’s 1976.”
I have been using Vi since 1988. I utterly loathe it. I chose the 1976 date because that was the first release of the original vi.
I guess this also applies to folks today building our cloud infrastructure with 1970s concepts such as environment variables, shells, pipes, and the like. Do you think we should tell them?
I think we should tell them to modernise their full-screen tools to conform to 1990s UI standards. Then they will get more users and a better experience, because there is a standard for UIs and it is coming up on 40 years old now, and it’s time to get with the program.
Er, no it isn’t. It is the direct opposite of that.
Let me spell it out:
A major factor in the rise of desktop GUIs was that they strongly encouraged standardisation of UIs. Apple published the original Human Interface Guidelines to help this.
IBM studied these – an example – and created CUA. This is very much not GUI oriented and the original CUA guides, which I recently put some of online, focus on IBM mainframe apps for text-only terminals.
The CUA guidelines were also adopted in Windows and OS/2, even in OS/2 1.0, which did not have a GUI at all.
By the era when Windows 3.0 was making GUIs on the PC actually usable for the mass market, many PCs couldn’t usefully run Windows or people didn’t want it because they had a big installed base of DOS apps. The DOS app vendors kept selling upgrades to their DOS apps by making them CUA-compliant, and that’s why even big names like WordPerfect and Microsoft itself issued new text-based CUA-UI versions:
It gave a reason for existing customers to upgrade.
It made the apps easier to use and so attracted new users.
It allowed them to offer a unified UI across their editions (Windows, OS/2, MacOS, DOS, even the Amiga and ST) and that meant that help files and paper documentation could also be shared.
It was modern and it looked good – the core point of the original blog post at the top of this page.
This is not about Windows or Mac. It’s not about GUIs. It’s about how the rise of the Mac and of Windows influenced DOS and text-only apps. It’s about how text-only console-mode apps, especially on DOS but on other OSes as well, gradually adopted the standardisation happening in human-computer interface design. It’s about how the UI of non-GUI apps improved when it adopted the conventions of UIs.
And my point is that apps like Vi/Vim/Elvis/Neovim/whatever, and Emacs, and Joe/Pico/Nano/FreeBSD EE could benefit if they did exactly the same sort of modernisation that big industry players such as WordPerfect did when it moved from WP 4.x to WP 5.x, 35 years ago.
So long as there’s a way to go back to the old UI, even experienced users would not be inconvenienced for even a second.
Run Emacs -> is there a config file? Y: disable new UI / N: load new CUA UI -> old and new users are happy.
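For illustration only (this is a sketch, not a real Emacs patch): Emacs already ships a cua-mode with the familiar C-z/C-x/C-c/C-v bindings, so the decision above could look something like this in a hypothetical site-start file:

    ;; Sketch: first-run users get the CUA-style UI,
    ;; anyone with an existing config is left alone.
    (unless (file-exists-p (expand-file-name "~/.emacs"))
      (cua-mode 1)        ; standard cut/copy/paste/undo keys
      (menu-bar-mode 1))  ; keep the menu bar for discoverability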
I write for a living. I publish 1-2 technical articles per day, Mon-Fri, that are read by 10s of thousands of people, and I have been doing this (at varying frequencies) since 1995.
I can absolutely 100% promise to you that most people do not understand what they read. I get, at a very conservative estimate, one hundred times more responses from people who misunderstood what I wrote than grasped it.
You seemed, and TBH still seem, to be doing that.
I responded to a post about text-only programs’ UI with an appeal for standardisation of text-only UIs and you said it was “very GUI oriented”.
I carefully spelled out why it wasn’t, and how it need not affect users who know the old systems, and you haven’t even registered that: you’ve just repeated your accusation, defended your misunderstanding of my reasoning, and all but called me a troll.
WTF am I supposed to do?
I tried to carefully enumerate why what I was saying is not at all what you summarised it as, and you respond seconds later without answering a single thing I said.
The kindest response I can give to this is: please revise your comments carefully - editing is very important, even for a comments section. In my initial reading of your comment, I wasn’t even sure what its central thesis was, or what aspect of lorddimwit’s comment you were addressing. After rereading it, it seemed to be a springboard for a long history about CUA. It came off as meandering (why bring up Windows ME?), excessively long (a lot of that history is not relevant for what you want to say), and emotionally charged (anything opinionated often is, but even the history felt like you had an ax to grind). Just saying something like “I think consistent conventions made PC applications easier to use than contemporary Unix ones” would get your original point across in a single sentence.
The reply to qmacro was better at making your central thesis more obvious, but it felt accusatory and patronizing towards him. Not to mention it felt even more emotionally charged than the last comment. It’s the kind of attitude that makes people not want to reply.
If you have to bring up your professional credentials as a writer, it doesn’t reflect well on your writing.
My impression is that people are reading the title or first line of the post, not reading – or worse, skimming and thinking they are reading, but actually failing to successfully capture any gist – and reacting to their misinterpretation of what is being said.
I comment as such, and people do the same with the comments.
Calling this out is not reacting negatively IMHO. Pointing out to people that they are not understanding what they are commenting on is not a hostile reaction. Not all negativity is bad. It is necessary to be negative to achieve balance.
Yep, the author is looking at these tools through rose-tinted glasses. I started out with Turbo Pascal and C (and some 8-bit assemblers that were similarly integrated) but once I had regular access to Unix and Linux I never looked back.
I have extremely fond memories (not rose tinted) of my “IDE” in the late 1980’s working for a large energy company in London. Typical then was to run IBM mainframe hardware with the corresponding OSes - often a hypervisor like VM/CMS hosting DOS/VSE or, in my case, MVS/XA. And within MVS/XA, the online realtime (time-sharing) interactivity was provided by TSO/E, but then you got to the “IDE” which was the supremely flexible and powerful ISPF, combined with the ROSCOE editor and then access (from an ISPF panel) to JES2 to look at job queues, output and manage that content (if you’re curious, there are good screenshots to be had via e.g. Google image search for ISPF, ROSCOE and JES2).
Absolutely fantastic experience and yes, I count that as another example of a terminal based interactive development environment (in which I was extremely efficient) along the same lines as “UNIX is my IDE” (which I say now, using tmux, bash and neovim in dev containers). I interacted with my IDE through a 3278 green screen terminal.
One of the best parts was that you could extend the functionality of your environment by creating custom panels, and scripting them with REXX or CLIST. One example is where I created a REXX script to check the job class of a batch job that was going to run an (IMS) Batch Message Processing step; such jobs were only allowed to run in a certain job class, and so any inappropriate class allocation was caught on submission from within the (ROSCOE) editor, rather than minutes (or hours) later when the job was scheduled and then came back in error.
For comparison, in writing this post, I fired up Turbo C++ in DOSBox and I was able to create a “hello world” project and navigate the environment in minutes—all without prior knowledge (everything I had known has been forgotten by now). The environment is intuitive and, as an IDE, integrated all around.
I used to be in a class of relatively young people which had lab classes and exams done in Turbo C++. Needless to say, most people did not take to the environment well. It’s very easy to judge an environment by its merits when you already know what not to do.
Every time the fans in my state-of-the-art laptop spin up just because I am running an IDE (JetBrains stuff), I feel like the industry took a very wrong turn somewhere. I can understand it if it’s some initial index creation or something, but if it was just that I wouldn’t be complaining.
It seems the only alternative is not using an IDE.
And then there’s bugs upon bugs. Just to be clear that’s not just JetBrains. It’s been at least a decade since I felt like at least the ordinary day to day stuff worked fine.
Again the alternative seems to be not using an IDE.
So maybe that’s what I should go for. Helix looks really nice right now, and might be editor enough. I keep coming back to it, but the time in between appears to be just enough for me to not get used to it. Maybe a New Year’s resolution to change that would be in order.
Don’t get me wrong, I find the whole IDE vs Editor discussion a bit silly, especially because there isn’t a clear, obvious line, and in many situations people don’t use all the IDE features anyway, but do stuff that they could also do with some button or feature or plugin on the command line. I think, however, that every sufficiently complex piece of software has pain points, and sometimes it makes sense to take a step back and look somewhere else, like into the past, to get rid of some of them. With ideas from the past it might even be that the downsides have gone away: for example, higher resolution, a GUI, cheap fast internet, a computer mouse, more memory, more storage, windows, etc. might make an old idea a much better one today.
And related: over the last few years (think pandemic times) I ended up trying to confirm the suggestion that old stuff just seems good because of nostalgia, because it reminds you of a time when you had more time, because of selective memory, etc. So far I have barely been able to confirm that. Usually old stuff is still good, and in fact when it’s something like an open source project, or has somehow managed to keep developing, it’s actually a lot better. So if you have something old, and catch yourself thinking back at “how back then it just worked”, I highly suggest trying it! Either way you’ll know if it really was better, and maybe you’ll be surprised at how it’s even better nowadays.
So in short: try out old things. Ideas, software, etc. Maybe they end up being great, maybe they give you inspiration, maybe you just learn that they were bad in the first place.
I’ve been wondering recently what would happen by applying older principles to current IDEs.
One observation is back in the 80s, each language needed to include an editor, debugger, linker, etc, because DOS didn’t include any of it.
The UNIX toolchain seems designed to operate in memory constrained environments. Open an editor, change something, throw that state away. Compile one file, throw state away, move to the next. Reload state to link, then throw it away, etc. Typically we aren’t memory constrained anymore and these tradeoffs don’t make much sense. However as an industry we tend to “double down” on designs, so having taken this approach, compilers and linkers have become much larger and seek to optimize everything to an extreme degree since once state is loaded it makes sense to spend a bit of time working on it.
At the same time, editors have been integrating compiler frontends for syntax highlighting and code navigation. That ends up being memory intensive, because it wants to have navigation state for the entire project.
The next logical step is to integrate a backend, making it possible to compile code in real time. Particularly by reducing the extent of optimization, compiling a single function ought to take a millisecond or two - it’s possible to do this at the speed of keystrokes. Global changes, like structure definitions, can be tracked with a dependency graph, similar to a spreadsheet. Any source change should be able to efficiently determine what to compile.
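As a toy sketch of that spreadsheet-style idea (not from the comment; names are made up, and Emacs Lisp is used only because it is handy for illustration): a table records who uses what, and touching a definition marks only its transitive dependents for recompilation.

    ;; deps maps a definition to the definitions that use it.
    (defvar deps (make-hash-table :test #'equal))
    (defvar needs-recompile '())

    (defun record-use (user used)
      "Note that USER depends on USED."
      (push user (gethash used deps)))

    (defun touch (name)
      "Mark NAME and everything depending on it for recompilation."
      (unless (member name needs-recompile)
        (push name needs-recompile)
        (dolist (dependent (gethash name deps))
          (touch dependent))))

    ;; (record-use "draw-window" "struct-point")
    ;; (touch "struct-point") ; => needs-recompile: ("draw-window" "struct-point")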
If you have compiled in memory code that’s maintained in real time, a build step is just “save.” This also leads to interesting places like “saving” a binary that includes its own source code, so a compiled program can just be opened and edited. These files can record undo information, so it’d be possible to have a single file that has enough state to generate previous versions.
The nerd in me would really enjoy being able to open the IDE in itself, change source, and save a new version.
compiling a single function ought to take a millisecond or two - it’s possible to do this at the speed of keystrokes.
In Emacs and Slime, use C-c C-c to compile a function, C-c C-k to compile the whole file.
(Emacs isn’t the only decent editor nowadays)
Global changes, like structure definitions, can be tracked with a dependency graph, similar to a spreadsheet.
I think we saw this in Clojure land.
a build step is just “save.”
that’s save-lisp-and-die to save a CL image, with :executable t to save a binary. It looks like Smalltalk is even more a “ball of mud”.
(In CL we write in source files managed by a VCS and we deploy fresh binaries, not balls of mud -though we can develop against a running image in prod if that’s our thing…)
open the IDE in itself, change source, and save a new version.
DOS was horrible and I don’t miss the IDEs from that era but the TUI was nice. OpenWatcom includes a Vi implementation that sports a Turbo-style TUI. I use it on Windows for quick edits now and then.
It was in a lot of ways, but running it on a 21st century laptop is interesting.
It is so blindingly unbelievably fast, it is so tiny and fast and simple that it’s very easy to learn your way around, and there are an absolute tonne of apps out there for free.
it didn’t do much did it? :) TSRs were confusing. No way to find out if it loaded okay. I also didn’t understand the “stay resident” part so it was even more annoying. As soon as I learned about flat mode and found DJGPP+(CWS)DPMI I stopped using the Turbo IDEs. One of my favourite DOS programs was the Oberon System. I even started it from autoexec.bat so it felt like booting straight into it. So much good stuff packed in a floppy! Then switched to OS/2 for a few months and then finally to Slackware. Took a few more years to find the BSDs.
I have been meaning to play with Oberon on DOS, it’s true.
It didn’t do much and IME loading it up with TSRs made it worse, not better.
But some of the apps were great. Frankly MS Word 6 or WordPerfect 6 for DOS did all I needed, then and now. There were good spreadsheets, outliners, databases, networking clients to talk to everything, and that at the time was all I needed.
Slap a multitasker like DESQview on top and it was very workable.
I didn’t and don’t do much development, but TBH, I preferred QuickBASIC to any C ever. ;-)
From the title, I was expecting Visual Basic 6. A lot of functionality was enabled by going graphical, and that doesn’t necessarily mean it’s bloated. VS6 would run super fast nowadays.
The closest analogue I can think of is Pelles C. There is Geany, but it’s trying to be multilingual. There’s BlueJ, but Java is almost always a bit slower than a native solution. What are the best minimal IDEs that use GUIs?
Object Pascal has always had good IDEs. Delphi 7 was a gold standard. FreePascal has a bunch of free and open IDEs: Lazarus, mseide and fpGUI/IDE. MSE in particular was quite impressive because it was a one-man effort. It was so lightweight that you can launch the IDE within itself recursively and do inception style changes to it. Sadly Martin Schreiber passed away a few years ago.
There’s no reason not to use an IDE you like when editing files on remote machines. VS Code and Jetbrains editors have tools that make it no different than editing local files. You can open an entire project on a remote machine and avoid the bottleneck of network communication when editing, it only matters when you’re saving the files you have open, which happens infrequently. An advantage of this setup is that for example the remote machine’s cpu and memory are not being used up by the editor’s frontend (pretty important when editing on something like a RPi).
Unfortunately, those tend to be platform specific. I can’t use the VS Code/IDEA ones at work because of this, and because they’re proprietary, probably not holding out hope.
Instead, it basically abstracts away the transport layer and hides the details about data transfer and caching. It’s more generic than ssh and accessing remote files and supports smb, adb, sftp, sudo, docker, and a few others.
Almost every command that takes a file path will work with Tramp, so you can use magit with remote repositories, compile projects on remote machines, use dired to browse the filesystem, and even use the paths in eshell.
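A couple of concrete examples (host and paths are made up): the same remote-path syntax works anywhere Emacs expects a file name.

    ;; Open a remote file over ssh, exactly as C-x C-f would
    (find-file "/ssh:alice@myserver:/home/alice/src/app/main.c")
    ;; Multi-hop: ssh to the box, then edit a root-owned file via sudo
    (find-file "/ssh:alice@myserver|sudo::/etc/nginx/nginx.conf")
    ;; Browse a remote directory with dired
    (dired "/ssh:alice@myserver:/var/log/")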
The eshell integration is really useful, and you can run cd /ssh:whoever@myserver:~/src/whatever/ to effectively create a new ssh session, with emacs following your current directory around the remote machine just like it does locally.
The “server mode” kaveman mentioned is completely different and orthogonal to Tramp or remote files - it lets you have multiple connections to a single instance of emacs. The purpose is to allow new files opening outside of emacs to open in an already running instance of Emacs instead of launching a new copy of Emacs. By default it works with a Unix domain socket and doesn’t allow remote connections.
TRAMP brings the remote file system to emacs running locally, similar to sshfs. Emacs also has a server mode. VS Code actually ships a biggish blob to the remote host for relaying editing ops. The JetBrains remote editing blob weighs half a gig! The old NetBeans C++ plugin had good support for remote development over ssh with local caching. Sadly, it got left out of the Apache transition. Sam was designed for this use case and therefore is quite light and works well over ssh. The deadpixi version adds a bunch of conveniences (B command loading files from the remote host etc) but it is just an editor so no LSP. Support for plumbing would be quite useful. (@lorddimwit, hint hint).
TUIs are consistent and work years later, my .vimrc from over a decade ago still works. vi seems to be everywhere, and usually a ctags variant is available. I’ve written a lot of code through ssh/tmux/VIM, but I really appreciate having a non-TUI for the workflows.
Software has spent decades trying to recapture what Smalltalk had. Jetbrains IDEs are expensive but are overall the best experience I’ve used.
Checking assembly in CLion (similar experience with Rider and viewing IL):
Right click > “Show Assembly”
Dual panel view of the code/assembly for the current compiler/config settings, like a diff view, with lines colored and sweeps annotating which lines correspond to which assembly.
Edit code, hit refresh and verify assembly changes.
LLM usage in CLion is like a weird cross between a code review and pair programming with an AI:
Prompt for code generation
Diff view appears
Update prompt, use diff-view like I would normally when making a commit to finalize output
As I mentioned on the orange site, I worked through that period, but never used the TUI IDEs, preferring to use the CLI compilers, and distinct editors. This was possibly because I was cross developing for embedded systems, and the TUI environments were of little to no help in such a case.
So it was CLI compilers, makefiles, and either BRIEF or a DOS based version of vi. The knowledge gained from working that way was immediately transportable and applicable to any other environment, employer, and task. Whereas the TUI based stuff was not.
Using BRIEF was nice because one could “compile” a makefile, and get the main integration benefit of the TUI systems - jump to first or next error. However, testing always involved leaving the environment to install things on the target system. I’d generally use BRIEF, but would occasionally switch to vi if the task was too awkward in BRIEF.
Moreover the “70s” based tools were inherently scriptable, and hence extensible, whereas each of the TUI based things was its own little playpen.
There was a post recently about building up an Emacs configuration from scratch. I commented about one early decision in that post, disabling Emacs’s menu bar, as being hostile to new users, since the menu bar is great for feature discovery.
I got strongly similar vibes from this article’s praise of DOS TUIs.
Damn, do people get militant with their preferences. The author covered important details and put them in perspective in a very fair manner. Yet both here and on the orange site, lots of people are being very ferocious in their debunking tone.
I learned C in the 90s, both on Windows and Linux. The user experience with Visual Studio 6 was superior to what we have today. Creating different kinds of projects with a wizard and inspecting the generated contents was a great way to learn. And linking libraries and such things just worked, glitch free. On the other hand, getting things to compile on Linux was anything but intuitive for the learner. Even editing files was challenging. I don’t know why people feel the need to deny what was obvious. I had dipped my toe into BASIC and batch files years before using msedit, and I completely agree with the author. It was very intuitive and self-explanatory, with the right amount of functionality.
Full disclaimer: I am one of those people that lives in the terminal.
I believe Visual Studio Code is the spiritual grandchild of msedit, Turbo Pascal, etc. They kept the UI paradigm simple, they didn’t throw the keyboard-centric user base away just because it’s a GUI, and they put extra effort into keeping it snappy, given it is a webview.
I had the opposite experience. I used the wizards in Visual Studio (mostly Visual C++ 5) and found it to be part way between two local optima. I used VB5 and it hid a lot of things in magic files (if you ever looked at all of the files in a VB project, a load of them were simply lists of field values) but I didn’t need to understand them. When I started writing *NIX software, there were no magic files and I needed to understand everything. VS generated a bunch of magic files for precompiled headers and for things that went into the resource compiler, but I actually needed to understand (and, sometimes, modify) them.
NeXT’s interface builder and Borland’s Windows IDEs were much better at maintaining this separation between things that were code and edited by humans and things that were expected to be edited in a rich editor and treated as opaque outside that editor.
I also prefer to program on Linux nowadays, to the point that I don’t find a Windows computer usable for a developer. And if I’m honest, even a Mac is barely useful, with all its silly heavy-handed deviations from what’s proven to work.
But times are different. Back then, with very limited, or even nonexistent, internet access, things needed to be not only intuitive, but discoverable and usable with the very first increments of knowledge.
Different distros placed libs and all sorts of things in different places. If your code didn’t compile for some reason, you couldn’t just install a missing dependency with a package manager. Or even Google for such information. Neither Google nor package managers existed. Even reading from a CD-ROM required messing with mount points and whatnot. Imagine a curious kid being required to learn all this.
What did you use as a text editor on Linux? I think the author is on point with his comparison between msedit and Emacs… And if we talk about vi, kids would restart their computer if they tried to check it out.
Or go back a bit further to the Lisp machines of the 1980s, or Smalltalk. Turbo Pascal and company were getting a fraction of that power wedged into a microcomputer. Or to NeXT with its graphical development environment that is basically what macOS still provides today. The fact that we’re talking about TUI IDEs from the early 1990s, years after the Macintosh and Amiga were released, tells you just how backwards PCs were.
The Unix systems never had well-designed interfaces or excellent tools. They just let workstation companies start with BSD Lite to save on OS development costs, and later POSIX ended up being required by policy in so many places that selling a Unix in the workstation market was easier.
’80s, ’90s, ’00s, ’10s, ’20s & I still want to be in the TUI regardless of decade
Why?
I have never seen GUI programs (that use common GUI toolkits, at least) that can be used with the keyboard only. Some programs (especially IDEs) have lots of keyboard shortcuts, but there are still constant troubles with input focus: sometimes I have to press Tab a thousand times, sometimes Tab doesn’t even work, sometimes focus is there but poorly indicated. Or I just don’t know how to use such programs correctly.
I’m not a fan of unix terminal TUIs, however. Unlike DOS, keyboard input is always broken in terminals, especially alt and esc keys. Programs that use GUI, but are actually more like TUI, like graphical emacs, however, work fine.
I remember seeing TUI database programs in the 90s at points of sale, manager workspaces and so on, and remember that operators used these at very high speed and low latency. Compare that to today, where they have to reposition the mouse after each typed word, especially when using web-based UIs.
That’s reasonable. It’s more that text mode makes certain bad decisions unavailable.
Has all the features I need, generally very lightweight, works the same over a network, is great for pairing
TUI apps generally don’t force their terrible color and font choices on you and don’t waste screen space with useless toolbars and margins.
The Unix philosophy, from which Linux inherited a lot, does not go along well with the concept of an IDE like the ones that the author “wants”: you can absolutely get the same experience, but not with a single program that does everything for you, rather by combining a few.
“Linux is my IDE”, as they say: put together a tiling window manager (or tmux), a good modern terminal editor like Helix (or a well-configured emacs/vim/neovim with language server support) and a couple more small utilities and you have a beautiful IDE experience that’s IMHO on par with what you can get with VSCode+plug-ins. But “the OS is the IDE”, not one specific program.
I think you get a better experience without an IDE. But that is because we are already in possession of the knowledge required to work efficiently. I don’t want UI elements cluttering my screen, I don’t need to perform repetitive sequential clicking tasks if I can automate anything I want with a shell script, and so on.
The point is that I needed to learn all these things, and while it certainly paid off for me, it did require a big initial investment. Linux comes from an industrial/professional background; Windows was mostly for home users. Even MS-DOS, while much more limited than UNIX, did what it claimed and was easy to grasp.
Hot take, partial shitpost.
IDEs like those in TFA were necessary because multitasking wasn’t available or mature enough. Unix was (and is) my IDE.
You’re missing the main point of the article, which is about UI.
First, GUIs enforced standardisation of UIs. GUIs took off thanks to the Mac in 1984, and original classic 68000 MacOS. Then, a year later, this came to the PC thanks to Windows.
Windows 1 was rubbish. Windows 2 was still very poor. Windows 2 co-evolved with and shared DNA with OS/2, and OS/2 1.x was pretty much junk too.
Windows 3 (1990) was actually OK and was a big hit. Everyone started to see GUIs everywhere. Thanks to GUIs and GUI programming guidelines, from 1990 or so, standardised UIs with GUI-style design came to DOS and commercial DOS apps as well.
That UI design was defined and specified by IBM and it’s called CUA.
On DOS, MS Word 1, 2, 3, 4 and 5.0 had a weird MS proprietary UI. DOS Word 5.5 (now freeware, so you can download it and try it in DOSbox) had a standard UI with a menu bar at the top, pull-down menus, dialog boxes, etc.
WordPerfect up to 4.2 had a weird proprietary UI. WordPerfect 5 had a standard CUA UI.
CUA came to databases such as FoxPro and Clipper. CUA came to presentation programs. CUA came to outliners such as GrandView. It came to everything.
Early (meaning 1980s) DOS apps had a different UI for every different program.
Late (meaning 1990s) DOS apps had a standardised uniform UI, with a design inherited from Windows, which inherited it from MacOS and IBM CUA.
This continued even as Win95 made DOS invisible, and then WinME made it inaccessible, and then WinXP went mainstream and didn’t contain DOS at all any more. Still, at the command prompt, apps had a standard UI and a standard look and feel, and if you knew how to drive Windows from the keyboard, you could drive these DOS and command-line Win32 apps just the same way.
(And of course if you had a mouse and it was configured in DOS then they worked with point-and-click just the same way.)
What the blog post is about is that just around the time that Linux was evolving, the GUI era of standardised UIs came to text-mode apps as well, and that was a huge benefit and made them much easier to use.
But the community busily building Linux with 1970s tools like Emacs and Vi didn’t notice this was happening. It would have been relatively easy to bolt a CUA-compliant TUI onto Emacs and Vi and Dselect and LinuxConf and make menuconfig and all the other text-mode Linux tools, but the Unix folks didn’t notice this trend of standardisation and harmonisation, so it didn’t happen.
So while on native PC OSes, text-mode apps got hugely better in the 1990s and they ended up attractive and usable, even if you didn’t have a GUI or the resources to run one, this never happened on Linux (or the BSDs) and they remain trapped in the hell of weird proprietary UIs for every full-screen app.
I have written about the same thing myself:
Fans of original gangster editors, look away now: It’s Tilde, a text editor that doesn’t work like it’s 1976.
Goodness me that’s a very GUI-oriented take! Perhaps I’m misinterpreting the tone or what’s meant, but I have to strongly disagree with the (to me) rather pejorative take on how “Unix folks didn’t notice [the] trend of standardisation and harmonisation”, for example.
That phrasing suggests that such a trend was:
(a) indeed towards standardisation and harmonisation. Having used GUIs for decades, I’d suggest that this trend’s destination is still a way away
(b) even a good thing and something to which said Unix folks might aspire. Far from it, as those, like me, even today, prefer a command line based interface, would argue.
To your other delightfully pejorative reference to “busily building Linux with 1970s tools like Emacs and Vi” - I guess this also applies to folks today building our cloud infrastructure with 1970s concepts such as environment variables, shells, pipes, and the like. Do you think we should tell them?
To engage with your later points:
TBH I think that desktop/laptop computer GUI design, and indeed website design, has regressed considerably in the last 25Y or so.
There was a sweet spot of harmony, back before smartphones and around the time of WinXP and the early releases of Ubuntu.
But nothing in this in any way, shape or form contradicts the CLI approach. The tools the blog post, and I, are talking about are shell tools you use on the console, or over ssh or whatever, and no GUI is needed or present.
It is. I am making a reference to one of my own articles:
Fans of original gangster editors, look away now: It’s Tilde, a text editor that doesn’t work like it’s 1976.
I have been using Vi since 1988. I utterly loathe it. I chose the 1976 date because that was the first release of the original vi.
I think we should tell them to modernise their full-screen tools to conform to 1990s UI standards. Then they will get more users and a better experience, because there is a standard for UIs and it is coming up on 40 years old now, and it’s time to get with the program.
Er, no it isn’t. It is the direct opposite of that.
Let me spell it out:
A major factor in the rise of desktop GUIs was that they strongly encouraged standardisation of UIs. Apple published the original Human Interface Guidelines to help this.
IBM studied these – an example – and created CUA. This is very much not GUI oriented and the original CUA guides, which I recently put some of online, focus on IBM mainframe apps for text-only terminals.
The CUA guidelines were also adopted in Windows and OS/2, even in OS/2 1.0, which did not have a GUI at all.
By the era when Windows 3.0 was making GUIs on the PC actually usable for the mass market, many PCs couldn’t usefully run Windows or people didn’t want it because they had a big installed base of DOS apps. The DOS app vendors kept selling upgrades to their DOS apps by making them CUA-compliant, and that’s why even big names like WordPerfect and Microsoft itself issued new text-based CUA-UI versions:
This is not about Windows or Mac. It’s not about GUIs. It’s about how the rise of the Mac and of Windows influenced DOS and text-only apps. It’s about how text-only console-mode apps, especially on DOS but on other OSes as well, gradually adopted the standardisation happening in human-computer interface design. It’s about how the UI of non-GUI apps improved when it adopted the conventions of UIs.
And my point is that apps like Vi/Vim/Elvis/Neovim/whatever, and Emacs, and Joe/Pico/Nano/FreeBSD EE could benefit if they did exactly the same sort of modernisation that big industry players such as WordPerfect did when it moved from WP 4.x to WP 5.x, 35 years ago.
So long as there’s a way to go back to the old UI, even experienced users would not be inconvenienced for even a second.
Run Emacs -> is there a config file? Y: disable new UI / N: load new CUA UI -> old and new users are happy.
Wow, thanks for spelling it out for me. I wasn’t capable of reading text and interpreting it for myself.
I write for a living. I publish 1-2 technical articles per day, Mon-Fri, that are read by 10s of thousands of people, and I have been doing this (at varying frequencies) since 1995.
I can absolutely 100% promise to you that most people do not understand what they read. I get, at a very conservative estimate, one hundred times more responses from people who misunderstood what I wrote than grasped it.
You seemed, and TBH still seem, to be doing that.
I responded to a post about text-only programs’ UI with an appeal for standardisation of text-only UIs and you said it was “very GUI oriented”.
I carefully spelled out why it wasn’t, and how it need not affect users who know the old systems, and you haven’t even registered that: you’ve just repeated your accusation, defended your misunderstanding of my reasoning, and all but called me a troll.
WTF am I supposed to do?
I tried to carefully enumerate why what I was saying is not at all what you summarised it as, and you respond seconds later without answering a single thing I said.
:’(
The kindest response I can give to this is: please revise your comments carefully - editing is very important, even for a comments section. In my initial reading of your comment, I wasn’t even sure what its central thesis was, or what aspect of lorddimwit’s comment you were addressing. After rereading it, it seemed to be a springboard for a long history about CUA. It came off as meandering (why bring up Windows ME?), excessively long (a lot of that history is not relevant for what you want to say), and emotionally charged (anything opinionated often is, but even the history felt like you had an ax to grind). Just saying something like “I think consistent conventions made PC applications easier to use than contemporary Unix ones” would get your original point across in a single sentence.
The reply to qmacro was better at making your central thesis more obvious, but it felt accusatory and patronizing towards him. Not to mention it felt even more emotionally charged than the last comment. It’s the kind of attitude that makes people not want to reply.
If you have to bring up your professional credentials as a writer, it doesn’t reflect well on your writing.
My impression is that people are reading the title or first line of the post, not reading – or worse, skimming and thinking they are reading, but actually failing to successfully capture any gist – and reacting to their misinterpretation of what is being said.
I comment as such, and people do the same with the comments.
Calling this out is not reacting negatively IMHO. Pointing out to people that they are not understanding what they are commenting on is not a hostile reaction. Not all negativity is bad. It is necessary to be negative to achieve balance.
Wow, there seems to be a lot of anger there. That’s partially why I am limiting my responses.
Yep, the author is looking at these tools through rose-tinted glasses. I started out with Turbo Pascal and C (and some 8-bit assemblers that were similarly integrated) but once I had regular access to Unix and Linux I never looked back.
I have extremely fond memories (not rose tinted) of my “IDE” in the late 1980’s working for a large energy company in London. Typical then was to run IBM mainframe hardware with the corresponding OSes - often a hypervisor like VM/CMS hosting DOS/VSE or, in my case, MVS/XA. And within MVS/XA, the online realtime (time-sharing) interactivity was provided by TSO/E, but then you got to the “IDE” which was the supremely flexible and powerful ISPF, combined with the ROSCOE editor and then access (from an ISPF panel) to JES2 to look at job queues, output and manage that content (if you’re curious, there are good screenshots to be had via e.g. Google image search for ISPF, ROSCOE and JES2).
Absolutely fantastic experience and yes, I count that as another example of a terminal based interactive development environment (in which I was extremely efficient) along the same lines as “UNIX is my IDE” (which I say now, using tmux, bash and neovim in dev containers). I interacted with my IDE through a 3278 green screen terminal.
One of the best parts was that you could extend the functionality of your environment by creating custom panels, and scripting them with REXX or CLIST. One example is where I created a REXX script to check the job class of a batch job that was going to run an (IMS) Batch Message Processing step; such jobs were only allowed to run in a certain job class, and so any inappropriate class allocation was caught on submission from within the (ROSCOE) editor, rather than minutes (or hours) later when the job was scheduled and then came back in error.
I do miss those mainframe-based IDE times.
I used to be in a class of relatively young people which had lab classes and exams done in Turbo C++. Needless to say, most people did not take to the environment well. It’s very easy to judge an environment by its merits when you already know what not to do.
Every time the fans in my state-of-the-art laptop spin up just because I am running an IDE (JetBrains stuff), I feel like the industry took a very wrong turn somewhere. I can understand it if it’s some initial index creation or something, but if it was just that I wouldn’t be complaining.
It seems the only alternative is not using an IDE.
And then there’s bugs upon bugs. Just to be clear that’s not just JetBrains. It’s been at least a decade since I felt like at least the ordinary day to day stuff worked fine.
Again the alternative seems to be not using an IDE.
So maybe that’s what I should go for. Helix looks really nice right now, and might be editor enough. I keep coming back to it, but the time in between appears to be just enough for me to not get used to it. Maybe a New Year’s resolution to change that would be in order.
Don’t get me wrong, I find the whole IDE vs Editor discussion a bit silly, especially because there isn’t a clear, obvious line, and in many situations people don’t use all the IDE features anyway, but do stuff that they could also do with some button or feature or plugin on the command line. I think, however, that every sufficiently complex piece of software has pain points, and sometimes it makes sense to take a step back and look somewhere else, like into the past, to get rid of some of them. With ideas from the past it might even be that the downsides have gone away: for example, higher resolution, a GUI, cheap fast internet, a computer mouse, more memory, more storage, windows, etc. might make an old idea a much better one today.
And related: over the last few years (think pandemic times) I ended up trying to confirm the suggestion that old stuff just seems good because of nostalgia, because it reminds you of a time when you had more time, because of selective memory, etc. So far I have barely been able to confirm that. Usually old stuff is still good, and in fact when it’s something like an open source project, or has somehow managed to keep developing, it’s actually a lot better. So if you have something old, and catch yourself thinking back at “how back then it just worked”, I highly suggest trying it! Either way you’ll know if it really was better, and maybe you’ll be surprised at how it’s even better nowadays.
So in short: try out old things. Ideas, software, etc. Maybe they end up being great, maybe they give you inspiration, maybe you just learn that they were bad in the first place.
I’ve been wondering recently what would happen by applying older principles to current IDEs.
One observation is back in the 80s, each language needed to include an editor, debugger, linker, etc, because DOS didn’t include any of it.
The UNIX toolchain seems designed to operate in memory constrained environments. Open an editor, change something, throw that state away. Compile one file, throw state away, move to the next. Reload state to link, then throw it away, etc. Typically we aren’t memory constrained anymore and these tradeoffs don’t make much sense. However as an industry we tend to “double down” on designs, so having taken this approach, compilers and linkers have become much larger and seek to optimize everything to an extreme degree since once state is loaded it makes sense to spend a bit of time working on it.
At the same time, editors have been integrating compiler frontends for syntax highlighting and code navigation. That ends up being memory intensive, because it wants to have navigation state for the entire project.
The next logical step is to integrate a backend, making it possible to compile code in real time. Particularly by reducing the extent of optimization, compiling a single function ought to take a millisecond or two - it’s possible to do this at the speed of keystrokes. Global changes, like structure definitions, can be tracked with a dependency graph, similar to a spreadsheet. Any source change should be able to efficiently determine what to compile.
If you have compiled in memory code that’s maintained in real time, a build step is just “save.” This also leads to interesting places like “saving” a binary that includes its own source code, so a compiled program can just be opened and edited. These files can record undo information, so it’d be possible to have a single file that has enough state to generate previous versions.
The nerd in me would really enjoy being able to open the IDE in itself, change source, and save a new version.
Look into Smalltalk: you can change both the language and the IDE from inside it, but I don’t understand how it would work in a team setting.
Some of these things exist in Common Lisp:
In Emacs and Slime, use C-c C-c to compile a function, C-c C-k to compile the whole file.
(Emacs isn’t the only decent editor nowadays)
I think we saw this in Clojure land.
that’s save-lisp-and-die to save a CL image, with :executable t to save a binary. It looks like Smalltalk is even more a “ball of mud”.
(In CL we write in source files managed by a VCS and we deploy fresh binaries, not balls of mud -though we can develop against a running image in prod if that’s our thing…)
That’s LispWorks, AllegroCL or Lem https://lem-project.github.io/ (for CL and other languages with its LSP).
DOS was horrible and I don’t miss the IDEs from that era but the TUI was nice. OpenWatcom includes a Vi implementation that sports a Turbo-style TUI. I use it on Windows for quick edits now and then.
It was in a lot of ways, but running it on a 21st century laptop is interesting.
It is so blindingly unbelievably fast, it is so tiny and fast and simple that it’s very easy to learn your way around, and there are an absolute tonne of apps out there for free.
it didn’t do much did it? :) TSRs were confusing. No way to find out if it loaded okay. I also didn’t understand the “stay resident” part so it was even more annoying. As soon as I learned about flat mode and found DJGPP+(CWS)DPMI I stopped using the Turbo IDEs. One of my favourite DOS programs was the Oberon System. I even started it from autoexec.bat so it felt like booting straight into it. So much good stuff packed in a floppy! Then switched to OS/2 for a few months and then finally to Slackware. Took a few more years to find the BSDs.
I have been meaning to play with Oberon on DOS, it’s true.
It didn’t do much and IME loading it up with TSRs made it worse, not better.
But some of the apps were great. Frankly MS Word 6 or WordPerfect 6 for DOS did all I needed, then and now. There were good spreadsheets, outliners, databases, networking clients to talk to everything, and that at the time was all I needed.
Slap a multitasker like DESQview on top and it was very workable.
I didn’t and don’t do much development, but TBH, I preferred QuickBASIC to any C ever. ;-)
From the title, I was expecting Visual Basic 6. A lot of functionality was enabled by going graphical, and that doesn’t necessarily mean it’s bloated. VS6 would run super fast nowadays.
The closest analogue I can think of is Pelles C. There is Geany, but it’s trying to be multilingual. There’s BlueJ, but Java is almost always a bit slower than a native solution. What are the best minimal IDEs that use GUIs?
Object Pascal has always had good IDEs. Delphi 7 was a gold standard. FreePascal has a bunch of free and open IDEs: Lazarus, mseide and fpGUI/IDE. MSE in particular was quite impressive because it was a one-man effort. It was so lightweight that you can launch the IDE within itself recursively and do inception style changes to it. Sadly Martin Schreiber passed away a few years ago.
There’s no reason not to use an IDE you like when editing files on remote machines. VS Code and Jetbrains editors have tools that make it no different than editing local files. You can open an entire project on a remote machine and avoid the bottleneck of network communication when editing, it only matters when you’re saving the files you have open, which happens infrequently. An advantage of this setup is that for example the remote machine’s cpu and memory are not being used up by the editor’s frontend (pretty important when editing on something like a RPi).
Unfortunately, those tend to be platform specific. I can’t use the VS Code/IDEA ones at work because of this, and because they’re proprietary, probably not holding out hope.
FWIW Emacs has had this feature practically forever.
But Emacs runs great on the RPi, too.
Does TRAMP actually run the editor on the other end, or just filesystem munging stuff, equivalent to i.e. sshfs in VS Code?
It doesn’t run the editor on the other end.
Instead, it basically abstracts away the transport layer and hides the details about data transfer and caching. It’s more generic than ssh and accessing remote files and supports smb, adb, sftp, sudo, docker, and a few others.
Almost every command that takes a file path will work with Tramp, so you can use magit with remote repositories, compile projects on remote machines, use dired to browse the filesystem, and even use the paths in eshell.
The eshell integration is really useful, and you can run cd /ssh:whoever@myserver:~/src/whatever/ to effectively create a new ssh session, with emacs following your current directory around the remote machine just like it does locally.
The “server mode” kaveman mentioned is completely different and orthogonal to Tramp or remote files - it lets you have multiple connections to a single instance of emacs. The purpose is to allow new files opening outside of emacs to open in an already running instance of Emacs instead of launching a new copy of Emacs. By default it works with a Unix domain socket and doesn’t allow remote connections.
TRAMP brings the remote file system to emacs running locally, similar to sshfs. Emacs also has a server mode. VS Code actually ships a biggish blob to the remote host for relaying editing ops. The JetBrains remote editing blob weighs half a gig! The old NetBeans C++ plugin had good support for remote development over ssh with local caching. Sadly, it got left out of the Apache transition. Sam was designed for this use case and therefore is quite light and works well over ssh. The deadpixi version adds a bunch of conveniences (B command loading files from the remote host etc) but it is just an editor so no LSP. Support for plumbing would be quite useful. (@lorddimwit, hint hint).
TRAMP hooks into the emacs shell command infrastructure as well as filesystem access, so things like M-x compile and magit work on a remote system.
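For example (paths made up), binding default-directory to a Tramp path is enough to make the compilation run on the remote host; M-x compile does the same when the current buffer visits a remote file:

    (let ((default-directory "/ssh:alice@myserver:/home/alice/src/app/"))
      (compile "make -k"))  ; runs make on myserver, errors land in the local *compilation* buffer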
No mention of CodeWarrior? I used it on Mac OS in the 90s, which, gasp, is about 30 years ago. It was also on BeOS at the time.
TUIs are consistent and work years later, my .vimrc from over a decade ago still works. vi seems to be everywhere, and usually a ctags variant is available. I’ve written a lot of code through ssh/tmux/VIM, but I really appreciate having a non-TUI for the workflows.
Software has spent decades trying to recapture what Smalltalk had. Jetbrains IDEs are expensive but are overall the best experience I’ve used.
Checking assembly in CLion (similar experience with Rider and viewing IL):
LLM usage in CLion is like a weird cross between a code review and pair programming with an AI:
As I mentioned on the orange site, I worked through that period, but never used the TUI IDEs, preferring to use the CLI compilers, and distinct editors. This was possibly because I was cross developing for embedded systems, and the TUI environments were of little to no help in such a case.
So it was CLI compilers, makefiles, and either BRIEF or a DOS based version of vi. The knowledge gained from working that way was immediately transportable and applicable to any other environment, employer, and task. Whereas the TUI based stuff was not.
Using BRIEF was nice because one could “compile” a makefile, and get the main integration benefit of the TUI systems - jump to first or next error. However, testing always involved leaving the environment to install things on the target system. I’d generally use BRIEF, but would occasionally switch to vi if the task was too awkward in BRIEF.
Moreover the “70s” based tools were inherently scriptable, and hence extensible, whereas each of the TUI based things was its own little playpen.
There was a post recently about building up an Emacs configuration from scratch. I commented about one early decision in that post, disabling Emacs’s menu bar, as being hostile to new users, since the menu bar is great for feature discovery.
I got strongly similar vibes from this article’s praise of DOS TUIs.