1. 4

    That is possibly the nerdiest talk I have ever seen. I loved it. :-D Thank you to all the presenters!

    It is interesting that there’s a considerable degree of overlap with my own talk at FOSDEM back in 2018… Lisp machines, Genera, Smalltalk, single-level store and IBM i, and more…

    1. 4

      I somehow was under the impression that 9front was the current locus of Plan 9 development activity.

      Is that incorrect?

      1. 4
        1. 2

          9front is a fork of plan9. It is active, yes, but so is 9legacy.

          9front also has questionable ethics and a “flavourful” aesthetic.

          1. 1

            9front also has questionable ethics

            If you are referring to the (breathtakingly tasteless) image that they had in their FQA file, it is gone. That is good. I think it was a terrible idea but I do not think it was in any way an endorsement of what it depicted. I think some people just have a very much more robust idea of what is funny or acceptable than others.

          2. 1

            It is an oversimplification, at least.

            In addition to 9front and 9legacy, I am aware of:

            The latter 2 are certainly active, and all of them other than 9atom seem to be active to some degree.

          1. 12

            Can we please stop using tabs altogether (the last vestigial remnant of MDI) and move towards BeOS’s Stack paradigm, in which each window title is “a tab” and you can stack together different windows?

            The stack paradigm is easier for the users: one concept (windows) instead of two similar-but-different concepts (windows and tab-inside-windows).

            A graphical example: https://www.haiku-os.org/docs/userguide/en/images/gui-images/gui-s+t.gif

            1. 7

              Every time I see tabs mentioned, I think about window management. The window manager is too weak, and therefore the applications themselves had to step in and invent their own.

              So basically agreeing :)

              1. 5

                Counterpoint: there are two major kinds of tabs, which the article seems to think about as dynamic and static. I would call them task-windows and multi-rooted trees.

                A task-window is the kind of tab that MDI and browsers use: effectively a complete copy of the application main window, but space-compressed. A perfect window manager might be the best way to handle these, but it would have to be good enough for every application’s needs. I haven’t met a perfect window manager yet, but I haven’t seen them all.

                A multi-rooted tree is most often found in giant config/preference systems, e.g. VLC and LibreOffice. It could be represented as a single-rooted tree, but the first-level branches are trunk-sized and so independent that a user will often only want to tweak things in one branch. Separating them out into tabs is a pretty reasonable abstraction. It’s not the only way of breaking the tree up, but it maps nicely from category to tab, subcategory to delineated section.

                1. 3

                  Another counterpoint is Firefox with a lot of tabs. There are some optimizations that Firefox can do because it has access to domain knowledge. Like not loading all of the tabs on start. It could probably unload tabs as well. In order to do that the window manager needs to expose a richer interface.

              2. 4

                Microsoft was working on a feature called Sets for Windows 10 that would basically do this. I was very sad to learn that the project was axed though even after making it into an Insiders build :(

                1. 4

                  Former BeOS fan here.

                  Please no. :-(

                  This is all IMHO, but…

                  There are at least 2 different & separate usage cases here.

                  № 1, I am in some kind of editor or creation app & I want multiple documents open so I can, say, cut-and-paste between them. If so, I probably mainly want source & destination, or main workpiece & overflow. Here, title-bar tabs work.

                  № 2, I’m in a web browser, where my normal usage pattern goes: home page → lots of tabs (50+) → back down to a few → back to lots of tabs (and repeat)

                  In this instance, tabs at the top no longer work. They shrink to invisibility or unreadability. In this use case, I want them on the side, where I can read lots of lines of text in a column of a fixed width. Hierarchical (as in Tree-Style Tabs) is clever, yes, but I don’t need it: I already have a hierarchy: a browser window, and inside that, tabs. Those 2 levels are enough; I rarely need more than 2 or 3 browser windows, so I don’t need lots of levels of hierarchy in the tabs, and TST is unnecessary overload and conceptual bloat.

                  The fact that Chrome can’t do this is why I still use Firefox. Or on machines where I don’t have to fight with Github, Waterfox, which is a fork in which XUL addons still work & I don’t need to lose another line of vertical space to the tab bar. In Waterfox as in pre-Quantum Firefox, I can combine my tabs sidebar with my bookmarks sidebar, and preserve most of those precious vertical pixels.

                  We have widescreens now. We have nothing but widescreens now. Vertical space is expensive, horizontal space is cheap. Let’s have window title bars wherever we want. How about on the side, like wm2? https://upload.wikimedia.org/wikipedia/commons/3/3b/Wm2.png

                  That worked well. That can mix-and-match with BeOS-style tabs very neatly.

                  1. 2

                    I like the idea, and have tried it in Haiku, but in practice it was harder to use for me.

                    Maybe I missed some shortcuts? I was dragging windows to each other manually.

                    Maybe it’s just a matter of getting used to it? I don’t know.

                    1. 3

                      Applications, like web browsers, could create shortcuts for creating a new window as a tab of the current window. I think that would make it near identical in terms of behavior.

                    2. 2

                      Compare with tabbed application windows in current macOS. In almost any Cocoa app, tabs can be torn off to make their own window, or merged with another window. I’m not as familiar with Be, but the main differences seem to be that tabs still go in a bar under the title and can only be joined when they’re the same kind. I’m curious how stacking windows of different kinds would feel. Maybe a window would become more like a task workspace.

                    1. 1

                      Images of VMs and bootable live USB sticks running two different FAT32-capable versions of DOS. Partly because fiddling with DOS is quite fun in this era of vastly-complex multi-everything OSes, partly to see if it can be done, partly because I do have a sort of eventual notional product in mind.

                      I am using PC DOS 7.1 (not 7.01) and DR OpenDOS for this. I am not a big fan of FreeDOS – partly because it’s a little too different from real 1980s DOS for my preferences, and partly because I find the developer community rather unfriendly and unwelcoming. So, now that IBM offers PC DOS 7.1 as a free download, and Lineo made DR-DOS 7.01 FOSS before changing their minds and closing it again with DR-DOS 7.02, I have some alternatives to play with.

                      WIP links can be found here and in previous posts. https://liam-on-linux.livejournal.com/78306.html

                      1. 2

                        On PCs at home and work, original IBM Model Ms, UK layout. Via PS/2 to USB convertors where necessary, or directly into PS/2 ports if available.

                        On my Macs at home (iMac Retina 27” and i5 Mac mini), two original Apple Extended IIs, via ADB to USB convertors.

                        I find the key feel is integral. When my fingers feel Apple keyswitches, I use Apple keystrokes, e.g. Cmd-X/C/V for Cut/Copy/Paste. When I feel IBM buckling springs, I use Windows/Linux keystrokes, e.g. Ctrl-X/C/V. As the IBMs do not have Windows (Super) keys, I remap CapsLock to Windows. In Win NT (which I rarely use) this is a built-in feature; I use a free tool called SharpKeys to set the mapping. In Linux, it’s an option in some desktops, including Ubuntu Unity, but sadly not in Xfce, so I use xmodmap.

                        I find it very confusing to use a Model M on a Mac – I’ve tried. It feels like a PC so my muscle memory wants to use PC keystrokes. Ditto, using a Mac ’board on a PC. It feels weird and disturbing. I also have a couple of older, pre-IBM-layout Mac ’boards but the layout is a pain and I rarely use them.

                        I loathe chiclet keyboards. Early Apple ones were the least horrid but they still are unpleasant. The butterfly-switch ones are deeply horrible to use for me. I like a substantial travel and enjoy the feel of mechanical key-switches.

                        I am not a keyboard hobbyist – in fact I don’t think I have ever bought a keyboard in my life. I just saved all these when the machines were being thrown away. (If possible I saved and rehomed the computers, too.)

                        1. 2

                          Enjoyed the talk. Especially some of the history I didn’t know about Oberon.

                          Just not sure why the suggestion after Smalltalk was Dylan, a Lisp that looks nothing like a Lisp and is less popular than every other Lisp. There’s already great interest in a Lisp OS (other than Emacs), so it just seems like pet favorites here or a dislike for Lisp syntax, but alright.

                          I generally agree with moving back to environments that integrate a programming language, though. Have you by chance considered the Web?

                          I mean a realistic approach would be fusing a Blink runtime to Linux, or using ChromiumOS as a base, and having JS as a mutable, dynamic language + WebAssembly system.

                          We’re already heading that way, although we’d need to embrace open ECMAScript Modules and Web Components as the building blocks instead of minified bundles, and we’d need to stop abusing the semantics of the web, treating HTML and CSS as purely build artifacts (things that are hard to read and extend).

                          1. 1

                            Enjoyed the talk. Especially some of the history I didn’t know about Oberon.

                            Thanks!

                            Just not sure why the suggestion after Smalltalk was Dylan

                            Part of the plan is to make something that is easy and fun. It will be limited at first compared to the insane incomprehensible unfathomable richness of a modern *nix or Windows OS. Very limited. So if it is limited, then I think it has to be fun and accessible and easy and comprehensible to have any hope of winning people over.

                            Lisp is hard. It may be the ultimate programming language, the only programmable programming language, but the syntax is not merely offputting, it is profoundly inaccessible for a lot of ordinary mortals. Just learning an Algol-like language is not hard. BASIC was fun and accessible. The right languages are toys for children, and that’s good.

                            Today, I have met multiple professional Java programmers who have next to no grasp of the theory, or of algorithms or any comp-sci basic principles… but they can bolt together existing modules just fine and make useful systems.

                            Note: I am not saying that this is a good way to build business logic, but it is how a lot of organizations do it.

                            There is a ton of extra logic that one must internalize to make Lisp comprehensible. I suspect that there is a certain type of mind for whom this stuff is accessible, easily acquired, and then they find it intuitive and obvious and very useful.

                            But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.

                            Dylan, OTOH, did what McCarthy originally intended. It wrapped the bare lists in something accessible, and they demonstrated this by building an exceptionally visual, colourful, friendly graphical programming language in it. It was not intended for building enterprise servers; it was built to power an all-graphical pocket digital assistant, with a few meg of RAM and no filesystem.

                            Friendly and fun, remember. Accessible, easy, simple above all else. Expressly not intended to be “big and professional like GNU.”

                            But underneath Dylan’s friendly face is the raw power of Lisp.

                            So the idea is that it gives you the best of both worlds, in principle. For mortals, there’s an easy, colourful, fun toy. But one you can build real useful apps in.

                            And underneath that, interchangeable and interoperable with it, is the power of Lisp – but you don’t need to see it or interact with it if you don’t want to.

                            And beneath that is Oberon, which lets you twiddle bits if you need to in order to write a device driver or a network stack for a new protocol. Or create a VM and launch it, so you can have a window with Firefox in it.

                            Have you by chance considered the Web?

                            Oh dear gods, no!

                            There is an old saying in comp sci, attributed to David Wheeler: “We can solve any problem by introducing an extra level of indirection.”

                            It is often attributed to Butler Lampson, one of the people at PARC who designed and built the Xerox Alto, Dolphin and Dorado machines. He is also said to have added a rider: “…except for the problem of too many layers of indirection.”

                            The idea here is to strip away a dozen layers of indirection and simplify it down to the minimum number of layers that can provide a rich, programmable, high-level environment that does not require users to learn arcane historical concepts such as “disks” or “directories” or “files”, or “binaries” and “compilers” and “linkers”. All that is ancient history, implementation baggage from 50 years of Unix.

                            The WWW was a quick’n’dirty, kludgy implementation of hypertext on Unix, put together using NeXTstations. The real idea of hypertext came from Ted Nelson’s Xanadu.

                            The web is half a dozen layers of crap – a protocol [1] that carries composite documents [2] built from Unix text files [3] and rendered by a now massively complex engine [4] whose operation can be modified by a clunky text-based scripting language [5] which needed to be JITted and accelerated by a runtime environment [6]. It is a mess.

                            It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design by embedding it inside a clunky half-dozen layers hosted on top of half a dozen layers of Unix… I recoil in disgust, TBH. It is not merely inefficient, it’s profane, a desecration of the concept.

                            Look, I am not a Christian, but I was vaguely raised as one. There are a few nuggets of wisdom in the Christian bible.

                            Matthew 7:24-27 applies.

                            “Therefore, whosoever heareth these sayings of Mine and doeth them, I will liken him unto a wise man, who built his house upon a rock. And the rain descended and the floods came, and the winds blew and beat upon that house; and it fell not, for it was founded upon a rock. And every one that heareth these sayings of Mine and doeth them not, shall be likened unto a foolish man, who built his house upon the sand; and the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell, and great was the fall of it.”

                            Unix is the sand here. An ever-shifting, impermanent base. Put more layers of silt and gravel and mud on top, and it’s still sand.

                            I’m saying we take bare x86 or ARM or RISC-V. We put Oberon on that, then Smalltalk or Lisp on Oberon, and done. Two layers, one of them portable. The user doesn’t even need to know what they’re running on, because they don’t have a compiler or anything like that.

                            You’re used to sand. You like sand. I can see that. But more sand is not the answer here. The answer is a high-pressure hose that sweeps away all the sand.

                            1. 2

                              Hey, I appreciate the detailed response. I generally agree with your thesis, but I’m going to touch on some of your points.

                              [Lisp syntax] it is profoundly inaccessible for a lot of ordinary mortals.

                              I am going to have to strongly disagree here (unless we’re talking dense Common Lisp with its decades of quirky features). The Lisp syntax has few rules to learn (if not the least of any language other than Forth), is educationally friendly with tools like Dr Racket, and is one of the easiest to teach first-time programmers due to its obvious evaluation flow.

                              All one needs to know is how parentheses work in mathematics. To understand “how the data flows”, all one has to do is look at the position in the expression and perform substitution of values, as they’d learn in grade school.

                              (a  (b  c)
                                  (d  f))
                              

                              Visually, it is a sideways tree of words and indentation. It can thus be rendered, if need be, in a colorful, friendly, tree-collapsing UI with drag-and-drop expressions. No other language, with its complex syntax rules, could support such an interaction model.
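
                              A tiny made-up example of what I mean by substitution (each line below is the previous one with a sub-expression replaced by its value):

                              (+ 1 (* 2 3))   ; substitute (* 2 3) with its value, 6
                              (+ 1 6)         ; substitute the remaining expression with its value
                              7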

                              Carmack, for example, chose to teach his son programming with Racket: https://twitter.com/id_aa_carmack/status/569688211158511616?lang=en

                              Colleges have been teaching Scheme as a first programming language for years with SICP.

                              Really, the discussion of Lisp syntax is one done to death; this is me beating a horse fossil at this point. There’s the value in the ease of understanding the syntax, and the less obvious value of meta-programming, so the only other thing I’d add is that we could just build a Lisp OS and create Smalltalk as a Racket-style #lang on top of it. You’re not going to find a better language to let people have their pet favorite syntax than a Lisp that will let you create that syntax.

                              Dylan could also just be implemented as a Racket-like #lang. (I’m not saying though that Racket is the ideal language, just that a meta-programmable language is an ideal, low level, substrate, to build things upon.)

                              But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.

                              This is, of course, why DSLs exist, and why Racket has entire language abstractions on top of it, as I mentioned with Smalltalk and Dylan. Good Lisp design actually scales better when you create powerful abstractions on top of it, making a complicated system accessible to “mere mortals”. Non-meta languages simply cannot scale.
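
                              A contrived sketch of my own (in Clojure) of what “abstractions on top” can look like: the “DSL” is nothing but a data structure, and ordinary code interprets it, so a “mere mortal” only ever touches the data.

                              ;; routes described as plain data
                              (def routes
                                {"/hello" (fn [req] (str "Hello, " (:name req) "!"))
                                 "/bye"   (fn [_] "Goodbye.")})

                              ;; ...interpreted by ordinary code
                              (defn handle [path req]
                                (if-let [handler (get routes path)]
                                  (handler req)
                                  "404 not found"))

                              (handle "/hello" {:name "Ada"})   ; => "Hello, Ada!"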

                              It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design

                              Yes, but, this is a tremendous undertaking. The web, for better or worse, is literally the closest thing we have today to a Smalltalk-y dynamic user-editable/configurable/extensible system. ChromiumOS is the closest we have to that being a full OS a user can just play with out of the box. What other system today can you just press F12 and hack to pieces?

                              I myself got into programming in the 90s by just clicking “View Source” and discovering the funky syntax required to make websites. I’ve mentored and assisted many kids today doing just the same. The web is the closest we have to this expression.

                              Now, I’m not saying we shouldn’t try to create that awesome minimal rebirth of systems. It’s one of my personal desires to see it happen, which is why I’m replying and was so interested in the talk. We’ve absolutely cornered ourselves with complicated designs, I absolutely agree. I was mostly just pointing out we have a path towards at least something that gives us a shard of the benefits of such a system with the web.

                              The rest, though, I agree with at a high level, so I’ll leave things at that.

                              1. 1

                                Hey, you’re welcome. I’m delighted when anyone wants to engage. :-)

                                But a serious answer deserved a serious response, so I slept on it, and, well, as you can see, it took some time. I don’t even have the excuse that “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.” (“I have only made this longer because I have not had the time to make it shorter.”)

                                If you are curious to do so, you might be amused to look through my older tech-blog posts – for example this or this.

                                The research project that led to these 3 FOSDEM talks started over a decade ago when I persuaded my editor that retrocomputing articles were popular & I went looking for something obscure that nobody else was writing about.

                                I looked at various interesting long-gone platforms or technologies – some of the fun ones were Apollo Aegis & DomainOS, SunDew/NeWS, the Three Rivers PERQ etc. – that had or did stuff nothing else did. All were either too obscure, or had little to no lasting impact or influence.

                                What I found, in time, were Lisp Machines. A little pointy lump in the soil, which as I kept digging turned into the entire Temple of Damanhur. (Anyone who’s never heard of that should definitely look it up.) And then as I kept digging, the entire war for the workstation, between whole-dynamic-environment languages (Lisp & Smalltalk, but there are others) and the reverse, the Unix way: the easy-but-somehow-sad environment of code written in an unsafe, hacky language, compiled to binaries, and run on an OS whose raison d’être is to “keep ‘em separated”: to turn a computer into a pile of little isolated execution contexts, which can only pass info to one another via plain text files. An ugly, lowest-common-denominator sort of OS, but one which succeeded and thrived because it was small, simple, easy to implement and to port, relatively versatile, and didn’t require fancy hardware.

                                That at one time, there were these two schools – that of the maximally capable, powerful language, running on expensive bespoke hardware but delivering astonishing abilities… versus a cheap, simple, hack of a system that everyone could clone, which ran on cheap old minicomputers, then workstations with COTS 68K chips, then on RISC chips.

                                (The Unix Haters Handbook was particularly instructive. Also recommended to everyone; it’s informative, it’s free and it’s funny.)

                                For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind. It breaks my brain. “The Little Lisper” is the most impenetrable computer publication I’ve ever tried, and failed, to read.

                                A lot of my friends are jaded old Unix pros, like me having gone through multiple proprietary flavours before coming to Linux. Or possibly a BSD. I won serious kudos from my first editor when I knew how to properly shut down a Tadpole SPARCbook with:

                                sync
                                sync
                                sync
                                halt
                                

                                “What I tell you three times is true!” he crowed.

                                Very old Unix hands remember LispMs. They’ve certainly met lots of Lisp evangelists. They got very tired of me banging on about it. Example – a mate of mine said on Twitter:

                                « A few years ago it was lisp is the true path. Before that is was touchscreens will kill the keyboard. »

                                The thing is, while going on about it, I kept digging, kept researching. There’s more to life than Paul Graham essays. Yes, the old LispM fans were onto something; yes, the world lost something important when they were out-competed into extinction by Unix boxes; yes, in the right hands, it achieves undreamed-of levels of productivity and capability; yes, the famous bipolar Lisp programmer essay.

                                But there are other systems which people say the same sorts of things about. Not many. APL, but even APL fans recognise it has a niche. Forth, mainly for people who disdain OSes as unnecessary bloat and roll their own. Smalltalk. A handful of others. The “Languages of the Gods”.

                                Another thing I found is people who’d bounced off Lisp. Some tried hard but didn’t get it. Some learned it, maybe even implemented their own, but were unmoved by it and drifted off. A lot of people deride it – L.I.S.P. = Lotsa Insignificant Stupid Parentheses, etc. – but some of them do so with reason.

                                I do not know why this is. It may be a cultural thing, or it may be a matter of which forms of logic and reasoning feel natural to different people. I had a hard time grasping algebra as a schoolchild. (Your comment about “grade school” stuff is impenetrable to me. I’m not American so I don’t know what “grade school” is, I cannot parse your example, and I don’t know what level it is aimed at – but I suspect it’s above mine. I failed ‘O’ level maths and had to resit it. The single most depressing moment of my biology degree was when the lecturer for “Intro to Statistics” said he knew we were all scared, but it was fine; for science undergraduates like us, it would just be revision of our maths ‘A’ level. If I tried, I’d never even have got good enough exam scores to be rejected for a maths ‘A’ level.)

                                When I finally understood algebra, I “got” it and it made sense and became a useful tool, but I have only a weak handle on it. I used to know how to solve a quadratic equation but I couldn’t do it now.

                                I never got as far as integration or differentiation. I only grasped them at all when trying to help a member of staff with her comp-studies homework. It’s true: the best way to learn something is to teach it.

                                Edsger Dijkstra was a grumpy git, but when he said:

                                “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration”

                                … and…

                                “The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”

                                … I kind of know what he meant. I disagree, obviously, and I am not alone, but he did have a core point.

                                I think possibly that if someone learned Algol-style infix notation when they were young, and it’s all they’ve ever known, if someone comes along and tells them that it’s all wrong, to throw it away, and do it like this – or possibly (this(like(do(it)))) – instead, it is perfectly reasonable to reject it.

                                Recently I used the expression A <> B to someone online and they didn’t understand. I was taken aback. This is BASIC syntax and was universal when I was under 35. No longer. I rephrased it as A != B and they understood immediately.

                                Today, C syntax is just obvious and intuitive. As Stephen Diehl said:

                                « C syntax is magical programmer catnip. You sprinkle it on anything and it suddenly becomes “practical” and “readable”. »

                                I submit that there are some people who cannot intuitively grasp the syntaxless list syntax of Lisp. And others who can handle it fine but dislike it, just as many love Python indentation and others despise it. And others who maybe could but with vast effort and it will forever hinder them.

                                Comparison: I am 53 years old, I emigrated to the Czech Republic 7 years ago and I now have a family here and will probably stay. I like it here. There are good reasons people still talk about the Bohemian lifestyle.

                                But the language is terrifying: 4 genders, 7 cases, all nouns have 2 plurals (2-4 & >=5), a special set of future tenses for verbs of motion, & two entire sets of tenses – verb “aspects”, very broadly one for things that are happening in the past/present/future but are incomplete, and one for things in the past or present that are complete.

                                After 6 years of study, I am an advanced beginner. I cannot read a headline.

                                Now, context: I speak German, poorly. I learned it in 3 days of hard work travelling thence on a bus. I speak passable French after a few years of it at school. I can get by in Spanish, Norwegian and Swedish from a few weeks each.

                                I am not bad at languages, and I’m definitely not intimidated by them. But learning your first Slavic language in your 40s is like climbing Everest with 2 broken legs.

                                No matter how hard I try, I will never be fluent. I won’t live long enough.

                                Maybe if I started Russian at 7 instead of French, I’d be fine, but I didn’t. But 400 million people speak Slavic languages and have no problems with this stuff.

                                I am determined. I will get to some useful level if it kills me. But I’ll never be any good and I doubt I’ll ever read a novel in it.

                                I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother. I know many Anglophones (and other first-language speakers) who live in Czechia who just gave up on Czech. For a lot of people, it’s just too hard as an adult. My first course started with 15 students and ended with 3. This is on the low side of normal; 60% of students quit in the first 3 months, after paying in full.

                                And when people say that “look, really, f(a,b) is the same thing as (f a,b)” or tell us that we’ll just stop seeing the parentheses after a while (see slides 6 & 7), IT DOES NOT HELP. In fact, it’s profoundly offputting.

                                I am regarded as a Lisp evangelist among some groups of friends. I completely buy and believe, from my research, that it probably is the most powerful programming language there’s ever been.

                                But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.

                                Before this talk, I conferred with 2 very smart programmer friends of mine about the infix/prefix notation issue. ISTM that it should be possible to have a smart editor that could convert between the two, or even round-trip convert a subset of them.

                                This is why I proposed Dylan on top of Lisp, not just Lisp. Because Lisp frightens people and puts them off, and that is not their fault or failing. There was always meant to be an easier, more accessible form for the non-specialists. Some of my favourite attempts were CGOL and Lisp wizard David A. Moon’s PLOT. If Moon thinks it’s worth doing, we should listen. You might have heard of this editor he wrote? It’s called “Emacs”. I hear it’s quite something.

                                1. 2

                                  it took some time

                                  Oh boy, I really don’t want to take up your time.

                                  For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind.

                                  I myself am no Common Lisp expert. It’s an old language with odd behavior and too many macros. I personally use Clojure and find it extremely ergonomic for application development. I find modern Schemes in general to be fairly ergonomic as well, but maybe a bit too many parens compared to Clojure.

                                  Clojure does a good job of limiting parens, and introducing reader macros of [] for vectors and {} for hash-maps and it works out exceedingly well. The positional assumptions it makes limit parens and it really isn’t hard to read. It’s like executable JSON, only way easier to read. It isn’t far from the type of JS and Ruby I write anyway.
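
                                  A tiny made-up example of what that looks like:

                                  ;; [] is a vector, {} is a map, :keywords are plain self-evaluating names;
                                  ;; it reads like JSON, but it is ordinary code you can evaluate
                                  (def user {:name  "Ada"
                                             :langs ["Clojure" "Scheme"]})

                                  (get-in user [:langs 0])   ; => "Clojure"
                                  (:name user)               ; => "Ada"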

                                  There’s more to life than Paul Graham essays.

                                  The only real PG thing worth reading is Roots of Lisp, which breaks down Lisp into its axiomatic special forms. You can see how one can start from lambda calculus, add some special forms, and end up with the kernel for a language that can do anything. Purely as an educational read.
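
                                  Not from the essay itself, but a toy illustration (in Clojure) of the flavor: once you have cons, first, rest and recursion, everything else is just more of the same.

                                  ;; map rebuilt from nothing but the primitive list operations
                                  (defn my-map [f xs]
                                    (if (empty? xs)
                                      '()
                                      (cons (f (first xs)) (my-map f (rest xs)))))

                                  (my-map inc [1 2 3])   ; => (2 3 4)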

                                  “The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”

                                  Today, this is Java. I’m sure you’d agree. Its pervasive use of non-message-passing OO has crippled two entire generations of programmers, unable to grasp first class functions and simple data flow. They cobble together things with hundreds of nouns, leaving the logic opaque and dispersed throughout these nouns and their verby interactions. Tremendous effort is required just to track down where anything happens.

                                  Today, C syntax is just obvious and intuitive.

                                  This is only true of people with prior experience with C syntax languages. Exposure to a C style language first seats it as a norm within the brain, just as one’s first spoken language. I wouldn’t say C is intuitive to someone who has never programmed before.

                                  But the [Czech] language is terrifying

                                  I speak Polish, so I can very much relate to Czech and other Slavic languages. In fact, Polish is often considered the hardest language to learn.

                                  I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother.

                                  I still strongly disagree.

                                  I am a visual-spatial person, and visualizing the trees and expressions is extremely easy for me. I have never felt more at home than I do with Clojure. It was an immediately overwhelmingly positive experience and I’m not sure any language will ever have a syntax or software model that is more matching my thought processes. (Prototypal languages like JavaScript and Lua come in a close second, because then I’m thinking in trees made of hash-maps instead.)

                                  see slides 6 & 7

                                  Actually, slide 7 is all I see (the words), and honestly, the default syntax highlighting for Lisps shouldn’t be rainbow braces, but muted and almost invisible braces like in said slide. Just indented nouns – like Python!

                                  I’ve adapted to many languages with all sorts of funky syntaxes (WebFOCUS comes to mind) and I can’t say any was hard for me to get comfortable with after enough exposure. But the key to readability is finding the literal “shapes” on the screen and their patterns. My eyes can just scan them. (Python is the most readily readable in that regard.) But, if one does not write Clojure in idiomatic style, it does truly become hard to read.

                                  Lisp syntax lives or dies by how you horizontally indent braces. If you do not practice “semantic indentation” then you can truly be lost in a sea of meaningless parens, trying to find how the words relate to each other. That indentation visually shows the relationship. A visual tree.
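
                                  A contrived illustration of my own (the same expression twice; only the indentation differs):

                                  ;; semantically indented: the argument structure is the visible tree
                                  (reduce +
                                          (map inc
                                               (filter odd?
                                                       [1 2 3 4 5])))

                                  ;; identical code with the indentation mangled: the relationships disappear
                                  (reduce +
                                    (map inc
                                  (filter odd?
                                            [1 2 3 4 5])))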

                                  But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.

                                  I have introduced many people to Clojure and they’ve never found the syntax to be a barrier to entry. As a functional programmer, I find that C syntax gets in the way of Functional patterns and its syntax is a barrier to entry in learning Functional Programming.

                                  Let me dig up some examples:

                                  A C# functional approach to database return value transforms and validation: https://gist.github.com/Slackwise/965ac1947b69c60e21aa030be96b657b

                                  I am certain the Clojure equivalent would be shorter and much easier to read. Notice it looks fairly lispy on its own, in idiomatic Functional C# style. That is the nature of a Functional approach, be it C like syntax, or Lispy syntax.

                                  A more recent toy example was a technical challenge posited to me to write a palindrome function, which I decided to write functionally in both JavaScript (as a singular pure expression) and Clojure for comparison:

                                  // Repeatedly trim matching first/last characters;
                                  // it is a palindrome iff at most one character is left over.
                                  const isPalindrome = (
                                    string,
                                    trimSameEnds = (
                                      [first, ...rest],
                                      last = rest.slice(-1),        // one-element array (or []); the loose == below coerces it
                                      leftovers = rest.slice(0, -1)
                                    ) => first == last
                                           ? trimSameEnds(leftovers)
                                           : [first, ...rest]       // on a mismatch keep everything, not just `rest`
                                  ) => trimSameEnds(string).length <= 1;
                                  
                                  ;; Same idea: trim matching ends in a loop;
                                  ;; it is a palindrome iff at most one character is left over.
                                  (defn palindrome? [s]
                                    (<= (count (loop [[first & rest] s]
                                                 (let [last (last rest)
                                                       leftovers (drop-last rest)]
                                                   (if (and (seq rest) (= first last))
                                                     (recur leftovers)                     ; keep trimming matching ends
                                                     (if first (cons first rest) rest))))) ; on a mismatch keep everything
                                        1))
                                  

                                  Is the JavaScript form any easier to read? I would say the Clojure form is slightly easier as long as you understand semantic indentation. (Obviously you need to understand both languages as well as be somewhat versed in Functional Programming to make heads or tails of this FP voodoo.)

                                  I would say that familiarity is key, but moreso: consistent style.

                                  Any language written in a funky style that is not idiomatic is going to be immediately hard to read. I guarantee I can take any language and make it harder to read simply by changing the style. I personally find it harder to read something even if someone makes a minor lazy mistake like writing 1+2 instead of 1 + 2. It throws off the expected “shape” of the code and impedes readability.

                                  This is why I proposed Dylan on top of Lisp, not just Lisp.

                                  If you mean Dylan implemented as a reader macro in Lisp, as an option, I’m for it, for those who have hangups over syntax. But also, any language they might prefer might as well be a reader macro option. I do think, though, that simply building good DSLs would go a long way in building an entire OS out of one language, without having to reach for C-ish syntax.
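
                                  To gesture at how little machinery that needs (a toy sketch of my own, nowhere near a real reader macro): a few lines can already rewrite fully parenthesized binary infix into ordinary prefix forms.

                                  (defn infix->prefix [form]
                                    (if (seq? form)
                                      (let [[a op b] form]
                                        (list op (infix->prefix a) (infix->prefix b)))
                                      form))

                                  (infix->prefix '(1 + (2 * 3)))        ; => (+ 1 (* 2 3))
                                  (eval (infix->prefix '(1 + (2 * 3)))) ; => 7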

                                  1. 1

                                    Oh boy, I really don’t want to take up your time.

                                    No no, it’s fine, I am learning all the while here.

                                    I myself am no Common Lisp expert. It’s an old language with odd behavior and too many macros. I personally use Clojure and find it extremely ergonomic for application development. I find modern Schemes in general to be fairly ergonomic as well, but maybe a bit too many parens compared to Clojure.

                                    Interesting. Noted.

                                    Clojure does a good job of limiting parens, and introducing reader macros of [] for vectors and {} for hash-maps and it works out exceedingly well. The positional assumptions it makes limit parens and it really isn’t hard to read. It’s like executable JSON, only way easier to read. It isn’t far from the type of JS and Ruby I write anyway.

                                    I have a suspicion that this may be the kind of improvement that is only helpful to those who have achieved a certain level of proficiency already. In other words, that it doesn’t help beginners much; maybe it reduces the steepness of part of the learning curve later on, but not at the beginning – and the beginning is possibly the most important part.

                                    The only real PG thing worth reading is Roots of Lisp, which breaks down Lisp into its axiomatic special forms. You can see how one can start from lambda calculus, add some special forms, and end up with the kernel for a language that can do anything. Purely as an educational read.

                                    Interesting.

                                    I found his essays very persuasive at first. I have grown a little more sceptical over time.

                                    Today, this is Java. I’m sure you’d agree.

                                    Hmmm. Up to a point, perhaps yes.

                                    I’d probably say C and C++ more generally, actually.

                                    I have read a lot of loper-os.org, and it pointed me at an essay of Mark Tarver’s, “The Bipolar Lisp Programmer”. A comment of his really struck me:

                                    « Now in contrast, the C/C++ approach is quite different. It’s so damn hard to do anything with tweezers and glue that anything significant you do will be a real achievement. You want to document it. Also you’re liable to need help in any C project of significant size; so you’re liable to be social and work with others. You need to, just to get somewhere. » http://marktarver.com/bipolar.html

                                    Its pervasive use of non-message-passing OO has crippled two entire generations of programmers, unable to grasp first class functions and simple data flow. They cobble together things with hundreds of nouns, leaving the logic opaque and dispersed throughout these nouns and their verby interactions. Tremendous effort is required just to track down where anything happens.

                                    I really don’t know. I have never mastered an OO language. I am currently reading up about Smalltalk in some detail, rather than theoretical overviews. To my pleased surprise, the Squeak community have been quite receptive to the ideas in my talk.

                                    Today, C syntax is just obvious and intuitive.

                                    This is only true of people with prior experience with C syntax languages. Exposure to a C style language first seats it as a norm within the brain, just as one’s first spoken language. I wouldn’t say C is intuitive to someone who has never programmed before.

                                    For clarity: I was being somewhat sardonic here. I am not saying that I personally believe this to be true, but that it is common, widely-held received wisdom.

                                    I speak Polish, so I can very much relate to Czech and other Slavic languages. In fact, Polish is often considered the hardest language to learn.

                                    :-) I can well believe that!

                                    I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother.

                                    I still strongly disagree.

                                    I thought you might, and this response did sadden me, because I am failing to get my point across at all, clearly. :-(

                                    Actually, slide 7 is all I see (the words), and honestly, the default syntax highlighting for Lisps shouldn’t be rainbow braces, but muted and almost invisible braces like in said slide. Just indented nouns – like Python!

                                    This is sort of my point. (And don’t get me wrong; I am not a Python enthusiast. I’ve been failing to learn it since v1 was current.)

                                    The thing I think is instructive about Python is the way that experienced programmers react to it. It polarises people. Some love it, some hate it.

                                    Even rookie programmers like me know that different people feel different indentation patterns are right and good. There’s a quote in your link:

                                    « Nearly everybody is convinced that every style but their own is ugly and unreadable. Leave out the “but their own” and they’re probably right…​ »

                                    Python forces everyone to adhere to the same indentation pattern, by making it meaningful. The people that hate Python are probably people that have horribly idiosyncratic indentation styles, and thus would probably benefit the most from being forced into one that makes sense to others, if their code is ever to be read or maintained by anyone else.

                                    Thus, I suspect that strenuous objections to Python tell you something far more valuable about the person making the objections, than anything the objections themselves could ever tell you about Python.

                                    I’ve adapted to many languages with all sorts of funky syntaxes (WebFOCUS comes to mind) and I can’t say any was hard for me to get comfortable with after enough exposure. But the key to readability is finding the literal “shapes” on the screen and their patterns. My eyes can just scan them. (Python is the most readily readable in that regard.) But, if one does not write Clojure in idiomatic style, it does truly become hard to read.

                                    So, it sounds to me like you have a versatile and adaptable mind that readily adapts to different languages. Most Lisp people seem to have minds like that.

                                    It seems to me that where they often fail is in not realising that not everyone has minds like that. That for many people, merely learning one style or one programming language was really hard, and when they finally got it, they didn’t want to ever have to change, to ever have to go through it again by learning something else.

                                    We all know people who only speak a single human language and say that they don’t have a knack for languages and can’t learn new ones. This is not only a sign of poor teaching methods. Maybe they are actually right. Maybe they really do lack ability at learning this stuff. Maybe it’s real. I see no reason why not.

                                    A lack of ability to learn to speak more than one human language does not stop someone from being highly creative in that language – I am sure that many wonderful writers, poets, playwrights, novelists etc. are monoglot.

                                    Well, a lot of skilful programmers who are able to do very useful work are also possibly monoglots. It took a lot of effort for them to learn one language, and they really like it, and all they will even consider are variants of that single language, or things that are different but at least use the same syntax.

                                    In the ’50s and ’60s, it might have been COBOL, or PL/1, or RPG.

                                    In the ’70s & ’80s, it might have been BASIC and variants on BASIC, especially for self-taught programmers. For another group, with more formal training or education, Pascal and variants on Pascal.

                                    In the ‘90s onwards, it’s been C.

                                    And so now we have lots of languages with C syntax and a cosmetic resemblance to C, and most people are comfortable with that.

                                    Me, personally, I always found C hard work and while I admired its brevity, I found it unreadable. Even my own code.

                                    Later, as more people piled onto the Internet and I got to talk to others, I found that this was a widespread problem.

                                    But that was swiftly overwhelmed and buried behind the immense momentum of C and C-like languages. Now, well, Stephen Diehl’s observation that I quoted is just how it is for most people in industry.

                                    If on the surface it looks like C, then it’s easy. Java looks superficially like C, although it’s not underneath. Javascript looks like it, although it’s not, and it’s not like Java either. C++ is like C but with a million knobs and dials on. D is like C. C# is like C. And they’ve thrived.

                                    And people who know nothing else now think that a language that replaces { and } with BEGIN and END is impossibly wordy and verbose.

                                    In the opposite direction, a language which replaces { and }, and also for and while and if and almost everything else, with just thousands of ( and huge blocks of nothing but ), and which doesn’t even keep the block delimiters in order! Well, YES, to such a person, YES, this is totally unreadable.

                                    I do not know how old you are. I am quite old; I’m 53. I began and finished programming in the 1980s. But I try to retain a flexible mind.

                                    However, I see people of my age raging at “new math”. The idea that

                                    3 + 4 * 5

                                    … is the same thing as

                                    4 * 5 + 3

                                    … deeply offends them. They are old enough that they’ve forgotten school maths. The little they recall is fragmentary and inconsistent. They have forgotten rules that they learned later such as “Bless My Dear Aunt Sally” or “BODMAS”. (If these are meaningless, Google them. :-) ) They think that they can do it and they don’t know that actually decades of use of calculators means they can’t. Prove to them with a calculator that actually there are precedence rules, and they will angrily say that the calculator is wrong and was clearly programmed by someone who “follows this ‘New Maths’ nonsense.”

                                    I have often read Lisp people saying things like:

                                    « Look at this:

                                    f(a,b)

                                    versus

                                    (f a b)

                                    It’s the same! We have just moved the same characters around! It’s really the same thing! »

                                    Well, no, to someone who only knows x(y,z) and nothing else, this is self-evidently nonsense and ridiculous.

                                    I put it to you that it is necessary to accept that, just as there are people who are monoglots and will die monoglots and may have rich and fulfilling creative lives being monoglots…

                                    … that by the same token, there are useful, skilled, productive programmers who can only handle ALGOL-type languages, who with serious effort might be able to move from the C branch of the ALGOL family to another branch, such as Python or PHP or Perl, but asking them to step outside the ALGOL family altogether and learn APL or Forth or Haskell or Lisp is just a step too far, one that they will never successfully take, and that is not a problem or a failing of theirs.

                                    Lisp syntax lives or dies by how you horizontally indent braces. If you do not practice “semantic indentation” then you can truly be lost in a sea of meaningless parens, trying to find how the words relate to each other. That indentation visually shows the relationship. A visual tree.

                                    Are you familiar with the “sweet expressions” project? It tried to “fix” Lisp syntax with indentation. It got nowhere much despite a lot of effort.

                                    https://readable.sourceforge.io/

                                    I don’t think it is ever going to succeed.

                                    In other words, I do not think that indentation can ever be the answer. It might help those who get over this hurdle, climb this hill, but it won’t help those for whom the hill will always be too high and too steep.

                                    But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.

                                    Agreed!

                                    I have introduced many people to Clojure and they’ve never found the syntax to be a barrier to entry. As a functional programmer, I find that C syntax gets in the way of Functional patterns and its syntax is a barrier to entry in learning Functional Programming.

                                    I am glad to hear it. I do suspect that for a lot of people, though, FP itself is just too far away from anything they will ever need.

                                    I read your “semantic formatting” link and I can’t understand a word of it, I’m afraid. :-(

                                    Let me dig up some examples:

                                    A C# functional approach to database return value transforms and validation: https://gist.github.com/Slackwise/965ac1947b69c60e21aa030be96b657b

                                    I am certain the Clojure equivalent would be shorter and much easier to read. Notice it looks fairly lispy on its own, in idiomatic Functional C# style. That is the nature of a Functional approach, be it C like syntax, or Lispy syntax. […] I would say that familiarity is key, but moreso: consistent style.

                                    Way over my head. I really am sorry.

                                    Any language written in a funky style that is not idiomatic is going to be immediately hard to read. I guarantee I can take any language and make it harder to read simply by changing the style. I personally find it harder to read something even if someone makes a minor lazy mistake like writing 1+2 instead of 1 + 2. It throws off the expected “shape” of the code and impedes readability.

                                    There you go. To me, 1+2 and 1 + 2 are completely interchangeable, but + 1 2 is an effort to decode.

                                    If you mean Dylan implemented as a reader macro in Lisp, as an option, I’m for it, for those who have hang-ups over syntax. But also, any language they might prefer could just as well be offered as a reader macro option. I do think, though, that simply building good DSLs would go a long way toward building an entire OS out of one language, without having to reach for C-ish syntax.

                                    I had to Google this term. If I understand you correctly, well, yes, that is the general idea. I think…

                          1. 11

                            I think this article gets a lot of important details wrong.

                            In 1987, there’s Windows 2.0 and Windows/386. Windows 2.0 is fully real mode (although it can use expanded memory), and Windows/386 uses 386 capabilities to support multiple DOS sessions. The article is right about that.

                            Windows/286 though is more an exercise in branding. It’s the brand applied to Windows 2.1, and it’s still a fully real mode system, and it cannot address 16Mb of RAM. One minor thing that happened between 2.0 and 2.1 was the availability of the high memory area, allowing the range between 1Mb to 1Mb+64Kb to be addressed in real mode. Doing that requires the A20 gate, which implies a 286, but it’s still real mode and the system works fine on an 8088.

                            Both Windows/286 and Windows/386 operated the Windows environment itself in a 640Kb session, either because the system is natively real mode (286) or because it’s emulating many 640Kb DOS sessions and Windows happens to get one (386.) This still meant the Windows environment was constrained to 640Kb, although since it could use expanded memory, it was 640Kb that could be addressed with the ability to swap out regions into RAM that was not directly addressable. As a bonus trivia point, note that Windows/386 had no ability to page to disk, because the goal of the product was to use the RAM in the system.

                            Windows 3.0 didn’t actually glob three different operating modes into one binary. There’s three binaries - KERNEL.EXE, DOSX.EXE, and WIN386.EXE (for real mode, standard mode, and 386 enhanced mode respectively), and a stub WIN.COM which launches the right one. But the important thing is that the 286 and 386 modes allowed the Windows environment to run in protected mode and for Windows programs to address more than 640Kb. This is why those systems will warn before running a 2.x application, because there’s no guarantee that a program designed for real mode will run in protected mode. The main reason for including the real mode environment wasn’t to support 8088, it was to support 2.x applications until they could be updated to run in protected mode.

                            The observation I make about these systems is that OS/2 1.x was a 286 protected mode environment, so each OS/2 application could address all of the RAM in the system but there could only be one DOS session due to no v86 support; Windows/386 was protected mode host running many real mode environments, so it could run many DOS sessions, but the Windows part was also limited to 640Kb. The next generation (Windows 3.0, OS/2 2.0) allowed the full set of capabilities to support multiple DOS sessions and allow protected mode applications.

                            1. 2

                              The other thing missed is that the EGA/VGA driver in Windows/386 can run CGA programs in a window. My go-to test is Battletech in CGA mode.

                              I’d shown it off here: https://virtuallyfun.com/wordpress/2018/08/07/windows-386-v2-03/

                              The other big thing is that Windows/286 can run multiple text mode MS-DOS sessions in windows. Memory is greatly constrained, and they need to be very well behaved; Infocom games fit the bill, and you can play several at once, in DOS boxes, in windows.

                              Windows/286 delivers the European MS-DOS 4 experience, in the same way that Windows/386 delivered v86. Although being basically constrained to 64Kb or less per session isn’t terribly useful, unless you are back in the 80s.

                              1. 1

                                Oh, now those are very cool additions! Thank you!

                                I knew that ordinary Windows 2 could multitask DOS boxes, yes, but as you say, since they all had to fit into the same 640kB along with Windows itself, this wasn’t actually very useful. I’d also seen that if you had a full-screen graphics program running and you forced it into a window (Alt-Enter?), in some modes, Windows could display an accurate snapshot of it. I did not know CGA stuff could continue running.

                                I am European but I never saw the multitasking “original DOS 4”. I did find it interesting that for me and my customers, one of the more useful additions to MS-DOS 4 and 5, DOSShell, never got the credit it deserved IMHO. Before DOSShell, either I lashed together app menus with batch files, or, if the customer spent a bit more, there were dozens of little DOS menu tools you could use. DOSShell built a pretty decent one right in, with a decent file manager, but few ever mentioned that it also gave you task switching. You could be in Quattro or dBase or Word or whatever, summon the shell with Ctrl+Shift+Esc, then launch another app, and your original app was suspended right where it was.

                                It wasn’t multitasking but it was all some DOS power-users needed. I can’t remember if background apps were paged out into XMS or EMS or something, or snapshotted to disk.

                                1. 3

                                  European DOS 4 was for OEMs; it wasn’t retail. It’s basically the Windows memory management engine combined with DOS. It’s almost a real mode OS/2. It would have been super useful except that it needed to fit the whole thing in real mode. Oddly enough, DOS/4GW can run under it, so it’ll run retail Doom.

                                  Windows 2 had the best compromises they could do with real mode. Obviously the 386 product was really a hypervisor with a real mode UI, which made it far nicer for DOS multitasking. Enough of it fits with DOS onto a single floppy, so it was my go-to for emergency stuff, as at least I could run more than one thing at once…

                                  There is a leak of a super early Windows 3.0 that looks like 2.0 but it’s running in protected mode. It’s night and day ahead of 2.0 since it’s got tonnes of memory, but they restricted it to 286 protected mode so no v86 mode and EGA only.

                                  This was such a crazy time at MS: they ended up jumping onto the NT OS/2 project, dumping the OS/2 developers and switching to Windows everywhere for NT, just as 3.0 was becoming a massive seller (it sold well over a million units!), far more than OS/2 ever did in its lifetime. Instead of selling $3,000 SDKs, they dropped the bar to $99 for QuickC for Windows and Visual Basic 1.0. Now the “bedroom coder” had access to a DOS extender and a dev kit for a few hundred dollars, far cheaper than anything else on the market.

                                  The really silly thing was how expensive tools and extenders were, let alone audio and graphics libraries. Windows crushed them all, even back in the era of GDI only.

                                  1. 1

                                    Great comments - thank you, I am learning a lot here!

                                    I have done a few blog posts on the theme of “great might-have-beens of tech”, and one was on the multitasking DOS revolution that never quite happened. I focussed on DR-DOS 7, which finally re-introduced multitasking, and DESQview/X. I had not considered that the original multitasking MS-DOS 4, which predates IBM’s PC DOS 4, could have touched it all off.

                                    I’ve not seen this leaked Win3. I’ll have a look.

                                    The OS/2-Win3-NT thing was a road-crash. It is quite surprising that anything useful came out of it, really.

                                    I evaluated OS/2 1.x for my then-employers and told them to keep clear. Over 30Y later, I stand by that. It was not a compelling option. A 386 OS/2 1.0 that still ran in text mode but could have multitasked DOS apps would have been a totally different proposition, and the line that MS tried to sell IBM, of OS/2 for the high end and Windows for the low end, could have kinda-sorta worked.

                                    By the time OS/2 2 was a thing, the 386SX was out and common. That’s what I mostly ran it on. But when OS/2 1.x was current, the 386DX was the state of the art, and 386DX PCs were expensive beasts. 4MB or more of RAM was reasonable, so a 386 OS/2 1.x for £5000+ PCs and a protect-mode 286 Windows 3 for £1000-£2000 PCs might have made some sort of sense.

                                    But it wasn’t real, and in the real world MS was wisely extricating itself from a sinking ship, with this as the fraudulent justification. For once, I don’t blame the company.

                                    It’s widely forgotten that NT was a successful salvage of the original OS/2 3.x, the planned CPU-portable version. So NT is the exception that proves (violates) the second-system-effect rule. I ran OS/2 2 and I liked it, but the Workplace Shell really was not all that, and the vast config text files were horrible to deal with. NT and the much-maligned Windows Registry were better.

                              2. 1

                                Fascinating answer – thank you!

                                Re Windows/286… I am not sure. You might be right. I have done some digging and there is a statement from Microsoft’s own “Old New Thing” blog that Standard Mode goes back to Windows/286: https://devblogs.microsoft.com/oldnewthing/20040407-00/?p=39893

                                However, several other sources state that Windows/286 ran fine on an 8086, just without HIMEM.SYS and without access to the High Memory Area. These seem to be contradictory statements. I am not sure which is true, or how to find out apart from maybe getting a copy and trying it – but I’m not sure any Windows 2-era software could tell me how much free RAM there was.

                                They may not truly be contradictory though – if the memory manager could initialise and use XMS, but the programs were real mode binaries, that might make both statements valid, no?

                                Note that the OS2Museum says that Windows/386 can run in real mode, too: http://www.os2museum.com/wp/windows386-2-01/

                                There were multiple releases of Windows 2: at least 2.0, 2.01, 2.03, 2.1 and 2.11. It is not entirely clear to me but it seems that Windows 2.0 was marketed in two editions, called “Windows 2” and “Windows/386”. For Windows 2.1x this was changed to “Windows/286” and “Windows/386”.

                                Example: https://www.oldcomputermuseum.com/os/windows_286_v2.10.html

                                I concede your point about there being 3 different kernels in Windows 3.0, with WIN.COM choosing between them.

                                Regarding OS/2 1 for 386, I don’t know if you have seen but the OS/2 Museum has a copy: https://www.os2museum.com/wp/playing-football/

                                It so nearly happened.

                                Things might have played out quite differently… OS/2 1 on 386, leading to quicker obsolescence for 286 PCs, but leaving a gap for DOS GUIs that Quarterdeck could have filled with DESQview/X. Perhaps with this competitive spur, the GNU Project would have adopted the BSD 4.4-Lite kernel that it evaluated; there could have been a working GNU OS by 1989 or so.

                                1. 5

                                  there is a statement from Microsoft’s own “Old New Thing” blog that Standard Mode goes back to Windows/286:

                                  Yeah, I remember back at the time telling Raymond he was wrong about that, which he conceded. Unfortunately since the blog keeps migrating across systems without preserving comments, all of that discussion was lost. I wasn’t the only one to comment.

                                  how to find out apart from maybe getting a copy and trying it

                                  You should try it :) Note that Windows 2.x really needs setver to tell it that it’s DOS 3.40, and Windows/386 requires DOS not to be loaded high, since it predated that support. I’ve never been able to get Windows/386 to work outside of real DOS though (i.e., not DOSBox).

                                  I’m not sure any Windows 2-era software could tell me how much free RAM there was

                                  Since these systems tended to run out of RAM a lot, they made it obvious: in the about box in MS-DOS Executive. Windows/286 reports this as “Memory Free.” If EMS is present, including on Windows/386, it reports “Conventional Memory Free” and “Expanded Memory Free”. Even Word and Excel include this information in their About boxes.

                                  if the memory manager could initialise and use XMS, but the programs were real mode binaries, that might make both statements valid, no?

                                  I haven’t dug into the code, but strongly suspect that WIN386 morphed into EMM386. Both are using the remapping capabilities of the 386 to expose XMS as EMS, which the Windows environment can consume. If you didn’t need multiple DOS boxes, EMM386 + Windows/286 is functionally the same as Windows/386; it’s just that Windows/386 could also leverage the same remapping logic for multiple DOS boxes (and video grabbing.) One other quirk is that the EMM386 code is just newer and more capable, so trying this out now, Windows/386 only emulated 15Mb of EMS, but EMM386 emulated 32Mb on the same hardware.

                                  OS2Museum says that Windows/386 can run in real mode, too

                                  Yes, it can. I bought a copy on eBay before that post was written, and was quite surprised to see Win386.exe + Win86.com. It doesn’t do the auto-detect logic that 3.0 does, but there’s really no good reason to use Windows/286 if you have access to Windows/386 since it’s basically a superset. Presumably back then it cost a lot more.

                                  Actually, I just tried something: you can copy win386.exe and win386.386, place them in a Windows/286 install and run it. Win386 tries to load win86.com, but the existing win.com can be renamed to win86.com. Or, you can just take command.com and call it win86.com to really inspect what win386.exe did: it’s a 10Kb TSR that emulates 16Mb of EMS and launches a child program. I’ll bet that TSR exposes some really funky services too that probably allow for multiple DOS sessions with no Windows environment at all, but I’ll need some time with a debugger to see what it does. (Edit: One other thing it does that I hadn’t noticed is it’s using ~300Kb of EMS, so it’s more than 10Kb total. EMM386.EXE detects WIN386 as its device driver and can interact with it but can’t disable EMS because WIN386 is using it.)

                                  Regarding OS/2 1 for 386, I don’t know if you have seen

                                  Heh, I hadn’t seen that, it’s very funny. At the time, I think a lot of the more technical folks (including BillG) thought that 286-based OS/2 was a bad idea, but it was driven by IBM’s business interest in having sold, and still selling, so many 286s much later than other manufacturers.

                                  Since we’re on topic though, have you seen WLO? That thing was really technically impressive, because the binaries were compatible between Windows 3.0 and OS/2 1.x - the exact same program ran in protected mode on both. OS/2 1.x and Windows used the same “NE” executable format, and WLO was a Windows program with a bit set in the header telling OS/2 to load it, coupled with some shim DLLs to redirect Windows API calls into their OS/2 equivalents. It was used in Excel and Word for OS/2.

                                  1. 5

                                    Unfortunately since the blog keeps migrating across systems without preserving comments, all of that discussion was lost.

                                    Wayback and some URL spelunking to the rescue.

                                    1. 4

                                      Thanks for digging this up. To be clear, I’m not Mike; I emailed Raymond privately, but Mike’s description is better than mine.

                                      At the end of the comments, the thing they’re talking about is WINMEM32.DLL. That beast deserves an article all of its own; the issue is that Windows 3.x runs in protected mode so can address all of the memory, but it’s still a system designed for a 286. The 386 processor introduces “flat” 32 bit registers, but those can’t be used to interact with the Windows 3.x API, which expects all far pointers in 16:16 format, since that’s what the 286 could support. WINMEM32.DLL allowed for conversion between these formats to facilitate running 16:32 code that could still interact with Windows. Note that doing this is challenging because it can’t just be the output of a 32 bit compiler - it’s an executable that has pieces of code running in 16:16 and pieces running in 16:32, and requires a fair amount of handwritten assembly. I don’t think I ever saw a program use this DLL, since it didn’t enable access to additional memory and didn’t simplify development. Win32s arrived around 3 years later but it allows a 32 bit program to use a 32 bit API and let Windows do the thunking, so the development model is much simpler.
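                                      To put rough numbers on why the two formats don’t mix (this is only the offset arithmetic; it doesn’t model selectors or descriptor tables):

                                      ```python
                                      # A 16:16 far pointer is a 16-bit selector plus a 16-bit offset;
                                      # a 16:32 pointer pairs the selector with a 32-bit offset.
                                      max_span_16_16 = 2 ** 16   # 64 KiB addressable through one segment
                                      max_span_16_32 = 2 ** 32   # 4 GiB addressable through one segment

                                      print(f"16:16 offset limit: {max_span_16_16 // 1024} KiB")
                                      print(f"16:32 offset limit: {max_span_16_32 // 2 ** 30} GiB")

                                      # The Windows 3.x API only accepts the first kind, so 16:32 code has
                                      # to convert its pointers back to 16:16 before every call into Windows;
                                      # that conversion is what WINMEM32.DLL helped with.
                                      ```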

                                      1. 1

                                        Good find! Thanks!

                                      2. 4

                                        I haven’t dug into the code, but strongly suspect that WIN386 morphed into EMM386

                                        Fun fact: The PC98 version of MS-DOS includes a DPMI DOS extender - which is actually just a headless version of 386 enhanced Windows 3.1.

                                  1. 3

                                    The new 80386 chip had an additional mode on top of 8-bit (8086-compatible) and 16-bit (80286-compatible) modes.

                                    Wait, 8086 is a “8 bit” CPU? (hint: No, it isn’t. Not the ALU width, not the register width, not the data bus width and not the address bus width)

                                    Article needs to be reviewed from top to bottom for factual mistakes. There’s far too many. As it is, you’ve made us your peer-reviewers.

                                    1. 3

                                      OK, then, what alternative nomenclature do you propose to distinguish between the 1st, 2nd and 3rd generation x86 CPUs? The 286 is unambiguously 16-bit and nothing else. The 386DX is unambiguously 32-bit, but the later 386SX muddies the waters.

                                      If the 2nd-gen 286 is 16-bit, then I submit that the far-more-limited 8088/8086 are at best 8/16-bit designs, with their 8-bit-like segments – just a bunch of them. The 8086 was expressly designed to be compatible with 8080 assembly code, and the 8088 had an 8-bit memory bus.

                                      For simplicity and brevity, when comparing them to a later model that supports 16x as much RAM, I think calling them 8-bit for short is fair. Clearly you disagree. That’s fair, too. I am surprised by how vociferously, though.

                                      Article needs to be reviewed from top to bottom for factual mistakes. There’s far too many. As it is, you’ve made us your peer-reviewers.

                                      I think that’s unnecessarily harsh. It’s just a blog-post stuck together from a few FB comments. It’s not something I am trying to sell to anyone.

                                      1. 1

                                        For simplicity and brevity, I think comparing them to the later model with 16x as much RAM support by calling them 8-bit for short is fair.

                                        Fair? How about being accurate instead?

                                        1. 1

                                          So let me get this straight: for believing what Microsoft said about its own product, I am in your view intentionally spreading disinformation?

                                          Anyone else reading this: tell me, is it possible to block someone on Lobsters?

                                          1. 1

                                            for believing what Microsoft said about its own product

                                            This is interesting. Can you provide a link?

                                            Anyone else reading this: tell me, is it possible to block someone on Lobsters?

                                            Please also tell me, while at it.

                                            1. 7

                                              Guys, please chill. None of us can always be right, and none of us were born knowing everything. Being welcoming, supportive and open in the face of inaccuracies is how we all grow.

                                    1. 3

                                      This post has triggered me to reinstall Snow Leopard on my old Mac Mini, which I recently found. Still one of my favorite operating systems.

                                      1. 1

                                        I’m seriously thinking of a hackintosh. Something with dual processors and tonnes of RAM.

                                        1. 1

                                          I built a Hackintosh about 10Y ago, from a PC I was given on my local Freecycle group. Core 2 Extreme, 8GB of RAM, and everything else came out of my old PC.

                                          At the time, Lion or Mountain Lion was current, I forget now, but I built it using 10.6 because I knew that security updates can break Hackintoshes. I stayed on SL until 2014, and was very happy with it…

                                          Happy side-effect: it ran MS Office 2004 like a champ, and as I really hate the Office “Ribbon” I didn’t want anything newer. But Office 2004 was PowerPC-only… so Snow Leopard was the last version it worked on.

                                          I now use Office 2011 (well, Word 2011; LibreOffice for everything else) and don’t want any newer version, but Office 2011 won’t run on anything newer than 10.14…

                                      1. 3

                                        I’m not entirely convinced a new model is needed. We already have memory mapped files in all the major operating systems. And file pages can already be as small as 4KiB, which is tiny compared to common file sizes these days. Perhaps it would make sense to have even smaller pages for something like Optane, but do we really need to rethink everything? What would we gain?

                                        1. 4

                                          What we’d gain is eliminating 50+ years of technical debt.

                                          I recommend the Twizzler presentation mentioned a few comments down. It explains some of the concepts much better than I can. These people have really dug into the technical implications far deeper than me.

                                          The thing is this: persistent memory blows apart the computing model that has prevailed for some 60+ years now. This is not the Von Neumann model or anything like that; it’s much simpler.

                                          There are, in all computers since about the late 1950s, a minimum of 2 types of storage:

                                          • primary storage, which the processor can access directly – it’s on the CPUs’ memory bus. Small, fast, volatile.
                                          • secondary storage, which is big, slow, and persistent. It is not on the memory bus and not in the memory map. It is held in blocks, and the processor must send a message to the disk controller, ask for a particular block, and wait for it to be loaded from 2y store and placed into 1y store.

                                          The processor can only work on data in 1y store, so everything must be fetched from 2y store, worked on, and put back.

                                          This is profoundly limiting. It’s slow. It doesn’t matter how fast the storage is, it’s slow.

                                          PMEM changes that. You have RAM, only RAM, but some of your RAM keeps its contents when the power is off.

                                          Files are legacy baggage. When all your data is in RAM all the time, you don’t need files. Files are what filesystems hold; filesystems are an abstraction method for indexing blocks of secondary storage. With no secondary storage, you don’t need filesystems any more.
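                                          As a toy illustration of what “all your data is in RAM all the time” feels like to a program, here is a Python sketch in which a memory-mapped file stands in for real persistent memory; the file name and the 8-byte counter layout are made up for the example:

                                          ```python
                                          import mmap
                                          import os
                                          import struct

                                          PATH = "pmem.img"   # hypothetical backing file standing in for PMEM
                                          SIZE = 4096

                                          # Create the "persistent region" once, zero-filled.
                                          if not os.path.exists(PATH):
                                              with open(PATH, "wb") as f:
                                                  f.write(b"\x00" * SIZE)

                                          with open(PATH, "r+b") as f:
                                              region = mmap.mmap(f.fileno(), SIZE)

                                              # Treat the first 8 bytes as a persistent counter. There is no
                                              # open/read/parse/serialise/write cycle: just memory access.
                                              (count,) = struct.unpack_from("<Q", region, 0)
                                              struct.pack_into("<Q", region, 0, count + 1)
                                              region.flush()   # on real PMEM this would be a cache flush + fence

                                              print(f"this program has run {count + 1} time(s)")
                                              region.close()
                                          ```

                                          On real hardware you would map the device itself rather than an ordinary file, but the programming model is the point: one flat, byte-addressable region that survives restarts.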

                                          1. 7

                                            I feel like there are a bunch of things conflated here:

                                            Filesystems and file abstractions provide a global per-device namespace. That is not a great abstraction today, where you often want a truly global namespace (i.e. one shared between all of your devices) or something a lot more restrictive. I’d love to see more of the historical capability systems research resurrected here: for typical mobile-device UI abstractions, you really want a capability-based filesystem. Persistent memory doesn’t solve any of the problems of naming and access. It makes some of them more complicated: If you have a file on a server somewhere, it’s quite easy to expose remote read and write operations, it’s very hard to expose a remote mmap - trying to run a cache coherency protocol over the Internet does not lead to good programming models.

                                            Persistence is an attribute of files but in a very complicated way. On *NIX, the canonical way of doing an atomic operation on a file is to copy the file, make your changes, and then move the new file over the top of the old one. This isn’t great and it would be really nice if you could have transactional updates over ranges of files (annoyingly, ZFS actually implements all of the machinery for this, it just doesn’t expose it at the ZPL). With persistent memory, atomicity is hard. On current implementations, atomic operations with respect to CPU cache coherency and atomic operations with respect to committing data to persistent storage are completely different things. Getting any kind of decent performance out of something that directly uses persistent memory and is resilient in the presence of failure is an open research problem.
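                                            (For anyone who hasn’t met that idiom, a minimal Python sketch of it, with invented file names and only token error handling:)

                                            ```python
                                            import os
                                            import tempfile

                                            def atomic_update(path, transform):
                                                """Rewrite *path* so readers see either the old or the new
                                                contents, never a half-written file (rename is atomic on POSIX)."""
                                                with open(path, "rb") as f:
                                                    old = f.read()

                                                new = transform(old)

                                                # Write the new version to a temp file in the same directory,
                                                # so the final rename does not cross filesystems.
                                                fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
                                                try:
                                                    with os.fdopen(fd, "wb") as f:
                                                        f.write(new)
                                                        f.flush()
                                                        os.fsync(f.fileno())   # push the bytes to stable storage
                                                    os.replace(tmp, path)      # atomically move the new file over the old
                                                except BaseException:
                                                    os.unlink(tmp)
                                                    raise

                                            # Usage: atomic_update("notes.txt", lambda old: old + b"another line\n")
                                            ```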

                                            Really using persistent memory in this way also requires memory safety. As one of the The Machine developers told me when we were discussing CHERI: with persistent memory, your memory-safety bugs last forever. You’ve now turned your filesystem abstractions into a concurrent GC problem.

                                            1. 1

                                              Excellent points; thank you.

                                              May I ask, are you the same David Chisnall of “C is not a low-level language” paper? That is probably my single most-commonly cited paper. My compliments on it.

                                              Your points are entirely valid, and that is why I have been emphasizing the “just for fun” angle of it. I do not have answers to some of these hard questions, but I think that at first, what is needed is some kind of proof of concept. Something that demonstrates the core point: that we can have a complex, rich, capable environment that is able to do real, interesting work, which in some ways exceeds the traditional *nix model for a programmer, which runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.

                                              Once this point has been made by demonstration, then perhaps it will be possible to tackle much more sophisticated systems, which provide reliability, redundancy, resiliency, and all that nice stuff that enterprises will pay lots of money for.

                                              There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things. I am not here to fight that battle. What I was trying to come up with was a proposal to use some existing open technology – things that are already FOSS, already out there, and not new and untested and immature, but solid, time-proven tools that have survived despite decades in obscurity – and assemble them into something that can be used to explore new and largely uncharted territory.

                                              ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.

                                              Linux didn’t spring fully-formed from Torvalds’ brow ready to defeat AIX, HP-UX and Solaris in battle. It needed decades to grow up.

                                              The Machine didn’t get decades.

                                              Smalltalk has already had decades.

                                            2. 4

                                              I think files are more than just an abstraction over block storage, they’re an abstraction over any storage. They’re a crucial part of the UX as well. Consider directories… Directories are not necessary for file systems to operate (it could just all be flat files) but they exist, purely for usability and organisation. I think even in the era of PMEM users will demand some way to organise information and it’ll probably end up looking like files and directories.

                                              1. 2

                                                Most mobile operating systems don’t expose files and directories and they are extremely popular.

                                                1. 3

                                                  True, but those operating systems still expose filesystems to developers. Users don’t necessarily need to be end users. iOS and Android also do expose files and directories to end users now, although I know iOS didn’t for a long time.

                                                  1. 3

                                                    iOS also provides Core Data, which would be a better interface in the PMEM world anyway.

                                                    1. 1

                                                      True, but those operating systems still expose filesystems to developers.

                                                      Not all of them do, no.

                                                      NewtonOS didn’t. PalmOS didn’t. The reason being that they didn’t have filesystems.

                                                      iOS is just UNIX. iOS and Android devices are tiny Unix machines in your pocket. They have all the complexity of a desktop workstation – millions of lines of code in a dozen languages, multiuser support, all that – it’s just hidden.

                                                      I’m proposing not just hiding it. I am proposing throwing the whole lot away and putting something genuinely simple in its place. Not hidden complexity: eliminating the complexity.

                                                    2. 2

                                                      They tried. Really hard. But in the end, even Apple had to give up and provide the Files app.

                                                      Files are an extremely useful abstraction, which is why they were invented in the first place. And why they get reinvented every time someone tries to get rid of them.

                                                      1. 4

                                                        Files (as a UX and data interchange abstraction) are not the same thing as a filesystem. You don’t need a filesystem to provide a document abstraction. Smalltalk-80 had none. (It didn’t have documents itself, but I was on a team that added documents and other applications to it.) And filesystems tend to lack stuff you want for documents, like metadata and smart links and robust support for updating them safely.

                                                        1. 1

                                                          I’m pretty sure the vast majority of iOS users don’t know Files exist.

                                                          I do, but I almost never use it.

                                                        2. 1

                                                          And extremely limiting.

                                                  1. 1

                                                    How inexpensive is this non-volatile ‘ram-like’ memory these days?

                                                    1. 1

                                                      It isn’t cheap yet, but I think there’s little doubt PMEM is the future. It’s like the transition to 64-bit and SSDs.

                                                      1. 1

                                                        Cheaper than flash SSDs, gigabyte for gigabyte, and obviously SSDs are cheaper than RAM; otherwise, instead of having a few hundred gig of SSDs holding our swapfiles, we’d have a few hundred gig of RAM and no swapfiles.

                                                        The thing is that they’re byte-by-byte rewritable. You don’t need that in a disk; in fact, you need to wrap it in a tonne of extra logic to hide it away, since disks work on a sector-by-sector or block-by-block basis. So it makes 3D Xpoint less competitive in the SSD space.

                                                    1. 2

                                                      I think that is my favourite FOSDEM talk ever. I’ve run many of the OSs listed there and went through a similar journey of discovering the road not taken. I too wonder what a new OS could be like. I knew I was in for a good talk when newtons and smalltalk appeared.

                                                      1. 1

                                                        Excellent! Thank you!

                                                      1. 11

                                                        I think you’d be interested in Twizzler. I found it watching Peter Alvaro’s talk “What not where: Why a blue sky OS?”. It seems to address some of your points.

                                                        I thought it was discussed on lobste.rs, but can’t find the link atm.

                                                        1. 5

                                                          That was fascinating – thank you for that link!

                                                          Very much the same inspiration, but they’ve come at it from a radically different, more network-oriented direction. That is a very good thing.

                                                          OTOH, it does preserve a shim of *nix compatibility, and whereas I wasn’t considering the network side of it at all – I reckon radical new ideas like that should, one hopes, be an emergent property of giving people a radically more powerful programming model than POSIX and C/things-rooted-in-C – the problem with finding a way to present a PMEM-centric OS to programmers via the medium of the existing stack is that, while it means instant familiarity, and while it could win people’s attention far quicker… it doesn’t free us up at all from the morass of over 50 years of technical debt.

                                                          At this point, in 2021, *nix is basically nothing but technical debt. The whole concept of a file-centric OS being adapted to a PMEM-centric machine… it almost breaks my heart, even as I am awed by the brilliance of the thinking.

                                                          It feels a bit like inventing a warp drive, and then showing it to the world by bolting it into a 1969 tractor frame. It’ll be a very very fast tractor, but at the same time, it’ll still be a tractor. It will never handle as well as an aeroplane with that engine would… and the aeroplane will be poor compared to a spaceship with it in. But you can kinda sorta turn an aeroplane into a spaceship. You can’t really turn a tractor into one.

                                                          1. 4

                                                            (this is where I put on the “knows weird systems” hat)

                                                            Twizzler reminded me a lot of some prior art on single-level storage. They aren’t quite as distributed-first, but they’re certainly interesting to learn from. See the previous comment.

                                                            1. 1

                                                              I like the earlier comment! :-)

                                                              Yes, Twizzler certainly appears to be founded on some of the same ideas I have had. I am not claiming to have had blindingly profound, singular visions!

                                                              I have worked (very briefly) on AS/400 and I was certainly aware of it. Long before it, Multics shared some of the same concepts. As far as I can tell, the thing with these single-level-store designs is that basically they consider all storage as disk, whereas what I have in mind is treating it all as RAM.

                                                              So, yes, they’re very well-suited to IBM i, or a revived Multics in theory, and their kin, but I am looking in a slightly different direction.

                                                            2. 2

                                                              Loved that talk, brilliant. Thanks.

                                                              1. 2

                                                                That is mind blowing. Someone needs to post their 2020 paper. I’m still reeling.

                                                              1. 2

                                                                Good reminder of history and interesting idea. For me the biggest take-away is the fact that a lot of stuff was just built for fun.

                                                                It would be cool to explore OSs by using these existing projects. Personally, I still want to explore building something from scratch.

                                                                Great talk!

                                                                1. 2

                                                                  Thank you!

                                                                1. 5

                                                                  Sure, python does make it easy to write functions, and you can code in a “big bag of functions” kinda way, but why would you do that in a language that is specifically designed to work in an OO way? It’s like looking at Haskell and saying, “You know what this needs? Objects”

                                                                  1. 11

                                                                    Did you mean: Objective Caml?

                                                                    1. 6

                                                                      A massively underappreciated language.

                                                                      1. 1

                                                                        Oh dear god, does such a thing actually exist?

                                                                        I’m sorry…

                                                                        1. 4

                                                                          I can’t tell if you are joking or not so I’m just going to post the ocaml wiki and slowly back away.

                                                                          1. 2

                                                                            I was kinda joking, but I did look up the wikipedia entry for OCaml and I was impressed, and a little horrified :)

                                                                          2. 3

                                                                            one of my favourite languages! you’re missing out if you don’t at least give it a look.

                                                                            1. 2

                                                                              Why are you sorry? Rust’s compiler was initially written in OCaml. MLdonkey, not really used that much anymore, is written in OCaml, and it’s a pretty big codebase. Static analyzers for C/C++ (Frama-C, Infer) are written in it, as are the reference interpreter for WebAssembly, the compiler for the Haxe language, and parts of Facebook Messenger.

                                                                              Did you know that Pascal has its own Object Pascal version too? It was called Delphi and it was hugely popular in the 2000s.

                                                                              1. 4

                                                                                Pascal went a lot further than that.

                                                                                Pascal evolved to greater modularity with Modula, which was quickly replaced with Modula-2. This was intended to be suitable for OS development, but this proved premature. For instance, in parallel with the invention of the ARM processor, Acorn attempted to develop a new OS for it in Modula-2. The effort failed and was replaced with Arthur, later renamed RISC OS.

                                                                                Modula-2 had some of the fastest compilers for MS-DOS for some years, though, especially TopSpeed Modula-2.

                                                                                Pascal creator Prof Niklaus Wirth went back to the roots, stripped the language back hard, and created Oberon. Oberon is a modular, stripped-down Pascal successor. It is at once a language, an IDE, and a minimal OS to support that IDE. The entire OS, including IDE, editor, compiler, and UI, is about 30,000 lines of code.

                                                                                Oberon developed in several directions. The successor language, Oberon-2, was used to write a more complete OS, called Linz Oberon: https://ssw.jku.at/Research/Projects/Oberon.html

                                                                                Prof Wirth was still dissatisfied and went back to Oberon and created Oberon-07 (in 2007) and re-implemented the OS as Project Oberon: http://www.projectoberon.com/

                                                                                Meanwhile, another team wanted threads and multiprocessor support. They added OOP and created Active Oberon, and created a graphical version of the OS with a zooming UI called Bluebottle. The OS itself is now called A2. https://en.wikipedia.org/wiki/Bluebottle_OS

                                                                                1. 1

                                                                                  Object Pascal lives on in FreePascal and Lazarus! Not super popular but still going.

                                                                            2. 2

                                                                              but why would you do that in a language that is specifically designed to work in an OO way?

                                                                              Where did you get this idea from? It most certainly was not. The documentation clearly says that it is a multiparadigm language and that OOP is an optional feature.

                                                                              The OP answers your question anyway. The answer is that it is pointless. There is no advantage to adding half a dozen syntactic elements and extra boilerplate code.
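                                                                              A trivial, made-up illustration of that point:

                                                                              ```python
                                                                              # "Big bag of functions": the whole feature is one function.
                                                                              def area(width, height):
                                                                                  return width * height

                                                                              # The same thing forced through a class adds ceremony but no power.
                                                                              class AreaCalculator:
                                                                                  def __init__(self, width, height):
                                                                                      self.width = width
                                                                                      self.height = height

                                                                                  def area(self):
                                                                                      return self.width * self.height

                                                                              print(area(3, 4))                   # 12
                                                                              print(AreaCalculator(3, 4).area())  # 12, with extra boilerplate
                                                                              ```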

                                                                            1. 1

                                                                              The OS/2 Warp 3 and 4 desktops borrowed a fair bit from CDE. A lot of people loved those. I wonder if there would be any interest in trying to scale up CDE into something like the Warp 4 Workplace Shell?

                                                                              1. 1

                                                                                I think people liked Workplace Shell for its extendability and scripting.

                                                                              1. 1

                                                                                I would like to work on a small OS for smart phones someday. Particularly targeting the new RISC-V processors.

                                                                                1. 4

                                                                                  I suggest you take a look at Inferno: https://en.wikipedia.org/wiki/Inferno_(operating_system)

                                                                                  It’s already got Raspberry Pi and Android ports in progress. A smartphone version has been demonstrated. It’s the last product of the Unix family, with about 30y more development and thought than Unix itself.

                                                                                  1. 1

                                                                                    Thanks for this

                                                                                1. 2

                                                                                  There’s no reason you couldn’t retrofit preemptive multitasking and memory protection to RiscOS. Legacy apps wouldn’t be aware of the fact that they were preempted and you’d still allow them to cooperatively schedule threads within the apps. Each app would see its own 26-bit virtual address space if you added memory protection. I thought Acorn added both of those features around RiscOS 3 but apparently I misremembered. I’m pretty sure it did move to a 32-bit address space because I remember the 26->32-bit transition breaking things.

                                                                                  The article also says that emulation would be slow. There was a paper at VEE a few years back that used virtualisation extensions to run an AArch32 emulator on an AArch64 system and bounce system calls into the 32-bit compat interface in the kernel. It ran faster than native AArch32 mode on the same hardware for some things, slower on others, but not more than about 10% variation either way. Considering that a modern ARM core is a 2.5GHz superscalar monster and most legacy RiscOS applications were written for a 33MHz single-issue in-order core, emulation speed is unlikely to be a problem. RiscOS development pretty much died by the time 400MHz StrongARM cores were state of the art, so few legacy apps would still be faster than on the machine that they were written for, even if an emulator ran at 20% of native speed.

                                                                                  1. 2

                                                                                    Thanks for the comment!

                                                                                    Well, yes, there has already been an attempt to retrofit pre-emption, as I mentioned in the piece. It’s called Wimp2 and I believe it broke compatibility with a lot of stuff.

                                                                                    The 26-bit to 32-bit transition was initiated inside Pace, but finalised by Castle Technology when they sub-licensed (later, purchased) Pace’s RISC OS licence which was bought to make the Acorn NC range. It did break almost everything compiled, but for those things where the authors were alive and still involved, I believe updating software was not too hard.

                                                                                    I am much heartened by your comments on running 32-bit stuff on 64-bit, though. That is good news.

                                                                                  1. 1

                                                                                    I commented there, but suffice to say, this was examined in what ISTM was a more elegant and comprehensive way by Steve Yegge in 2005:

                                                                                    https://sites.google.com/site/steveyegge2/the-emacs-problem

                                                                                    1. 10

                                                                                      Wouldn’t that be NNTP?

                                                                                      USENET predates the Web by a good decade and a half or more, doesn’t it?

                                                                                      1. 4

                                                                                        I talk about NNTP in the last section. Yes NNTP is still in use (and even FIDO net) but they’re mostly used for distributing binaries.

                                                                                        I focus on RSS because it’s still used A LOT .. and even though a lot of end-users don’t use RSS readers, the feeds are everywhere in every piece of software. They’re used by scrapers, robots and lots of other indexing software. Professors use feeds for their journal articles, news desk editors use them to see what all their competitors are reporting on, etc. So they’re an important part of the web ecosystem and end users should take more advantage of them.