Very much the same inspiration, but they’ve come at it from a radically different, more network-oriented direction. That is a very good thing.
OTOH, it does preserve a shim of *nix compatibility, and whereas I wasn’t considering the network side of it at all – I reckon radical new ideas like that should, one hopes, be an emergent property of giving people a radically more powerful programming model than POSIX and C/things-rooted-in-C – the problem with presenting a PMEM-centric OS to programmers via the medium of the existing stack is that, while it means instant familiarity, and while it could win people’s attention far quicker… it doesn’t free us at all from the morass of over 50 years of technical debt.
At this point, in 2021, *nix is basically nothing but technical debt. The whole concept of a file-centric OS being adapted to a PMEM-centric machine… it almost breaks my heart, even as I am awed by the brilliance of the thinking.
It feels a bit like inventing a warp drive, and then showing it to the world by bolting it into a 1969 tractor frame. It’ll be a very very fast tractor, but at the same time, it’ll still be a tractor. It will never handle as well as an aeroplane with that engine would… and the aeroplane will be poor compared to a spaceship with it in. But you can kinda sorta turn an aeroplane into a spaceship. You can’t really turn a tractor into one.
(this is where I put on the “knows weird systems” hat)
Twizzler reminded me a lot of some prior art on single-level storage. They aren’t quite as distributed-first, but they’re certainly interesting to learn from. See the previous comment.
I like the earlier comment! :-)
Yes, Twizzler certainly appears to be founded on some of the same ideas I have had. I am not claiming to have had blindingly profound, singular visions!
I have worked (very briefly) on AS/400 and I was certainly aware of it. Long before it, Multics shared some of the same concepts. As far as I can tell, the thing with these single-level-store designs is that they basically consider all storage as disk, whereas what I have in mind is treating it all as RAM.
So, yes, they’re very well-suited to IBM i, or a revived Multics in theory, and their kin, but I am looking in a slightly different direction.
I’m not entirely convinced a new model is needed. We already have memory-mapped files in all the major operating systems. And file pages can already be as small as 4KiB, which is tiny compared to common file sizes these days. Perhaps it would make sense to have even smaller pages for something like Optane, but do we really need to rethink everything? What would we gain?
What we’d gain is eliminating 50+ years of technical debt.
I recommend the Twizzler presentation mentioned a few comments down. It explains some of the concepts much better than I can. These people have really dug into the technical implications far deeper than me.
The thing is this: persistent memory blows apart the computing model that has prevailed for some 60+ years now. This is not the Von Neumann model or anything like that; it’s much simpler.
There have been, in all computers since about the late 1950s, a minimum of 2 types of storage:
primary storage, which the processor can access directly – it’s on the CPUs’ memory bus. Small, fast, volatile.
secondary storage, which is big, slow, and persistent. It is not on the memory bus and not in the memory map. It is held in blocks, and the processor must send a message to the disk controller, ask for a particular block, and wait for it to be loaded from 2y store and placed into 1y store.
The processor can only work on data in 1y store, so everything must be fetched from 2y store into 1y store, worked on, and put back.
This is profoundly limiting. It’s slow. It doesn’t matter how fast the storage is, it’s slow.
PMEM changes that. You have RAM, only RAM – but some of your RAM keeps its contents when the power is off.
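To make the difference concrete, here is a minimal C sketch of the two models side by side. The device paths, sizes and helper details are invented for illustration, and error handling is omitted; on real hardware the persistent region would be exposed by something like a Linux DAX device:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Two-level model: fetch a block from 2y store into a 1y-store
           buffer, work on it there, then put it back. */
        char buf[4096];
        int fd = open("/data/blockfile", O_RDWR);
        pread(fd, buf, sizeof buf, 0);    /* fetch into RAM */
        buf[0] ^= 1;                      /* work on it     */
        pwrite(fd, buf, sizeof buf, 0);   /* put it back    */
        close(fd);

        /* Single-level model: the persistent region is simply memory.
           ("/dev/dax0.0" is a hypothetical direct-access device.) */
        int pfd = open("/dev/dax0.0", O_RDWR);
        uint64_t *p = mmap(NULL, 1 << 21, PROT_READ | PROT_WRITE,
                           MAP_SHARED, pfd, 0);
        p[0] ^= 1;                        /* a plain store: no fetch, no put-back */
        munmap(p, 1 << 21);
        close(pfd);
        return 0;
    }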
Files are legacy baggage. When all your data is in RAM all the time, you don’t need files. Files are what filesystems hold; filesystems are an abstraction method for indexing blocks of secondary storage. With no secondary storage, you don’t need filesystems any more.
I feel like there are a bunch of things conflated here:
Filesystems and file abstractions provide a global per-device namespace. That is not a great abstraction today, when you often want either a truly global namespace (i.e. one shared between all of your devices) or something a lot more restrictive. I’d love to see more of the historical capability-systems research resurrected here: for typical mobile-device UI abstractions, you really want a capability-based filesystem. Persistent memory doesn’t solve any of the problems of naming and access. It makes some of them more complicated: if you have a file on a server somewhere, it’s quite easy to expose remote read and write operations, but it’s very hard to expose a remote mmap – trying to run a cache coherency protocol over the Internet does not lead to good programming models.
Persistence is an attribute of files, but in a very complicated way. On *NIX, the canonical way of doing an atomic operation on a file is to copy the file, make your changes to the copy, and then move the new file over the top of the old one. This isn’t great, and it would be really nice if you could have transactional updates over ranges of files (annoyingly, ZFS actually implements all of the machinery for this; it just doesn’t expose it at the ZPL). With persistent memory, atomicity is hard. On current implementations, atomic operations with respect to CPU cache coherency and atomic operations with respect to committing data to persistent storage are completely different things. Getting any kind of decent performance out of something that directly uses persistent memory and is resilient in the presence of failure is an open research problem.
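For concreteness, that canonical dance looks something like this – a minimal C sketch, error handling mostly omitted, and the helper name is mine:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The canonical *NIX atomic update: write a complete new version to a
       temporary file, force it to stable storage, then rename() it over
       the original. Within one filesystem, rename() is atomic, so readers
       see either the old file or the new one - never a mixture. */
    int atomic_replace(const char *path, const char *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        write(fd, data, len);      /* the new version, in full             */
        fsync(fd);                 /* ...actually on disk (unless it lies) */
        close(fd);

        return rename(tmp, path);  /* the commit point                     */
    }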
Really using persistent memory in this way also requires memory safety. As one of The Machine’s developers told me when we were discussing CHERI: with persistent memory, your memory-safety bugs last forever. You’ve now turned your filesystem abstractions into a concurrent GC problem.
May I ask, are you the same David Chisnall of the “C is not a low-level language” paper? That is probably the single paper I cite most often. My compliments on it.
Your points are entirely valid, and that is why I have been emphasizing the “just for fun” angle of it. I do not have answers to some of these hard questions, but I think that at first, what is needed is some kind of proof of concept. Something that demonstrates the core point: that we can have a complex, rich, capable environment that can do real, interesting work, that in some ways exceeds the traditional *nix model for a programmer, and that runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.
Once this point has been made by demonstration, then perhaps it will be possible to tackle much more sophisticated systems, which provide reliability, redundancy, resiliency, and all that nice stuff that enterprises will pay lots of money for.
There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things. I am not here to fight that battle. What I was trying to come up with was a proposal to use some existing open technology – things that are already FOSS, already out there, and not new and untested and immature, but solid, time-proven tools that have survived despite decades in obscurity – and assemble them into something that can be used to explore new and largely uncharted territory.
ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.
Linux didn’t spring fully-formed from Torvalds’ brow ready to defeat AIX, HP-UX and Solaris in battle. It needed decades to grow up.
Reply notifications are working again, so I just saw this!:
May I ask, are you the same David Chisnall of the “C is not a low-level language” paper? That is probably the single paper I cite most often. My compliments on it.
Something that demonstrates the core point: that we can have a complex, rich, capable environment that can do real, interesting work, that in some ways exceeds the traditional *nix model for a programmer, and that runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.
I do agree with the ‘make it work, make it correct, make it fast’ model, but I suspect that you’ll find with a lot of these things that the step from ‘make it work’ to ‘make it correct’ is really hard. A lot of academic OS work fails to make it from research to production because the researchers focus on making something that works for some common cases and miss the bits that are really important in deployment. For persistent memory systems, how you handle failure is probably the most important thing.
With a file abstraction, there’s an explicit ‘write state for recovery’ step and a clear distinction in the abstract machine between volatile and non-volatile storage. I can quite easily do two-phase commit to a POSIX filesystem (unless my disk is lying about sync) and end up with something that leaves my program in a recoverable state if the power goes out at any point. I may lose uncommitted data, but I don’t lose committed data. Doing the same thing with a single-level store is much harder because caches are (as their name suggests) hidden. Data that’s written back to persistent memory is safe; data in caches isn’t. I have to ensure that, independent of the order in which things are evicted from cache, my persistent storage is in a consistent state. This is made much harder on current systems by the fact that atomicity with respect to other cores is done via the cache coherency protocol, whereas atomicity with respect to main memory (persistent or otherwise) is done via cache evictions, and so guaranteeing that you have a consistent view of your data structures with respect to both other cores and persistent storage is incredibly hard.
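To make the cache/persistence split concrete, here is a minimal sketch of the discipline involved, using x86 intrinsics. The structure and names are invented; real code would prefer CLWB/CLFLUSHOPT where available and worry about cache-line alignment. This is the shape of the problem, not a production recipe:

    #include <emmintrin.h>   /* _mm_clflush(), _mm_sfence() */
    #include <stdint.h>

    struct record {
        uint64_t payload[7];
        uint64_t valid;      /* commit flag: 0 = empty, 1 = committed */
    };

    /* Write a record such that, however the power fails, we never observe
       valid == 1 alongside a half-written payload. The cache coherency
       protocol orders these stores for other cores, but only explicit
       flushes and fences order them with respect to the persistence domain. */
    void commit(struct record *r, const uint64_t *src)
    {
        for (int i = 0; i < 7; i++)
            r->payload[i] = src[i];
        _mm_clflush(r->payload);  /* push the payload toward persistent memory */
        _mm_sfence();             /* order the flush before the flag store     */

        r->valid = 1;
        _mm_clflush(&r->valid);   /* now make the flag itself durable */
        _mm_sfence();
    }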
The only systems that I’ve seen do this successfully segregated persistent and volatile memory and provided managed abstractions for interacting with it. I particularly like the FaRM project from some folks downstairs.
There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things.
I think there’s some truth to that accusation, though I’m biased from having tried to do something very different in an open source project. It’s difficult to get traction for anything different because you start from a position of unfamiliarity when trying to explain to people what the benefits are. Unless it’s solving a problem that they’ve hit repeatedly, it’s hard to get the message across. This is true everywhere, but in projects that depend on volunteers it is particularly problematic.
ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.
That’s not an entirely unfair characterisation. The Machine didn’t depend on memristors though; it was intended to work with the kind of single-level store that you can build today and be ready to adopt memristor-based memory when it became available. It suffered a bit from the same thing that a lot of novel OS projects do: they wanted to build a Linux compat layer to make migration easy, but once they had a Linux compat layer it was just a slow way of running Linux software. One of my colleagues likes to point out that a POSIX compatibility layer tends to be the last piece of native software written for any interesting OS.
I think files are more than just an abstraction over block storage; they’re an abstraction over any storage. They’re a crucial part of the UX as well. Consider directories… Directories are not necessary for filesystems to operate (it could all just be flat files) but they exist purely for usability and organisation. I think even in the era of PMEM, users will demand some way to organise information, and it’ll probably end up looking like files and directories.
True, but those operating systems still expose filesystems to developers. Users don’t necessarily need to be end users. iOS and Android also do expose files and directories to end users now, although I know iOS didn’t for a long time.
True, but those operating systems still expose filesystems to developers.
Not all of them do, no.
NewtonOS didn’t. PalmOS didn’t. The reason being that they didn’t have filesystems.
iOS is just UNIX. iOS and Android devices are tiny Unix machines in your pocket. They have all the complexity of a desktop workstation – millions of lines of code in a dozen languages, multiuser support, all that – it’s just hidden.
I’m proposing not just hiding it. I am proposing throwing the whole lot away and putting something genuinely simple in its place. Not hidden complexity: eliminating the complexity.
They tried. Really hard. But in the end, even Apple had to give up and provide the Files app.
Files are an extremely useful abstraction, which is why they were invented in the first place. And why they get reinvented every time someone tries to get rid of them.
Files (as a UX and data interchange abstraction) are not the same thing as a filesystem.
You don’t need a filesystem to provide a document abstraction. Smalltalk-80 had none. (It didn’t have documents itself, but I was on a team that added documents and other applications to it.)
And filesystems tend to lack stuff you want for documents, like metadata and smart links and robust support for updating them safely.
We’re never going to have fewer storage tiers, only more. And general computation is not going to be done in a permanent medium for most tasks most of the time until software is infinitely better than it is now. Imagine if, every time a program crashed, all its files were damaged. Our tower of sedimentary software layers relies on being able to restart things that have gone off the rails, and so far the only way we’ve been able to build large distributed systems reliably has been to have more of the system be ephemeral and stateless, and/or be able to reconstruct state based on an event history.
Enjoyed the talk. Especially some of the history I didn’t know about Oberon.
Just not sure why the suggestion after Smalltalk was Dylan, a Lisp that looks nothing like a Lisp and is less popular than every other Lisp. There’s already great interest in a Lisp OS (other than Emacs), so it just seems like pet favorites here, or a dislike for Lisp syntax, but alright.
I generally agree with moving back to environments that integrate a programming language, though. Have you by chance considered the Web?
I mean a realistic approach would be fusing a Blink runtime to Linux, or using ChromiumOS as a base, and having JS as a mutable, dynamic language + WebAssembly system.
We’re already heading that way, although we’d need to embrace open ECMAScript Modules and Web Components as the building blocks instead of minified bundles, and we’d need to stop abusing the semantics of the web, treating HTML and CSS as purely build artifacts (things that are hard to read and extend).
Enjoyed the talk. Especially some of the history I didn’t know about Oberon.
Thanks!
Just not sure why the suggestion after Smalltalk was Dylan
Part of the plan is to make something that is easy and fun. It will be limited at first compared to the insane incomprehensible unfathomable richness of a modern *nix or Windows OS. Very limited. So if it is limited, then I think it has to be fun and accessible and easy and comprehensible to have any hope of winning people over.
Lisp is hard. It may be the ultimate programming language, the only programmable programming language, but the syntax is not merely offputting, it is profoundly inaccessible for a lot of ordinary mortals. Just learning an Algol-like language is not hard. BASIC was fun and accessible. The right languages are toys for children, and that’s good.
I have met multiple professional Java programmers who have next to no grasp of the theory, or of algorithms, or of any basic comp-sci principles… but they can bolt together existing modules just fine and make useful systems.
Note: I am not saying that this is a good way to build business logic, but it is how a lot of organizations do it.
There is a ton of extra logic that one must internalize to make Lisp comprehensible. I suspect that there is a certain type of mind for whom this stuff is accessible, easily acquired, and then they find it intuitive and obvious and very useful.
But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.
Dylan, OTOH, did what McCarthy originally intended. It wrapped the bare lists in something accessible, and they demonstrated this by building an exceptionally visual, colourful, friendly graphical programming language in it. It was not intended for building enterprise servers; it was built to power an all-graphical pocket digital assistant, with a few meg of RAM and no filesystem.
Friendly and fun, remember. Accessible, easy, simple above all else. Expressly not intended to be “big and professional like GNU.”
But underneath Dylan’s friendly face is the raw power of Lisp.
So the idea is that it gives you the best of both worlds, in principle. For mortals, there’s an easy, colourful, fun toy. But one you can build real useful apps in.
And underneath that, interchangeable and interoperable with it, is the power of Lisp – but you don’t need to see it or interact with it if you don’t want to.
And beneath that is Oberon, which lets you twiddle bits if you need to in order to write a device driver or a network stack for a new protocol. Or create a VM and launch it, so you can have a window with Firefox in it.
Have you by chance considered the Web?
Oh dear gods, no!
There is an old saying in comp sci, attributed to David Wheeler: “We can solve any problem by introducing an extra level of indirection.”
It is often attributed to Butler Lampson, one of the people at PARC who designed and built the Xerox Alto, Dolphin and Dorado machines. He is also said to have added a rider:
“…except for the problem of too many layers of indirection.”
The idea here is to strip away a dozen layers of indirection and simplify it down to the minimum number of layers that can provide a rich, programmable, high-level environment that does not require users to learn arcane historical concepts such as “disks” or “directories” or “files”, or “binaries” and “compilers” and “linkers”. All that is ancient history, implementation baggage from 50 years of Unix.
The WWW was a quick’n’dirty, kludgy implementation of hypertext on Unix, put together using NeXTstations. The real idea of hypertext came from Ted Nelson’s Xanadu.
The web is half a dozen layers of crap – a protocol [1] that carries composite documents [2] built from Unix text files [3] and rendered by a now massively complex engine [4] whose operation can be modified by a clunky text-based scripting language [5] which needed to be JITted and accelerated by a runtime environment [6]. It is a mess.
It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design by embedding it inside a clunky half-dozen layers hosted on top of half a dozen layers of Unix… I recoil in disgust, TBH. It is not merely inefficient, it’s profane, a desecration of the concept.
Look, I am not a Christian, but I was vaguely raised as one. There are a few nuggets of wisdom in the Christian bible.
Matthew 7:24-27 applies.
“Therefore, whosoever heareth these sayings of Mine and doeth them, I will liken him unto a wise man, who built his house upon a rock.
And the rain descended and the floods came, and the winds blew and beat upon that house; and it fell not, for it was founded upon a rock.
And every one that heareth these sayings of Mine and doeth them not, shall be likened unto a foolish man, who built his house upon the sand;
and the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell, and great was the fall of it.”
Unix is the sand here. An ever-shifting, impermanent base. Put more layers of silt and gravel and mud on top, and it’s still sand.
I’m saying we take bare x86 or ARM or RISC-V. We put Oberon on that, then Smalltalk or Lisp on Oberon, and done. Two layers, one of them portable. The user doesn’t even need to know what they’re running on, because they don’t have a compiler or anything like that.
You’re used to sand. You like sand. I can see that. But more sand is not the answer here. The answer is a high-pressure hose that sweeps away all the sand.
Hey, I appreciate the detailed response. I generally agree with your thesis, but I’m going to touch on some of your points.
[Lisp syntax] it is profoundly inaccessible for a lot of ordinary mortals.
I am going to have to strongly disagree here (unless we’re talking dense Common Lisp with its decades of quirky features). The Lisp syntax has few rules to learn (perhaps the fewest of any language other than Forth), is educationally friendly with tools like DrRacket, and is one of the easiest to teach first-time programmers due to its obvious evaluation flow.
All one needs to know is how parentheses work in mathematics. All one has to do to understand “how the data flows” is to look at the position in the expression and perform substitution of values, as they’d learn in grade school.
(a (b c)
(d f))
Visually, it is a sideways tree of words and indentation. It can thus be rendered in a colorful, friendly, tree-collapsing UI, with drag-and-drop expressions if need be. Other languages, with their complex syntax rules, cannot support such an interaction model.
Colleges have been teaching Scheme as a first programming language for years with SICP.
Really, the discussion of Lisp syntax is one done to death; this is me beating a horse fossil at this point. There’s the value in the ease of understanding the syntax, and the less obvious value of meta-programming, so the only other thing I’d add is that we could just build a Lisp OS and create Smalltalk as a Racket-style #lang on top of it. You’re not going to find a better language for letting people have their pet favorite syntax than a Lisp that will let you create that syntax.
Dylan could also just be implemented as a Racket-like #lang. (I’m not saying, though, that Racket is the ideal language, just that a meta-programmable language is an ideal low-level substrate to build things upon.)
But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.
This is, of course, why DSLs exist, and why Racket has entire language abstractions on top of it, as I mentioned with Smalltalk and Dylan. Good Lisp design actually scales better when you create powerful abstractions on top of it, making a complicated system accessible to “mere mortals”. Non-meta languages simply cannot scale.
It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design
Yes, but this is a tremendous undertaking. The web, for better or worse, is literally the closest thing we have today to a Smalltalk-y dynamic user-editable/configurable/extensible system. ChromiumOS is the closest we have to that being a full OS a user can just play with out of the box. What other system today can you just press F12 and hack to pieces?
I myself got into programming in the 90s by just clicking “View Source” and discovering the funky syntax required to make websites. I’ve mentored and assisted many kids today doing just the same. The web is the closest we have to this expression.
Now, I’m not saying we shouldn’t try to create that awesome minimal rebirth of systems. It’s one of my personal desires to see it happen, which is why I’m replying and was so interested in the talk. We’ve absolutely cornered ourselves with complicated designs, I absolutely agree. I was mostly just pointing out that we have a path towards at least something that gives us a shard of the benefits of such a system with the web.
The rest, though, I agree with at a high level, so I’ll leave things at that.
Hey, you’re welcome. I’m delighted when anyone wants to engage. :-)
But a serious answer deserved a serious response, so I slept on it, and, well, as you can see, it took some time. I don’t even have the excuse that “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.” (“I have only made this longer because I have not had the leisure to make it shorter.”)
If you are curious to do so, you might be amused to look through my older tech-blog posts – for example this or this.
The research project that led to these 3 FOSDEM talks started over a decade ago when I persuaded my editor that retrocomputing articles were popular & I went looking for something obscure that nobody else was writing about.
I looked at various interesting long-gone platforms or technologies – some of the fun ones were Apollo Aegis & DomainOS, SunDew/NeWS, the Three Rivers PERQ etc. – that had or did stuff nothing else did. All were either too obscure, or had little to no lasting impact or influence.
What I found, in time, were Lisp Machines. A little pointy lump in the soil, which as I kept digging turned into the entire Temple of Damanhur. (Anyone who’s never heard of that should definitely look it up.) And then as I kept digging, the entire war for the workstation, between whole-dynamic-environment languages (Lisp & Smalltalk, but there are others) and the reverse, the Unix way: the easy-but-somehow-sad environment of code written in an unsafe, hacky language, compiled to binaries, and run on an OS whose raison d’être is to “keep ’em separated” – to turn a computer into a pile of little isolated execution contexts, which can only pass info to one another via plain text files. An ugly, lowest-common-denominator sort of OS, but one which succeeded and thrived because it was small, simple, easy to implement and to port, relatively versatile, and didn’t require fancy hardware.
That at one time, there were these two schools – that of the maximally capable, powerful language, running on expensive bespoke hardware but delivering astonishing abilities… versus a cheap, simple, hack of a system that everyone could clone, which ran on cheap old minicomputers, then workstations with COTS 68K chips, then on RISC chips.
(The Unix Haters Handbook was particularly instructive. Also recommended to everyone; it’s informative, it’s free and it’s funny.)
For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind. It breaks my brain. “The Little Lisper” is the most impenetrable computer publication I’ve ever tried, and failed, to read.
A lot of my friends are jaded old Unix pros who, like me, went through multiple proprietary flavours before coming to Linux – or possibly a BSD. I won serious kudos from my first editor when I knew how to properly shut down a Tadpole SPARCbook with:
sync
sync
sync
halt
“What I tell you three times is true!” he crowed.
Very old Unix hands remember LispMs. They’ve certainly met lots of Lisp evangelists. They got very tired of me banging on about it. Example – a mate of mine said on Twitter:
«
A few years ago it was lisp is the true path. Before that it was touchscreens will kill the keyboard.
»
The thing is, while going on about it, I kept digging, kept researching. There’s more to life than Paul Graham essays. Yes, the old LispM fans were onto something; yes, the world lost something important when they were out-competed into extinction by Unix boxes; yes, in the right hands, it achieves undreamed-of levels of productivity and capability; yes, the famous bipolar Lisp programmer essay.
But there are other systems which people say the same sorts of things about. Not many. APL, but even APL fans recognise it has a niche. Forth, mainly for people who disdain OSes as unnecessary bloat and roll their own. Smalltalk. A handful of others. The “Languages of the Gods”.
Another thing I found is people who’d bounced off Lisp. Some tried hard but didn’t get it. Some learned it, maybe even implemented their own, but were unmoved by it and drifted off. A lot of people deride it – L.I.S.P. = Lotsa Insignificant Stupid Parentheses, etc. – but some of them do so with reason.
I do not know why this is. It may be a cultural thing; it may be a matter of which forms of logic and reasoning feel natural to different people. I had a hard time grasping algebra as a schoolchild. (Your comment about “grade school” stuff is impenetrable to me. I’m not American, so I don’t know what “grade school” is, I cannot parse your example, and I don’t know what level it is aimed at – but I suspect it’s above mine. I failed ‘O’ level maths and had to resit it. The single most depressing moment of my biology degree was when the lecturer for “Intro to Statistics” said he knew we were all scared, but it was fine; for science undergraduates like us, it would just be revision of our maths ‘A’ level. If I tried, I’d never even have got good enough exam scores to be rejected for a maths ‘A’ level.)
When I finally understood algebra, I “got” it and it made sense and became a useful tool, but I have only a weak handle on it. I used to know how to solve a quadratic equation but I couldn’t do it now.
I never got as far as integration or differentiation. I only grasped them at all when trying to help a member of staff with her comp-studies homework. It’s true: the best way to learn something is to teach it.
Edsger Dijkstra was a grumpy git, but when he said:
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration”
… and…
“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”
… I kind of know what he meant. I disagree, obviously, and I am not alone, but he did have a core point.
I think possibly that if someone learned Algol-style infix notation when they were young, and it’s all they’ve ever known, if someone comes along and tells them that it’s all wrong, to throw it away, and do it like this – or possibly (this(like(do(it)))) – instead, it is perfectly reasonable to reject it.
Recently I used the expression A <> B to someone online and they didn’t understand. I was taken aback. This is BASIC syntax and was universal when I was under 35. No longer. I rephrased it as A != B and they understood immediately.
«
C syntax is magical programmer catnip. You sprinkle it on anything and it suddenly becomes “practical” and “readable”.
»
I submit that there are some people who cannot intuitively grasp the syntaxless list syntax of Lisp. And others who can handle it fine but dislike it, just as many love Python indentation and others despise it. And others who maybe could but with vast effort and it will forever hinder them.
Comparison: I am 53 years old, I emigrated to the Czech Republic 7 years ago and I now have a family here and will probably stay. I like it here. There are good reasons people still talk about the Bohemian lifestyle.
But the language is terrifying: 4 genders, 7 cases, all nouns have 2 plurals (2-4 & >=5), a special set of future tenses for verbs of motion, & two entire sets of tenses – verb “aspects”, very broadly one for things that are happening in the past/present/future but are incomplete, and one for things in the past or present that are complete.
After 6 years of study, I am an advanced beginner. I cannot read a headline.
Now, context: I speak German, poorly. I learned it in 3 days of hard work travelling thence on a bus. I speak passable French after a few years of it at school. I can get by in Spanish, Norwegian and Swedish from a few weeks each.
I am not bad at languages, and I’m definitely not intimidated by them. But learning your first Slavic language in your 40s is like climbing Everest with 2 broken legs.
No matter how hard I try, I will never be fluent. I won’t live long enough.
Maybe if I started Russian at 7 instead of French, I’d be fine, but I didn’t. But 400 million people speak Slavic languages and have no problems with this stuff.
I am determined. I will get to some useful level if it kills me. But I’ll never be any good and I doubt I’ll ever read a novel in it.
I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother. I know many Anglophones (and other first-language speakers) who live in Czechia who just gave up on Czech. For a lot of people, it’s just too hard as an adult. My first course started with 15 students and ended with 3. This is on the low side of normal; 60% of students quit in the first 3 months, after paying in full.
And when people say that “look, really, f(a,b) is the same thing as (f a b)” or tell us that we’ll just stop seeing the parentheses after a while (see slides 6 & 7), IT DOES NOT HELP. In fact, it’s profoundly offputting.
I am regarded as a Lisp evangelist among some groups of friends. I completely buy and believe, from my research, that it probably is the most powerful programming language there’s ever been.
But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.
Before this talk, I conferred with 2 very smart programmer friends of mine about the infix/prefix notation issue. ISTM that it should be possible to have a smart editor that could convert between the two, or even round-trip convert a subset of them.
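As a rough existence proof for the easy half of that, here is a toy C sketch – invented from scratch, identifiers and + - * / only, no error handling – that converts infix to prefix s-expressions by precedence climbing:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    static const char *src;  /* cursor into the input expression */

    static int prec(char op)
    {
        switch (op) {
        case '+': case '-': return 1;
        case '*': case '/': return 2;
        }
        return 0;            /* not an operator */
    }

    static void skip_ws(void) { while (*src == ' ') src++; }

    static void expr(int min_prec, char *out);

    /* An identifier/number, or a parenthesised subexpression. */
    static void primary(char *out)
    {
        skip_ws();
        if (*src == '(') {
            src++;
            expr(1, out);
            skip_ws();
            src++;           /* consume ')' */
        } else {
            int n = 0;
            while (isalnum((unsigned char)*src))
                out[n++] = *src++;
            out[n] = '\0';
        }
    }

    /* Precedence climbing: fold operators of at least min_prec into out. */
    static void expr(int min_prec, char *out)
    {
        char lhs[600], rhs[600], tmp[600];
        primary(lhs);
        for (;;) {
            skip_ws();
            int p = prec(*src);
            if (p == 0 || p < min_prec)
                break;
            char op = *src++;
            expr(p + 1, rhs);   /* p + 1 keeps the operators left-associative */
            snprintf(tmp, sizeof tmp, "(%c %s %s)", op, lhs, rhs);
            strcpy(lhs, tmp);
        }
        strcpy(out, lhs);
    }

    int main(void)
    {
        char out[600];
        src = "a + b * (c - d)";
        expr(1, out);
        puts(out);              /* prints: (+ a (* b (- c d))) */
        return 0;
    }

Going the other way – and round-tripping comments and formatting – is where it gets genuinely hard.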
This is why I proposed Dylan on top of Lisp, not just Lisp. Because Lisp frightens people and puts them off, and that is not their fault or failing. There was always meant to be an easier, more accessible form for the non-specialists. Some of my favourite attempts were CGOL and Lisp wizard David A. Moon’s PLOT. If Moon thinks it’s worth doing, we should listen. You might have heard of this editor he wrote? It’s called “Emacs”. I hear it’s quite something.
For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind.
I myself am no Common Lisp expert. It’s an old language with odd behavior and too many macros. I personally use Clojure and find it extremely ergonomic for application development. I find modern Schemes in general to be fairly ergonomic as well, but maybe with a bit too many parens compared to Clojure.
Clojure does a good job of limiting parens, and introducing reader macros of [] for vectors and {} for hash-maps and it works out exceedingly well. The positional assumptions it makes limit parens and it really isn’t hard to read. It’s like executable JSON, only way easier to read. It isn’t far from the type of JS and Ruby I write anyway.
There’s more to life than Paul Graham essays.
The only real PG thing worth reading is Roots of Lisp, which breaks down Lisp into its axiomatic special forms. You can see how one can start from lambda calculus, add some special forms, and end up with the kernel for a language that can do anything. Purely as an educational read.
“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”
Today, this is Java. I’m sure you’d agree. Its pervasive use of non-message-passing OO has crippled two entire generations of programmers, unable to grasp first class functions and simple data flow. They cobble together things with hundreds of nouns, leaving the logic opaque and dispersed throughout these nouns and their verby interactions. Tremendous effort is required just to track down where anything happens.
Today, C syntax is just obvious and intuitive.
This is only true of people with prior experience with C syntax languages. Exposure to a C style language first seats it as a norm within the brain, just as one’s first spoken language. I wouldn’t say C is intuitive to someone who has never programmed before.
But the [Czech] language is terrifying
I speak Polish, so I can very much relate to Czech and other Slavic languages. In fact, Polish is often considered the hardest language to learn.
I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother.
I still strongly disagree.
I am a visual-spatial person, and visualizing the trees and expressions is extremely easy for me. I have never felt more at home than I do with Clojure. It was an immediately overwhelmingly positive experience and I’m not sure any language will ever have a syntax or software model that is more matching my thought processes. (Prototypal languages like JavaScript and Lua come in a close second, because then I’m thinking in trees made of hash-maps instead.)
see slides 6 & 7
Actually, slide 7 is all I see (the words), and honestly, the default syntax highlighting for Lisps shouldn’t be rainbow braces, but muted and almost invisible braces like in said slide. Just indented nouns – like Python!
I’ve adapted to many languages with all sorts of funky syntaxes (WebFOCUS comes to mind) and I can’t say any was hard for me to get comfortable with after enough exposure. But the key to readability is finding the literal “shapes” on the screen and their patterns. My eyes can just scan them. (Python is the most readily readable in that regard.) But, if one does not write Clojure in idiomatic style, it does truly become hard to read.
Lisp syntax lives or dies by how you horizontally indent braces. If you do not practice “semantic indentation” then you can truly be lost in a sea of meaningless parens, trying to find how the words relate to each other. That indentation visually shows the relationship. A visual tree.
But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.
I have introduced many people to Clojure and they’ve never found the syntax to be a barrier to entry. As a functional programmer, I find that C syntax gets in the way of Functional patterns and its syntax is a barrier to entry in learning Functional Programming.
I am certain the Clojure equivalent would be shorter and much easier to read. Notice it looks fairly lispy on its own, in idiomatic Functional C# style. That is the nature of a Functional approach, be it C like syntax, or Lispy syntax.
A more recent toy example was a technical challenge posited to me to write a palindrome function, which I decided to write functionally in both JavaScript (as a singular pure expression) and Clojure for comparison:
Is the JavaScript form any easier to read? I would say the Clojure form is slightly easier as long as you understand semantic indentation. (Obviously you need to understand both languages as well as be somewhat versed in Functional Programming to make heads or tails of this FP voodoo.)
I would say that familiarity is key, but moreso: consistent style.
Any language written in a funky style that is not idiomatic is going to be immediately hard to read. I guarantee I can take any language and make it harder to read simply by changing the style. I personally find it harder to read something even if someone makes a minor lazy mistake like writing 1+2 instead of 1 + 2. It throws off the expected “shape” of the code and impedes readability.
This is why I proposed Dylan on top of Lisp, not just Lisp.
If you mean Dylan implemented as a reader macro in Lisp, as an option, then I’m for it, for those who have hangups over syntax. But also, any language they might prefer might as well be a reader-macro option. I do think, though, that simply building good DSLs would go a long way in building an entire OS out of one language, without having to reach for C-ish syntax.
No no, it’s fine, I am learning all the while here.
I myself am no Common Lisp expert. It’s an old language with odd behavior and too many macros. I personally use Clojure and find it extremely ergonomic for application development. I find modern Schemes in general to be fairly ergonomic as well, but maybe with a bit too many parens compared to Clojure.
Interesting. Noted.
Clojure does a good job of limiting parens, and introducing reader macros of [] for vectors and {} for hash-maps and it works out exceedingly well. The positional assumptions it makes limit parens and it really isn’t hard to read. It’s like executable JSON, only way easier to read. It isn’t far from the type of JS and Ruby I write anyway.
I have a suspicion that this may be the kind of improvement that is only helpful to those who have achieved a certain level of proficiency already. In other words, that it doesn’t help beginners much; maybe it reduces the steepness of part of the learning curve later on, but not at the beginning – and the beginning is possibly the most important part.
The only real PG thing worth reading is Roots of Lisp, which breaks down Lisp into its axiomatic special forms. You can see how one can start from lambda calculus, add some special forms, and end up with the kernel for a language that can do anything. Purely as an educational read.
Interesting.
I found his essays very persuasive at first. I have grown a little more sceptical over time.
Today, this is Java. I’m sure you’d agree.
Hmmm. Up to a point, perhaps yes.
I’d probably say C and C++ more generally, actually.
I have read a lot of loper-os.org, and it pointed me at an essay of Mark Tarver’s, “The Bipolar Lisp Programmer”. A comment of his really struck me:
«
Now in contrast, the C/C++ approach is quite different. It’s so damn hard to do anything with tweezers and glue that anything significant you do will be a real achievement. You want to document it. Also you’re liable to need help in any C project of significant size; so you’re liable to be social and work with others. You need to, just to get somewhere.
»
http://marktarver.com/bipolar.html
Its pervasive use of non-message-passing OO has crippled two entire generations of programmers, unable to grasp first class functions and simple data flow. They cobble together things with hundreds of nouns, leaving the logic opaque and dispersed throughout these nouns and their verby interactions. Tremendous effort is required just to track down where anything happens.
I really don’t know. I have never mastered an OO language. I am currently reading up about Smalltalk in some detail, rather than theoretical overviews. To my pleased surprise, the Squeak community have been quite receptive to the ideas in my talk.
Today, C syntax is just obvious and intuitive.
This is only true of people with prior experience with C syntax languages. Exposure to a C style language first seats it as a norm within the brain, just as one’s first spoken language. I wouldn’t say C is intuitive to someone who has never programmed before.
For clarity: I was being somewhat sardonic here. I am not saying that I personally believe this to be true, but that it is common, widely-held received wisdom.
I speak Polish, so I can very much relate to Czech and other Slavic languages. In fact, Polish is often considered the hardest language to learn.
:-) I can well believe that!
I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother.
I still strongly disagree.
I thought you might, and this response did sadden me, because I am failing to get my point across at all, clearly. :-(
Actually, slide 7 is all I see (the words), and honestly, the default syntax highlighting for Lisps shouldn’t be rainbow braces, but muted and almost invisible braces like in said slide. Just indented nouns – like Python!
This is sort of my point. (And don’t get me wrong; I am not a Python enthusiast. I’ve been failing to learn it since v1 was current.)
The thing I think is instructive about Python is the way that experienced programmers react to it. It polarises people. Some love it, some hate it.
Even rookie programmers like me know that different people feel different indentation patterns are right and good. There’s a quote in your link:
«
Nearly everybody is convinced that every style but their own is ugly and unreadable. Leave out the “but their own” and they’re probably right…
»
Python forces everyone to adhere to the same indentation pattern, by making it meaningful. The people that hate Python are probably people that have horribly idiosyncratic indentation styles, and thus would probably benefit the most from being forced into one that makes sense to others, if their code is ever to be read or maintained by anyone else.
Thus, I suspect that strenuous objections to Python tell you something far more valuable about the person making the objections, than anything the objections themselves could ever tell you about Python.
I’ve adapted to many languages with all sorts of funky syntaxes (WebFOCUS comes to mind) and I can’t say any was hard for me to get comfortable with after enough exposure. But the key to readability is finding the literal “shapes” on the screen and their patterns. My eyes can just scan them. (Python is the most readily readable in that regard.) But, if one does not write Clojure in idiomatic style, it does truly become hard to read.
So, it sounds to me like you have a versatile and adaptable mind that readily adapts to different languages. Most Lisp people seem to have minds like that.
It seems to me that where they often fail is in not realising that not everyone has minds like that. That for many people, merely learning one style or one programming language was really hard, and when they finally got it, they didn’t want to ever have to change, to ever have to go through it again by learning something else.
We all know people who only speak a single human language and say that they don’t have a knack for languages and can’t learn new ones. That is not always just a sign of poor teaching methods. Maybe they are actually right. Maybe they really do lack the ability to learn this stuff. Maybe it’s real. I see no reason why not.
A lack of ability to learn to speak more than one human language does not stop someone from being highly creative in that language – I am sure that many wonderful writers, poets, playwrights, novelists etc. are monoglot.
Well, a lot of skilful programmers who are able to do very useful work are also possibly monoglots. It took a lot of effort for them to learn one language, and they really like it, and all they will even consider are variants of that single language, or things that are different but at least use the same syntax.
In the ’50s and ’60s, it might have been COBOL, or PL/1, or RPG.
In the ’70s & ’80s, it might have been BASIC and variants on BASIC, especially for self-taught programmers. For another group, with more formal training or education, Pascal and variants on Pascal.
In the ‘90s onwards, it’s been C.
And so now we have lots of languages with C syntax and a cosmetic resemblance to C, and most people are comfortable with that.
Me, personally, I always found C hard work and while I admired its brevity, I found it unreadable. Even my own code.
Later, as more people piled onto the Internet and I got to talk to others, I found that this was a widespread problem.
But that was swiftly overwhelmed and buried behind the immense momentum of C and C-like languages. Now, well, Stephen Diehl’s observation that I quoted is just how it is for most people in industry.
If on the surface it looks like C, then it’s easy. Java looks superficially like C, although it’s not underneath. Javascript looks like it, although it’s not, and it’s not like Java either. C++ is like C but with a million knobs and dials on. D is like C. C# is like C. And they’ve thrived.
And people who know nothing else now think that a language that replaces { and } with BEGIN and END is impossibly wordy and verbose.
In the opposite direction, take a language which replaces { and } – and also for and while and if and almost everything else – with just thousands of ( and huge blocks of nothing but ) – and it doesn’t even keep the block delimiters in order! Well, YES, to such a person, YES, this is totally unreadable.
I do not know how old you are. I am quite old; I’m 53. I began and finished programming in the 1980s. But I try to retain a flexible mind.
However, I see people of my age raging at “new math”. The idea that
3 + 4 * 5
… is the same thing as
4 * 5 + 3
… deeply offends them. They are old enough that they’ve forgotten school maths. The little they recall is fragmentary and inconsistent. They have forgotten rules that they learned later, such as “Bless My Dear Aunt Sally” or “BODMAS”. (If these are meaningless, Google them. :-) ) They think that they can do it, and they don’t know that actually decades of use of calculators means they can’t. Prove to them with a calculator that actually there are precedence rules, and they will angrily say that the calculator is wrong and was clearly programmed by someone who “follows this ‘New Maths’ nonsense.”
I have often read Lisp people saying things like:
«
Look at this:
f(a,b)
versus
(f a b)
It’s the same! We have just moved the same characters around! It’s really the same thing!
»
Well, no, to someone who only knows x(y,z) and nothing else, this is self-evidently nonsense and ridiculous.
I put it to you that it is necessary to accept that, just as there are people who are monoglots and will die monoglots and may have rich and fulfilling creative lives being monoglots…
… that by the same token, there are useful, skilled, productive programmers who can only handle ALGOL-type languages, who with serious effort might be able to move from the C branch of the ALGOL family to another branch, such as Python or PHP or Perl, but asking them to step outside the ALGOL family altogether and learn APL or Forth or Haskell or Lisp is just a step too far, one that they will never successfully take, and that is not a problem or a failing of theirs.
Lisp syntax lives or dies by how you horizontally indent braces. If you do not practice “semantic indentation” then you can truly be lost in a sea of meaningless parens, trying to find how the words relate to each other. That indentation visually shows the relationship. A visual tree.
Are you familiar with the “sweet expressions” project? It tried to “fix” Lisp syntax with indentation. It got nowhere much despite a lot of effort.
In other words, I do not think that indentation can ever be the answer. It might help those who get over this hurdle, climb this hill, but it won’t help those for whom the hill will always be too high and too steep.
But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.
Agreed!
I have introduced many people to Clojure and they’ve never found the syntax to be a barrier to entry. As a functional programmer, I find that C syntax gets in the way of Functional patterns and its syntax is a barrier to entry in learning Functional Programming.
I am glad to hear it. I do suspect that for a lot of people, though, FP itself is just too far away from anything they will ever need.
I read your “semantic formatting” link and I can’t understand a word of it, I’m afraid. :-(
I am certain the Clojure equivalent would be shorter and much easier to read. Notice it looks fairly lispy on its own, in idiomatic Functional C# style. That is the nature of a Functional approach, be it C like syntax, or Lispy syntax.
[…]
I would say that familiarity is key, but moreso: consistent style.
Way over my head. I really am sorry.
Any language written in a funky style that is not idiomatic is going to be immediately hard to read. I guarantee I can take any language and make it harder to read simply by changing the style. I personally find it harder to read something even if someone makes a minor lazy mistake like writing 1+2 instead of 1 + 2. It throws off the expected “shape” of the code and impedes readability.
There you go. To me, 1+2 and 1 + 2 are completely interchangeable, but + 1 2 is an effort to decode.
If you mean Dylan implemented as a reader macro in Lisp, as an option, then I’m for it, for those who have hangups over syntax. But also, any language they might prefer might as well be a reader-macro option. I do think, though, that simply building good DSLs would go a long way in building an entire OS out of one language, without having to reach for C-ish syntax.
I had to Google this term. If I understand you correctly, well, yes, that is the general idea. I think…
I think that is my favourite FOSDEM talk ever. I’ve run many of the OSs listed there and went through a similar journey of discovering the road not taken. I too wonder what a new OS could be like. I knew I was in for a good talk when Newtons and Smalltalk appeared.
Cheaper than flash SSDs, gigabyte for gigabyte; and obviously SSDs are cheaper than RAM, or instead of having a few hundred gig of SSDs holding our swapfiles, we’d have a few hundred gig of RAM and no swapfiles.
The thing is that they’re byte-by-byte rewritable. You don’t need that in a disk; in fact, you need to wrap it in a tonne of extra logic to hide it away, since disks work on a sector-by-sector or block-by-block basis. So it makes 3D XPoint less competitive in the SSD space.
I think you’d be interested in Twizzler. I found it watching Peter Alvaro’s talk “What not where: Why a blue sky OS?”. It seems to address some of your points.
I thought it was discussed on lobste.rs, but can’t find the link atm.
That was fascinating – thank you for that link!
Loved that talk, brilliant. Thanks.
That is mind blowing. Someone needs to post their 2020 paper. I’m still reeling.
I’m not entirely convinced a new model is needed. We already have memory-mapped files in all the major operating systems. And file pages can already be as small as 4KiB, which is tiny compared to common file sizes these days. Perhaps it would make sense to have even smaller pages for something like Optane, but do we really need to rethink everything? What would we gain?
What we’d gain is eliminating 50+ years of technical debt.
I recommend the Twizzler presentation mentioned a few comments down. It explains some of the concepts much better than I can. These people have really dug into the technical implications far deeper than me.
The thing is this: persistent memory blows apart the computing model that has prevailed for some 60+ years now. This is not the Von Neumann model or anything like that; it’s much simpler.
There are, in all computers since about the late 1950s, a minimum of 2 types of storage: primary (1y) store, which the processor can address directly, and secondary (2y) store, which it cannot.
The processor can only work on data in 1y store; everything in 2y store must be fetched into it, worked on, and put back.
This is profoundly limiting. It’s slow. It doesn’t matter how fast the secondary storage is; the round trip makes it slow.
PMEM changes that. You have only RAM – but some of your RAM keeps its contents when the power is off.
Files are legacy baggage. When all your data is in RAM all the time, you don’t need files. Files are what filesystems hold; filesystems are an abstraction method for indexing blocks of secondary storage. With no secondary storage, you don’t need filesystems any more.
I feel like there are a bunch of things conflated here:
Filesystems and file abstractions provide a global per-device namespace. That is not a great abstraction today, where you often want a truly global namespace (i.e. one shared between all of your devices) or something a lot more restrictive. I’d love to see more of the historical capability-systems research resurrected here: for typical mobile-device UI abstractions, you really want a capability-based filesystem. Persistent memory doesn’t solve any of the problems of naming and access. It makes some of them more complicated: if you have a file on a server somewhere, it’s quite easy to expose remote read and write operations; it’s very hard to expose a remote mmap – trying to run a cache coherency protocol over the Internet does not lead to good programming models.
Persistence is an attribute of files, but in a very complicated way. On *NIX, the canonical way of doing an atomic operation on a file is to copy the file, make your changes, and then move the new copy over the top of the old one (a sketch of this dance follows below). This isn’t great, and it would be really nice if you could have transactional updates over ranges of files (annoyingly, ZFS actually implements all of the machinery for this, it just doesn’t expose it at the ZPL). With persistent memory, atomicity is hard. On current implementations, atomic operations with respect to CPU cache coherency and atomic operations with respect to committing data to persistent storage are completely different things. Getting any kind of decent performance out of something that directly uses persistent memory and is resilient in the presence of failure is an open research problem.
Really using persistent memory in this way also requires memory safety. As one of The Machine’s developers told me when we were discussing CHERI: with persistent memory, your memory-safety bugs last forever. You’ve now turned your filesystem abstractions into a concurrent GC problem.
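To make the copy-and-rename dance concrete, here is a minimal sketch – in Clojure via java.nio, since that is the dialect discussed downthread; the names are illustrative and error handling is omitted:

    ;; write a scratch copy, force it to stable storage, then
    ;; atomically rename it over the original (same filesystem assumed)
    (import '(java.nio ByteBuffer)
            '(java.nio.channels FileChannel)
            '(java.nio.file Files OpenOption Paths
                            StandardCopyOption StandardOpenOption))

    (defn atomic-update [path-str ^String new-content]
      (let [target (Paths/get path-str (make-array String 0))
            tmp    (Paths/get (str path-str ".tmp") (make-array String 0))]
        (with-open [ch (FileChannel/open
                         tmp (into-array OpenOption
                                         [StandardOpenOption/CREATE
                                          StandardOpenOption/WRITE
                                          StandardOpenOption/TRUNCATE_EXISTING]))]
          (.write ch (ByteBuffer/wrap (.getBytes new-content "UTF-8")))
          (.force ch true))                    ; fsync: don't trust the cache
        (Files/move tmp target
                    (into-array StandardCopyOption
                                [StandardCopyOption/ATOMIC_MOVE]))))

The point is the ordering: nothing touches the original file until the replacement is fully on stable storage.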
Excellent points; thank you.
May I ask, are you the same David Chisnall of the “C Is Not a Low-Level Language” paper? That is probably the single paper I cite most often. My compliments on it.
Your points are entirely valid, and that is why I have been emphasizing the “just for fun” angle of it. I do not have answers to some of these hard questions, but I think that at first, what is needed is some kind of proof of concept. Something that demonstrates the core point: that we can have a complex, rich, capable environment that is able to do real, interesting work, which in some ways exceeds the traditional *nix model for a programmer, which runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.
Once this point has been made by demonstration, then perhaps it will be possible to tackle much more sophisticated systems, which provide reliability, redundancy, resiliency, and all that nice stuff that enterprises will pay lots of money for.
There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things. I am not here to fight that battle. What I was trying to come up with was a proposal to use some existing open technology – things that are already FOSS, already out there, and not new and untested and immature, but solid, time-proven tools that have survived despite decades in obscurity – and assemble them into something that can be used to explore new and largely uncharted territory.
ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.
Linux didn’t spring fully-formed from Torvalds’ brow ready to defeat AIX, HP-UX and Solaris in battle. It needed decades to grow up.
The Machine didn’t get decades.
Smalltalk has already had decades.
Reply notifications are working again, so I just saw this!:
That’s me, thanks! I’m currently working on a language that aims to address a lot of my criticisms of the C abstract machine.
I do agree with the ‘make it work, make it correct, make it fast’ model, but I suspect that you’ll find with a lot of these things that the step from ‘make it work’ to ‘make it correct’ is really hard. A lot of academic OS work fails to make it from research to production because they focus on making something that works for some common cases and miss the bits that are really important in deployment. For persistent memory systems, how you handle failure is probably the most important thing.
With a file abstraction, there’s an explicit ‘write state for recovery’ step and a clear distinction in the abstract machine between volatile and non-volatile storage. I can quite easily do two-phase commit to a POSIX filesystem (unless my disk is lying about sync) and end up with something that leaves my program in a recoverable state if the power goes out at any point. I may lose uncommitted data, but I don’t lose committed data. Doing the same thing with a single-level store is much harder, because caches are (as their name suggests) hidden. Data that’s written back to persistent memory is safe; data in caches isn’t. I have to ensure that, independent of the order in which things are evicted from cache, my persistent storage is in a consistent state. This is made much harder on current systems by the fact that atomicity with respect to other cores is done via the cache coherency protocol, whereas atomicity with respect to main memory (persistent or otherwise) is done via cache evictions; guaranteeing that you have a consistent view of your data structures with respect to both other cores and persistent storage is therefore incredibly hard.
The only systems that I’ve seen do this successfully segregated persistent and volatile memory and provided managed abstractions for interacting with it. I particularly like the FaRM project from some folks downstairs.
I think there’s some truth to that accusation, though I’m biased from having tried to do something very different in an open source project. It’s difficult to get traction for anything different because you start from a position of unfamiliarity when trying to explain to people what the benefits are. Unless it’s solving a problem that they’ve hit repeatedly, it’s hard to get the message across. This is true everywhere, but in projects that depend on volunteers it is particularly problematic.
That’s not an entirely unfair characterisation. The Machine didn’t depend on memristors, though; it was intended to work with the kind of single-level store that you can build today, and to be ready to adopt memristor-based memory when it became available. It suffered a bit from the same thing that a lot of novel OS projects do: they wanted to build a Linux compat layer to make migration easy, but once they had a Linux compat layer it was just a slow way of running Linux software. One of my colleagues likes to point out that a POSIX compatibility layer tends to be the last piece of native software written for any interesting OS.
I think files are more than just an abstraction over block storage; they’re an abstraction over any storage. They’re a crucial part of the UX as well. Consider directories… Directories are not necessary for filesystems to operate (it could all just be flat files), but they exist purely for usability and organisation. I think even in the era of PMEM, users will demand some way to organise information, and it’ll probably end up looking like files and directories.
Most mobile operating systems don’t expose files and directories and they are extremely popular.
True, but those operating systems still expose filesystems to developers. Users don’t necessarily need to be end users. iOS and Android also do expose files and directories to end users now, although I know iOS didn’t for a long time.
iOS also provides Core Data, which would be a better interface in the PMEM world anyway.
Not all of them do, no.
NewtonOS didn’t. PalmOS didn’t. The reason being that they didn’t have filesystems.
iOS is just UNIX. iOS and Android devices are tiny Unix machines in your pocket. They have all the complexity of a desktop workstation – millions of lines of code in a dozen languages, multiuser support, all that – it’s just hidden.
I’m proposing not just hiding it. I am proposing throwing the whole lot away and putting something genuinely simple in its place. Not hidden complexity: eliminating the complexity.
They tried. Really hard. But in the end, even Apple had to give up and provide the Files app.
Files are an extremely useful abstraction, which is why they were invented in the first place. And why they get reinvented every time someone tries to get rid of them.
Files (as a UX and data interchange abstraction) are not the same thing as a filesystem. You don’t need a filesystem to provide a document abstraction. Smalltalk-80 had none. (It didn’t have documents itself, but I was on a team that added documents and other applications to it.) And filesystems tend to lack stuff you want for documents, like metadata and smart links and robust support for updating them safely.
I’m pretty sure the vast majority of iOS users don’t know Files exist.
I do, but I almost never use it.
And extremely limiting.
We’re never going to have fewer storage tiers – only more. And general computation is not going to be done in a permanent medium for most tasks, most of the time, until software is infinitely better than it is now. Imagine if, every time a program crashed, all its files were damaged. Our tower of sedimentary software layers relies on being able to restart things that have gone off the rails; so far, the only way we’ve been able to build large distributed systems reliably has been to make more of them ephemeral and stateless, and/or able to reconstruct state from an event history.
Good reminder of history and interesting idea. For me the biggest take-away is the fact that a lot of stuff was just built for fun.
It would be cool to explore OSs by using these existing projects. Personally, I still want to explore building something from scratch.
Great talk!
Thank you!
Enjoyed the talk. Especially some of the history I didn’t know about Oberon.
Just not sure why the suggestion after Smalltalk was Dylan, a Lisp that looks nothing like a Lisp and is less popular than every other Lisp. There’s already great interest in a Lisp OS (other than Emacs), so it just seems like pet favorites here, or a dislike for Lisp syntax, but alright.
I generally agree with moving back to environments that integrate a programming language, though. Have you by chance considered the Web?
I mean a realistic approach would be fusing a Blink runtime to Linux, or using ChromiumOS as a base, and having JS as a mutable, dynamic language + WebAssembly system.
We’re already heading that way, although we’d need to embrace open ECMAScript Modules and Web Components as the building blocks instead of minified bundles, and we’d need to stop abusing the semantics of the web, treating HTML and CSS as purely build artifacts (things that are hard to read and extend).
Thanks!
Part of the plan is to make something that is easy and fun. It will be limited at first compared to the insane incomprehensible unfathomable richness of a modern *nix or Windows OS. Very limited. So if it is limited, then I think it has to be fun and accessible and easy and comprehensible to have any hope of winning people over.
Lisp is hard. It may be the ultimate programming language, the only programmable programming language, but the syntax is not merely offputting, it is profoundly inaccessible for a lot of ordinary mortals. Just learning an Algol-like language is not hard. BASIC was fun and accessible. The right languages are toys for children, and that’s good.
Today, I have met multiple professional Java programmers who have next to no grasp of the theory, or of algorithms or any comp-sci basic principles… but they can bolt together existing modules just fine and make useful systems.
Note: I am not saying that this is a good way to build business logic, but it is how a lot of organizations do it.
There is a ton of extra logic that one must internalize to make Lisp comprehensible. I suspect that there is a certain type of mind for whom this stuff is accessible, easily acquired, and then they find it intuitive and obvious and very useful.
But I think that that kind of mind is fairly rare, and I do not think that this kind of language – code composed of lists, containing naked ASTs – will ever be a mass-market proposition.
Dylan, OTOH, did what McCarthy originally intended. It wrapped the bare lists in something accessible, and they demonstrated this by building an exceptionally visual, colourful, friendly graphical programming language in it. It was not intended for building enterprise servers; it was built to power an all-graphical pocket digital assistant, with a few meg of RAM and no filesystem.
Friendly and fun, remember. Accessible, easy, simple above all else. Expressly not intended to be “big and professional like GNU.”
But underneath Dylan’s friendly face is the raw power of Lisp.
So the idea is that it gives you the best of both worlds, in principle. For mortals, there’s an easy, colourful, fun toy. But one you can build real useful apps in.
And underneath that, interchangeable and interoperable with it, is the power of Lisp – but you don’t need to see it or interact with it if you don’t want to.
And beneath that is Oberon, which lets you twiddle bits if you need to in order to write a device driver or a network stack for a new protocol. Or create a VM and launch it, so you can have a window with Firefox in it.
Oh dear gods, no!
There is an old saying in comp sci, attributed to David Wheeler: “We can solve any problem by introducing an extra level of indirection.”
It is often attributed to Butler Lampson, one of the people at PARC who designed and built the Xerox Alto, Dolphin and Dorado machines. He is also said to have added a rider: “…except for the problem of too many layers of indirection.”
The idea here is to strip away a dozen layers of indirection and simplify it down to the minimum number of layers that can provide a rich, programmable, high-level environment that does not require users to learn arcane historical concepts such as “disks” or “directories” or “files”, or “binaries” and “compilers” and “linkers”. All that is ancient history, implementation baggage from 50 years of Unix.
The WWW was a quick’n’dirty, kludgy implementation of hypertext on Unix, put together using NeXTstations. The real idea of hypertext came from Ted Nelson’s Xanadu.
The web is half a dozen layers of crap – a protocol [1] that carries composite documents [2] built from Unix text files [3] and rendered by a now massively complex engine [4] whose operation can be modified by a clunky text-based scripting language [5] which needed to be JITted and accelerated by a runtime environment [6]. It is a mess.
It is more or less exactly what I am trying to get away from. The idea of implementing a new OS in a minimal 2 layers, replacing a dozen layers, and then implementing that elegant little design by embedding it inside a clunky half-dozen layers hosted on top of half a dozen layers of Unix… I recoil in disgust, TBH. It is not merely inefficient, it’s profane, a desecration of the concept.
Look, I am not a Christian, but I was vaguely raised as one. There are a few nuggets of wisdom in the Christian bible.
Matthew 7:24-27 applies.
“Therefore, whosoever heareth these sayings of Mine and doeth them, I will liken him unto a wise man, who built his house upon a rock. And the rain descended and the floods came, and the winds blew and beat upon that house; and it fell not, for it was founded upon a rock. And every one that heareth these sayings of Mine and doeth them not, shall be likened unto a foolish man, who built his house upon the sand; and the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell, and great was the fall of it.”
Unix is the sand here. An ever-shifting, impermanent base. Put more layers of silt and gravel and mud on top, and it’s still sand.
I’m saying we take bare x86 or ARM or RISC-V. We put Oberon on that, then Smalltalk or Lisp on Oberon, and done. Two layers, one of them portable. The user doesn’t even need to know what they’re running on, because they don’t have a compiler or anything like that.
You’re used to sand. You like sand. I can see that. But more sand is not the answer here. The answer is a high-pressure hose that sweeps away all the sand.
Hey, I appreciate the detailed response. I generally agree with your thesis, but I’m going to touch on some of your points.
I am going to have to strongly disagree here (unless we’re talking dense Common Lisp, with its decades of quirky features). Lisp syntax has few rules to learn (perhaps the fewest of any language bar Forth), is educationally friendly with tools like DrRacket, and is one of the easiest languages to teach first-time programmers due to its obvious evaluation flow.
All one needs to know is how parentheses work in mathematics. To understand how the data flows, all one has to do is look at the position in the expression and substitute values, just as they’d have learned in grade school.
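For instance – my example, nothing fancier – evaluation really is inside-out substitution, step by step:

    (* (+ 1 2) (- 9 5))   ; substitute (+ 1 2) => 3 and (- 9 5) => 4
    (* 3 4)               ; substitute once more
    12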
Visually, it is a sideways tree of words and indentation. It can thus be rendered with a colorful, friendly, tree-collapsing UI if need be, with drag-and-drop expressions. No other language, with its complex syntax rules, can support such an interaction model.
Carmack, for example, chose to teach his son programming with Racket: https://twitter.com/id_aa_carmack/status/569688211158511616?lang=en
Colleges have been teaching Scheme as a first programming language for years with SICP.
Really, the discussion of Lisp syntax is one done to death; this is me beating a horse fossil at this point. There’s the value in the ease of understanding the syntax, and the less obvious value of meta-programming, so the only other thing I’d add is that we could just build a Lisp OS and create Smalltalk as a Racket-style #lang on top of it. You’re not going to find a better language to let people have their pet favorite syntax than a Lisp that will let you create that syntax. Dylan could also just be implemented as a Racket-like #lang. (I’m not saying that Racket is the ideal language, just that a meta-programmable language is an ideal low-level substrate to build things upon.)
This is, of course, why DSLs exist, and why Racket has entire language abstractions on top of it, as I mentioned with Smalltalk and Dylan. A good Lisp design actually scales better when you create powerful abstractions on top of it, making a complicated system accessible to “mere mortals”. Non-meta languages simply cannot scale.
Yes, but this is a tremendous undertaking. The web, for better or worse, is literally the closest thing we have today to a Smalltalk-y dynamic user-editable/configurable/extensible system. ChromiumOS is the closest we have to that being a full OS a user can just play with out of the box. What other system today can you just press F12 and hack to pieces?
I myself got into programming in the 90s by just clicking “View Source” and discovering the funky syntax required to make websites. I’ve mentored and assisted many kids today doing just the same. The web is the closest we have to this expression.
Now, I’m not saying we shouldn’t try to create that awesome minimal rebirth of systems. It’s one of my personal desires to see it happen, which is why I’m replying and was so interested in the talk. We’ve absolutely cornered ourselves with complicated designs; I absolutely agree. I was mostly just pointing out that the web gives us a path towards at least a shard of the benefits of such a system.
The rest, though, I agree with at a high level, so I’ll leave things at that.
Hey, you’re welcome. I’m delighted when anyone wants to engage. :-)
But a serious answer deserved a serious response, so I slept on it, and, well, as you can see, it took some time. I don’t even have the excuse that “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.” (“I have made this longer only because I have not had the leisure to make it shorter.”)
If you are curious to do so, you might be amused to look through my older tech-blog posts – for example this or this.
The research project that led to these 3 FOSDEM talks started over a decade ago when I persuaded my editor that retrocomputing articles were popular & I went looking for something obscure that nobody else was writing about.
I looked at various interesting long-gone platforms or technologies – some of the fun ones were Apollo Aegis & DomainOS, SunDew/NeWS, the Three Rivers PERQ etc. – that had or did stuff nothing else did. All were either too obscure, or had little to no lasting impact or influence.
What I found, in time, were Lisp Machines. A little pointy lump in the soil, which as I kept digging turned into the entire Temple of Damanhur. (Anyone who’s never heard of that should definitely look it up.) And then, as I kept digging, the entire war for the workstation, between whole-dynamic-environment languages (Lisp & Smalltalk, but there are others) and the reverse, the Unix way: the easy-but-somehow-sad environment of code written in an unsafe, hacky language, compiled to binaries, and run on an OS whose raison d’être is to “keep ‘em separated” – to turn a computer into a pile of little isolated execution contexts, which can only pass info to one another via plain text files. An ugly, lowest-common-denominator sort of OS, but one which succeeded and thrived because it was small, simple, easy to implement and to port, relatively versatile, and didn’t require fancy hardware.
That at one time, there were these two schools – that of the maximally capable, powerful language, running on expensive bespoke hardware but delivering astonishing abilities… versus a cheap, simple, hack of a system that everyone could clone, which ran on cheap old minicomputers, then workstations with COTS 68K chips, then on RISC chips.
(The Unix Haters Handbook was particularly instructive. Also recommended to everyone; it’s informative, it’s free and it’s funny.)
For a while, I was a sort of Lisp zealot or evangelist – without ever having mastered it myself, mind. It breaks my brain. “The Little Lisper” is the most impenetrable computer publication I’ve ever tried, and failed, to read.
A lot of my friends are jaded old Unix pros, like me having gone through multiple proprietary flavours before coming to Linux. Or possibly a BSD. I won serious kudos from my first editor when I knew how to properly shut down a Tadpole SPARCbook with:
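    sync
    sync
    sync
    halt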
“What I tell you three times is true!” he crowed.
Very old Unix hands remember LispMs. They’ve certainly met lots of Lisp evangelists. They got very tired of me banging on about it. Example – a mate of mine said on Twitter:
« A few years ago it was ‘Lisp is the true path’. Before that it was ‘touchscreens will kill the keyboard’. »
The thing is, while going on about it, I kept digging, kept researching. There’s more to life than Paul Graham essays. Yes, the old LispM fans were onto something; yes, the world lost something important when they were out-competed into extinction by Unix boxes; yes, in the right hands, it achieves undreamed-of levels of productivity and capability; yes, the famous bipolar Lisp programmer essay.
But there are other systems which people say the same sorts of things about. Not many. APL, but even APL fans recognise it has a niche. Forth, mainly for people who disdain OSes as unnecessary bloat and roll their own. Smalltalk. A handful of others. The “Languages of the Gods”.
Another thing I found is people who’d bounced off Lisp. Some tried hard but didn’t get it. Some learned it, maybe even implemented their own, but were unmoved by it and drifted off. A lot of people deride it – L.I.S.P. = Lotsa Insignificant Stupid Parentheses, etc. – but some of them do so with reason.
I do not know why this is. It may be a cultural thing; it may be a matter of which forms of logic and reasoning feel natural to different people. I had a hard time grasping algebra as a schoolchild. (Your comment about “grade school” stuff is impenetrable to me. I’m not American so I don’t know what “grade school” is, I cannot parse your example, and I don’t know what level it is aimed at – but I suspect it’s above mine. I failed ‘O’ level maths and had to resit it. The single most depressing moment of my biology degree was when the lecturer for “Intro to Statistics” said he knew we were all scared, but it was fine; for science undergraduates like us, it would just be revision of our maths ‘A’ level. If I tried, I’d never even have got good enough exam scores to be rejected for a maths ‘A’ level.)
When I finally understood algebra, I “got” it and it made sense and became a useful tool, but I have only a weak handle on it. I used to know how to solve a quadratic equation but I couldn’t do it now.
I never got as far as integration or differentiation. I only grasped them at all when trying to help a member of staff with her comp-studies homework. It’s true: the best way to learn something is to teach it.
Edsger Dijkstra was a grumpy git, but when he said:
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration”
… and…
“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.”
… I kind of know what he meant. I disagree, obviously, and I am not alone, but he did have a core point.
I think possibly that if someone learned Algol-style infix notation when they were young, and it’s all they’ve ever known, then when someone comes along and tells them that it’s all wrong, to throw it away and do it like this – or possibly (this(like(do(it)))) – instead, it is perfectly reasonable for them to reject it.
Recently I used the expression A <> B to someone online and they didn’t understand. I was taken aback. This is BASIC syntax, and it was universal when I was under 35. No longer. I rephrased it as A != B and they understood immediately.
Today, C syntax is just obvious and intuitive. As Stephen Diehl said:
« C syntax is magical programmer catnip. You sprinkle it on anything and it suddenly becomes “practical” and “readable”. »
I submit that there are some people who cannot intuitively grasp the syntaxless list syntax of Lisp. And others who can handle it fine but dislike it, just as many love Python indentation and others despise it. And others who maybe could but with vast effort and it will forever hinder them.
Comparison: I am 53 years old, I emigrated to the Czech Republic 7 years ago and I now have a family here and will probably stay. I like it here. There are good reasons people still talk about the Bohemian lifestyle.
But the language is terrifying: 4 genders, 7 cases, all nouns have 2 plurals (2-4 & >=5), a special set of future tenses for verbs of motion, & two entire sets of tenses – verb “aspects”, very broadly one for things that are happening in the past/present/future but are incomplete, and one for things in the past or present that are complete.
After 6 years of study, I am an advanced beginner. I cannot read a headline.
Now, context: I speak German, poorly. I learned it in 3 days of hard work travelling thence on a bus. I speak passable French after a few years of it at school. I can get by in Spanish, Norwegian and Swedish from a few weeks each.
I am not bad at languages, and I’m definitely not intimidated by them. But learning your first Slavic language in your 40s is like climbing Everest with 2 broken legs.
No matter how hard I try, I will never be fluent. I won’t live long enough.
Maybe if I started Russian at 7 instead of French, I’d be fine, but I didn’t. But 400 million people speak Slavic languages and have no problems with this stuff.
I am determined. I will get to some useful level if it kills me. But I’ll never be any good and I doubt I’ll ever read a novel in it.
I put it to you that Lisp is the same thing. That depending on aptitude or personality or mindset or background, for some people it will be easy, for some hard, and for some either impossible or simply not worth the bother. I know many Anglophones (and other first-language speakers) who live in Czechia who just gave up on Czech. For a lot of people, it’s just too hard as an adult. My first course started with 15 students and ended with 3. This is on the low side of normal; 60% of students quit in the first 3 months, after paying in full.
And when people say “look, really, f(a,b) is the same thing as (f a b)”, or tell us that we’ll just stop seeing the parentheses after a while (see slides 6 & 7), IT DOES NOT HELP. In fact, it’s profoundly offputting.
I am regarded as a Lisp evangelist among some groups of friends. I completely buy and believe, from my research, that it probably is the most powerful programming language there’s ever been.
But the barrier to entry is very, very high, and it would better serve the Lisp world to recognise and acknowledge this than to continue 6 decades of denialism.
Before this talk, I conferred with 2 very smart programmer friends of mine about the infix/prefix notation issue. ISTM that it should be possible to have a smart editor that could convert between the two, or even round-trip convert a subset of them.
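As a toy illustration of how mechanical such a conversion can be – a naive sketch in Clojure, with made-up names, handling only nested [left op right] triples:

    ;; toy sketch: convert nested infix triples to prefix lists
    (defn infix->prefix [x]
      (if (vector? x)
        (let [[a op b] x]
          (list op (infix->prefix a) (infix->prefix b)))
        x))

    (infix->prefix [1 '+ [2 '* 3]])   ; => (+ 1 (* 2 3))

A real round-tripping editor would need to handle precedence, arity and formatting, but the core transformation is just a tree walk like this.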
This is why I proposed Dylan on top of Lisp, not just Lisp. Because Lisp frightens people and puts them off, and that is not their fault or failing. There was always meant to be an easier, more accessible form for the non-specialists. Some of my favourite attempts were CGOL and Lisp wizard David A. Moon’s PLOT. If Moon thinks it’s worth doing, we should listen. You might have heard of this editor he wrote? It’s called “Emacs”. I hear it’s quite something.
Oh boy, I really don’t want to take up your time.
I myself am no Common Lisp expert. It’s an old language with odd behavior and too many macros. I personally use Clojure and find it extremely ergonomic for application development. I find modern Schemes in general to be fairly ergonomic as well, but maybe with a few too many parens compared to Clojure.
Clojure does a good job of limiting parens, introducing reader macros of [] for vectors and {} for hash-maps, and it works out exceedingly well. The positional assumptions it makes limit parens and it really isn’t hard to read. It’s like executable JSON, only way easier to read. It isn’t far from the type of JS and Ruby I write anyway.
The only real PG thing worth reading is Roots of Lisp, which breaks down Lisp into its axiomatic special forms. You can see how one can start from lambda calculus, add some special forms, and end up with the kernel for a language that can do anything. Purely as an educational read.
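To sketch what the [] and {} literals above buy (my example, nothing from the talk):

    ;; {} is a map literal, [] a vector literal - almost executable JSON
    (def user {:name "Ada" :langs ["BASIC" "Lisp"]})
    (get-in user [:langs 0])   ; => "BASIC"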
Today, this is Java. I’m sure you’d agree. Its pervasive use of non-message-passing OO has crippled two entire generations of programmers, unable to grasp first-class functions and simple data flow. They cobble together things with hundreds of nouns, leaving the logic opaque and dispersed throughout these nouns and their verby interactions. Tremendous effort is required just to track down where anything happens.
This is only true of people with prior experience with C syntax languages. Exposure to a C style language first seats it as a norm within the brain, just as one’s first spoken language. I wouldn’t say C is intuitive to someone who has never programmed before.
I speak Polish, so I can very much relate to Czech and other Slavic languages. In fact, Polish is often considered the hardest language to learn.
I still strongly disagree.
I am a visual-spatial person, and visualizing the trees and expressions is extremely easy for me. I have never felt more at home than I do with Clojure. It was an immediately overwhelmingly positive experience and I’m not sure any language will ever have a syntax or software model that is more matching my thought processes. (Prototypal languages like JavaScript and Lua come in a close second, because then I’m thinking in trees made of hash-maps instead.)
Actually, slide 7 is all I see (the words), and honestly, the default syntax highlighting for Lisps shouldn’t be rainbow braces, but muted and almost invisible braces like in said slide. Just indented nouns – like Python!
I’ve adapted to many languages with all sorts of funky syntaxes (WebFOCUS comes to mind) and I can’t say any was hard for me to get comfortable with after enough exposure. But the key to readability is finding the literal “shapes” on the screen and their patterns. My eyes can just scan them. (Python is the most readily readable in that regard.) But if one does not write Clojure in idiomatic style, it does truly become hard to read.
Lisp syntax lives or dies by how you horizontally indent braces. If you do not practice “semantic indentation” then you can truly be lost in a sea of meaningless parens, trying to find how the words relate to each other. That indentation visually shows the relationship. A visual tree.
I have introduced many people to Clojure and they’ve never found the syntax to be a barrier to entry. As a functional programmer, I find that C syntax gets in the way of Functional patterns and its syntax is a barrier to entry in learning Functional Programming.
Let me dig up some examples:
A C# functional approach to database return value transforms and validation: https://gist.github.com/Slackwise/965ac1947b69c60e21aa030be96b657b
I am certain the Clojure equivalent would be shorter and much easier to read. Notice it looks fairly lispy on its own, in idiomatic Functional C# style. That is the nature of a Functional approach, be it C like syntax, or Lispy syntax.
A more recent toy example was a technical challenge posited to me to write a palindrome function, which I decided to write functionally in both JavaScript (as a singular pure expression) and Clojure for comparison:
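The snippets didn’t survive the paste, so here are sketches to the same effect: the JS as a single pure expression, something like s => [...s].reverse().join('') === s, and the Clojure, with semantic indentation:

    ;; reconstructed sketch, not the original
    (defn palindrome? [s]
      (let [cs (seq s)]
        (= cs (reverse cs))))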
Is the JavaScript form any easier to read? I would say the Clojure form is slightly easier as long as you understand semantic indentation. (Obviously you need to understand both languages as well as be somewhat versed in Functional Programming to make heads or tails of this FP voodoo.)
I would say that familiarity is key, but more so: consistent style.
Any language written in a funky style that is not idiomatic is going to be immediately hard to read. I guarantee I can take any language and make it harder to read simply by changing the style. I personally find it harder to read something even if someone makes a minor lazy mistake like writing 1+2 instead of 1 + 2. It throws off the expected “shape” of the code and impedes readability.
As for Dylan implemented as a reader macro in Lisp, as an option: I’m for it, for those who have hangups over syntax. But also, any language they might prefer might as well be a reader macro option. I do think, though, that simply building good DSLs would go a long way towards building an entire OS out of one language, without having to reach for C-ish syntax.
No no, it’s fine, I am learning all the while here.
Interesting. Noted.
I have a suspicion that this may be the kind of improvement that is only helpful to those who have achieved a certain level of proficiency already. In other words, that it doesn’t help beginners much; maybe it reduces the steepness of part of the learning curve later on, but not at the beginning – and the beginning is possibly the most important part.
Interesting.
I found his essays very persuasive at first. I have grown a little more sceptical over time.
Hmmm. Up to a point, perhaps yes.
I’d probably say C and C++ in more general, actually.
I have read a lot of loper-os.org, and it pointed me at an essay of Mark Tarver’s, “The Bipolar Lisp Programmer”. A comment of his really struck me:
« Now in contrast, the C/C++ approach is quite different. It’s so damn hard to do anything with tweezers and glue that anything significant you do will be a real achievement. You want to document it. Also you’re liable to need help in any C project of significant size; so you’re liable to be social and work with others. You need to, just to get somewhere. » http://marktarver.com/bipolar.html
I really don’t know. I have never mastered an OO language. I am currently reading up about Smalltalk in some detail, rather than theoretical overviews. To my pleased surprise, the Squeak community have been quite receptive to the ideas in my talk.
For clarity: I was being somewhat sardonic here. I am not saying that I personally believe this to be true, but that it is common, widely-held received wisdom.
:-) I can well believe that!
I thought you might, and this response did sadden me, because I am failing to get my point across at all, clearly. :-(
This is sort of my point. (And don’t get me wrong; I am not a Python enthusiast. I’ve been failing to learn it since v1 was current.)
The thing I think is instructive about Python is the way that experienced programmers react to it. It polarises people. Some love it, some hate it.
Even rookie programmers like me know that different people feel different indentation patterns are right and good. There’s a quote in your link:
« Nearly everybody is convinced that every style but their own is ugly and unreadable. Leave out the “but their own” and they’re probably right… »
Python forces everyone to adhere to the same indentation pattern, by making it meaningful. The people that hate Python are probably people that have horribly idiosyncratic indentation styles, and thus would probably benefit the most from being forced into one that makes sense to others, if their code is ever to be read or maintained by anyone else.
Thus, I suspect that strenuous objections to Python tell you something far more valuable about the person making the objections, than anything the objections themselves could ever tell you about Python.
So, it sounds to me like you have a versatile and adaptable mind that readily adapts to different languages. Most Lisp people seem to have minds like that.
It seems to me that where they often fail is in not realising that not everyone has minds like that. That for many people, merely learning one style or one programming language was really hard, and when they finally got it, they didn’t want to ever have to change, to ever have to go through it again by learning something else.
We all know people who only speak a single human language and say that they don’t have a knack for languages and can’t learn new ones. That isn’t necessarily just a sign of poor teaching methods. Maybe they are actually right. Maybe they really do lack the ability to learn this stuff. Maybe it’s real. I see no reason why not.
A lack of ability to learn to speak more than one human language does not stop someone from being highly creative in that language – I am sure that many wonderful writers, poets, playwrights, novelists etc. are monoglot.
Well, a lot of skilful programmers who are able to do very useful work are also possibly monoglots. It took a lot of effort for them to learn one language, and they really like it, and all they will even consider are variants of that single language, or things that are different but at least use the same syntax.
In the ’50s and ’60s, it might have been COBOL, or PL/1, or RPG.
In the ’70s & ’80s, it might have been BASIC and variants on BASIC, especially for self-taught programmers. For another group, with more formal training or education, Pascal and variants on Pascal.
In the ‘90s onwards, it’s been C.
And so now we have lots of languages with C syntax and a cosmetic resemblance to C, and most people are comfortable with that.
Me, personally, I always found C hard work and while I admired its brevity, I found it unreadable. Even my own code.
Later, as more people piled onto the Internet and I got to talk to others, I found that this was a widespread problem.
But that was swiftly overwhelmed and buried behind the immense momentum of C and C-like languages. Now, well, Stephen Diehl’s observation that I quoted is just how it is for most people in industry.
If on the surface it looks like C, then it’s easy. Java looks superficially like C, although it’s not underneath. JavaScript looks like it, although it’s not, and it’s not like Java either. C++ is like C but with a million knobs and dials on. D is like C. C# is like C. And they’ve thrived.
And people who know nothing else now think that a language that replaces { and } with BEGIN and END is impossibly wordy and verbose.
In the opposite direction, there is a language which replaces not just { and } but also for and while and if and almost everything else with just thousands of ( and huge blocks of nothing but ) – and it doesn’t even keep the block delimiters in order! Well, YES, to such a person, YES, this is totally unreadable.
I do not know how old you are. I am quite old; I’m 53. I began and finished programming in the 1980s. But I try to retain a flexible mind.
However, I see people of my age raging at “new math”. The idea that 3 + 4 * 5 is the same thing as 4 * 5 + 3 – both come to 23, because the multiplication binds tighter – deeply offends them. They are old enough that they’ve forgotten school maths. The little they recall is fragmentary and inconsistent. They have forgotten rules that they learned later, such as “Bless My Dear Aunt Sally” or “BODMAS”. (If these are meaningless, Google them. :-) ) They think that they can do it, and they don’t know that decades of use of calculators means they can’t. Prove to them with a calculator that there actually are precedence rules, and they will angrily say that the calculator is wrong and was clearly programmed by someone who “follows this ‘New Maths’ nonsense”.
I have often read Lisp people saying things like:
« Look at this:
f(a,b)
versus
(f a b)
It’s the same! We have just moved the same characters around! It’s really the same thing! »
Well, no, to someone who only knows x(y,z) and nothing else, this is self-evidently nonsense and ridiculous.
I put it to you that it is necessary to accept that, just as there are people who are monoglots and will die monoglots and may have rich and fulfilling creative lives being monoglots…
… that by the same token, there are useful, skilled, productive programmers who can only handle ALGOL-type languages, who with serious effort might be able to move from the C branch of the ALGOL family to another branch, such as Python or PHP or Perl, but asking them to step outside the ALGOL family altogether and learn APL or Forth or Haskell or Lisp is just a step too far, one that they will never successfully take, and that is not a problem or a failing of theirs.
Are you familiar with the “sweet expressions” project? It tried to “fix” Lisp syntax with indentation. It got nowhere much despite a lot of effort.
https://readable.sourceforge.io/
I don’t think it is ever going to succeed.
In other words, I do not think that indentation can ever be the answer. It might help those who get over this hurdle, climb this hill, but it won’t help those for whom the hill will always be too high and too steep.
Agreed!
I am glad to hear it. I do suspect that for a lot of people, though, FP itself is just too far away from anything they will ever need.
I read your “semantic formatting” link and I can’t understand a word of it, I’m afraid. :-(
Way over my head. I really am sorry.
There you go. To me, 1+2 and 1 + 2 are completely interchangeable, but + 1 2 is an effort to decode.
I had to Google this term. If I understand you correctly, well, yes, that is the general idea. I think…
I think that is my favourite FOSDEM talk ever. I’ve run many of the OSs listed there and went through a similar journey of discovering the road not taken. I too wonder what a new OS could be like. I knew I was in for a good talk when Newtons and Smalltalk appeared.
Excellent! Thank you!
You may also be interested in my older article: Programmer’s critique of missing structure of operating systems
How inexpensive is this non-volatile ‘ram-like’ memory these days?
It isn’t cheap yet, but I think there’s little doubt PMEM is the future. It’s like watching the transitions to 64-bit and to SSDs.
Cheaper than flash SSDs, gigabyte for gigabyte – and obviously SSDs are cheaper than RAM. So instead of having a few hundred gig of SSD holding our swapfiles, we’d have a few hundred gig of RAM and no swapfiles.
The thing is that they’re byte-by-byte rewritable. You don’t need that in a disk; in fact, you need to wrap it in a tonne of extra logic to hide it away, since disks work on a sector-by-sector or block-by-block basis. That makes 3D XPoint less competitive in the SSD space.