The whole source and the book explaining it all are at https://projectoberon.com
In my Copious Free Time (read: after my kids are in college, so ~15 years from now) I’d like to port Oberon to the Raspberry Pi. It would be fun.
It was one of the ideas I had for a Raspberry Pi 3. I keep thinking about buying one. They say it’s the best choice, even with weaker hardware, since so much software already works on it; that helps newcomers out. I want an always-on box that doesn’t use much power.
An Oberon port, either classic Oberon or A2 Bluebottle, was one of my ideas. I also thought about porting it to Rust and then backporting it to Oberon, basically knocking out any temporal errors and letting it run without GC. Then Oberon-to-C-to-LLVM for a performance boost. Oberon in overdrive. ;)
If you wait 15 years on your project, then Wirth’s habits mean there might be another 5-10 Oberons released with minor feature changes before you get started. He might try dropping IF statements or something. Who knows.
Well, actually, I would remove the FOR loop, given it’s just syntactic sugar for a WHILE.
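The desugaring is mechanical. A quick sketch in C (the same equivalence holds for Oberon’s FOR and WHILE; I’m using C just for familiarity): the FOR header’s init, test, and step simply migrate around a WHILE.

```c
#include <stdio.h>

int main(void) {
    /* The FOR form... */
    for (int i = 0; i < 5; i++) {
        printf("%d\n", i);
    }

    /* ...is just sugar for the WHILE form: the init moves before
     * the loop, the test stays in the header, and the step moves
     * to the end of the body. */
    int j = 0;
    while (j < 5) {
        printf("%d\n", j);
        j++;
    }
    return 0;
}
```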
However, for some reason Wirth seems to like it. :-)
Anyway… Wirth is a light source in these dark days: he will always remind us that fewer features mean fewer bugs.
It can also mean more bugs in the next layer up, if the dearth of features at a particular layer requires people to constantly reimplement even basic functionality.
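A concrete illustration (my own sketch, in C; not an example from the thread): without generics, every C codebase re-derives even a simple max as a macro, and each re-derivation can reintroduce the classic double-evaluation bug.

```c
#include <stdio.h>

/* With no generic functions in the language, codebases keep
 * re-inventing this macro... */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void) {
    int i = 5;
    /* ...and each re-invention risks the double-evaluation bug:
     * i++ is evaluated twice whenever the condition is true. */
    int m = MAX(i++, 3);
    printf("m = %d, i = %d\n", m, i);  /* prints m = 6, i = 7 -- not the max of 5 and 3 */
    return 0;
}
```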
This is exactly why I countered Wirth’s philosophy. We see it in action already where modern languages can:
(a) improve productivity by expressing solutions with better abstractions or type inference
(b) improve performance with features like built-in parallelism and highly-optimizing compilers (see the sketch after this list)
(c) improve safety/security with things like better type systems
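To make (b) concrete, a hedged sketch in C using OpenMP (this assumes a compiler with OpenMP support and a flag like -fopenmp; without the flag the pragma is ignored and the loop simply runs serially): one annotation buys a parallel, race-free reduction that would otherwise take pages of manual thread plumbing.

```c
/* Build with, e.g.: cc -O2 -fopenmp harmonic.c   (file name illustrative) */
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    /* A single annotation parallelizes the loop and makes the
     * accumulation race-free via a reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 100000000; i++)
        sum += 1.0 / i;
    printf("sum = %f\n", sum);  /* ~18.997896 for n = 1e8 */
    return 0;
}
```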
You and @jclulow raise a good objection, but I think you are confusing ease with simplicity.
I see the features that a language (or an operating system) supports as the dimensions that can describe a program (or a dynamic ecosystem of users interacting with hardware devices).
Thus, to me, a high number of features (many system calls in an OS, or many forms in a programming language) is a smell of a (potentially) broken design, full of redundancy and features that do not compose well.
On the flip side, a low number of features can mean either low expressivity (often by design, as in most domain-specific languages) or a better-designed set of orthogonal features. Or it might just be a quick-and-dirty hack whose sole goal was to solve a contingent issue in the fastest and easiest possible way.
My favourite example to explain this concept is to compare Linux+Firefox with Plan 9 (and then Plan 9 with Jehanne, which is specifically looking for the most orthogonal set of abstractions/syscalls that a powerful distributed OS can provide).
It’s not just a matter of code bloat, performance, security and so on; it’s a matter of power and expressivity: with few well designed features you can express all the features provided by more complex artifacts… but also a wide new set that they cannot!
Simplicity is also an important security feature, in particular for programming languages.
I get what you’re saying. Your view is actually more nuanced than Wirth’s. The main measure of complexity for Wirth was how long it took to compile the compiler: if a feature made compilation take a lot longer, he’d drop it. I think longer compiles are fine if they give us something in return. I also favor an interpreter + compiler setup: rapid development first, high optimization after. We have plenty of means to get high correctness regardless of which features the language has. I’m seeing all reward with no losses. Certainly people can put in stupid features or needless complexity, which I’d be against. Wirth just set the bar way, way too low.
“with few well designed features you can express all the features provided by more complex artifacts…”
Macros, modules, generics, polymorphism… a few examples.
“Simplicity is also an important security feature, in particular for programming languages.”
You probably shouldn’t be citing that paper given it’s one of the rarest attacks in existence. Compiler optimizations screw up security the most, but people always cite Thompson. Anyway, I originally learned security by reading the work of the guy Thompson ripped off: Paul Karger. I wrote here about Karger’s invention of the problem and how you solve it; it’s a totally-solved problem. For new solutions, we have a page collecting tiny, verified versions of everything to help newcomers build something more quickly.
“with few well designed features you can express all the features provided by more complex artifacts…”
“Macros, modules, generics, polymorphism… a few examples.”
Exactly!
You do not need the compiler to generate code for you; you just need the compiler to verify the generated code when it compiles it.
Embedding code generation in a compiler (or in a runtime, as JIT compilers do) can be convenient, but I’m not sure it’s really needed.
Obviously it’s pointless to restrict yourself from using a supported language feature just to pick “the good parts”: I use macros in C, for example. But I also generate C in other ways when it’s appropriate, and that’s pretty simple and usable. Thus I like Oberon, which omits a possible source of complexity, just as I like LISP and Scheme, which maximise the gain/complexity ratio by raising the gain.
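To make the “generate C in other ways” point concrete, here is a sketch of my own (file names are made up): a stand-alone generator that emits a C lookup table. Generation happens outside the compiler; verification happens inside it, when the emitted file is compiled like any other source, which is exactly the division of labour described above.

```c
/* gen_table.c -- a build-time generator (name illustrative).
 * Usage:
 *   cc gen_table.c -lm -o gen_table
 *   ./gen_table > sin_table.c        # then compile sin_table.c normally
 * The ordinary compiler type-checks the generated table just like
 * hand-written code; no metaprogramming support in the compiler needed. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

int main(void) {
    printf("/* generated by gen_table.c -- do not edit */\n");
    printf("const double SIN_TABLE[256] = {\n");
    for (int i = 0; i < 256; i++)
        printf("    %.17g,\n", sin(2.0 * PI * i / 256));
    printf("};\n");
    return 0;
}
```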
“You probably shouldn’t be citing that paper given it’s one of the rarest attacks in existence.”
I stand corrected (and thanks for these references!).
But the point I was trying to make was more general: we cannot trust “trust” in technology.
And laypeople cannot trust us either: when they do, we fool and exploit them.
Thus we need to rebuild the IT world from the ground up, focusing on simplicity from the very beginning, so that fellow programmers can rapidly inspect and understand everything.
“The main measure of complexity for Wirth was how long it took to compile the compiler. […] Wirth just set the bar way, way too low.”
He really doesn’t need my defence, but I don’t think minimizing the compiler’s compilation time is the goal; it’s just a metric.
AFAIK, the reasoning is: if it’s slow to compile (with a decent compiler), it could be simpler.
In other words, compilation time is a sort of canary: it is the first thing that complexity kills.
I just noticed this was a new submission. So, I’ll add OberonStation here, too.
Any idea why he called the processor module “RISC5”? Is this just a mere nod to RISC-V, or is there some deeper connection?
Confusingly, RISC5 isn’t related at all to RISC-V as far as I know; it’s just the sixth iteration of a series of designs whose features are introduced incrementally for pedagogical purposes, numbered RISC-0 through RISC-5. The series is described in this document (PDF); look towards the end of the first page for the section starting with “The development of our RISC progresses through several stages.”