Wow. Excellent article, and such nice prose.

A simile that just struck me: A fascination with 6502 assembly or CP/M is akin to building your own suit of armor and re-enacting medieval jousts (a la the SCA.) Designing your own “clean-slate” virtual machine and applications, or building atop someone else’s, is more like escaping into medieval fantasy worlds (a la Lord Of The Rings.)
Neither of those are bad, of course! I love me some escapism. But neither has anything to do with the world today or the future, or has any real purpose other than fun. The future, even post-apocalyptic, is not going to be like the Middle Ages nor Middle Earth, and your homemade plate armor will not save you from a survivalist toting a rifle. Nor is Rivendell or Narnia a guideline for a better tomorrow.
The author (authors?) has (have?) a really nice take on software simplicity/minimalism with which I resonate wholeheartedly. They also have a good critique of an analogous approach regarding, erm, a network protocol, whose name I won’t spell out here lest I invoke the ancient USENET daemon of flamewars.
I disagree: an article built on dismissing and putting down the work of two groups of people who work really hard on their projects cannot be excellent, nor nice.
A lot of the articles posted to lobste.rs are about criticizing (or yes, dismissing) technologies created by really hard-working people. I’m sure the people responsible for the C language, async Rust, JavaScript in browsers, Swift, proprietary operating systems, and Urbit all work(ed) hard. That doesn’t mean they get a gold star and a free pass against negative opinions. Nor does it mean negative opinions have to be written in a dry just-the-facts style.

The uxn author’s response is on the Orange Site.
My name is Devine, and I’m to blame for the uxn disaster. I’m getting this link thrown at me from all sides right now, so I figured I might as well chip in.
I’m a bit more on the design-y side of things; all these fancy words for talking about computers and retro computing are a bit beyond me, and they reference an era of computing which I didn’t really have a chance to experience first-hand.
But let’s talk about “the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily.”
This is meaningful to me and I’d love to have your opinion on it. Before setting out on the journey to build uxn, I looked around at the options out there that would solve our specific problems, and I’d like to know what you would recommend.
Obviously, I take it that the author is not advocating that we simply return to Electron; I take it they understand that re-compiling C applications after each change is not viable with our setup, that Rust is manyfold too slow for us to make any use of it, and that, since we use Plan 9, C is not a good candidate for writing cross-compatible (libdraw) applications anyway.
So, what should we have done differently?
I’m genuinely looking for suggestions. Even if one suggestion might not be compatible with our specific position, many folks come to us asking for existing solutions in that space, and I would like to furnish our answers with things I might not have tried.
In a DevGAMM presentation, Jon Blow managed to convince himself and other game developers that high level languages are going to cause the “collapse of civilisation” due to a loss of some sort of “capability” to do low-level programming, and that abstraction will make people forget how to do things.
I believe this is a gross misrepresentation of Jon Blow’s actual argument.
If I got it correctly, the gist of Blow’s argument is that loss of knowledge may cause the collapse of civilizations. He points at the collapse of the Mediterranean Bronze Age, which in all likelihood had many intertwined causes, one contributing factor being loss of knowledge.
Then he notes that something like loss of knowledge seems to be going on in our field. More and more, we delegate low-level efforts to ever more concentrated teams of experts, to the point where the world at large seems to be rather ignorant of what happens under the hood. And as a consequence, we get criminally slow programs like Photoshop.
When I say “criminally” I am not even exaggerating. Slow programs waste the time of all their users, and a sufficiently popular program can easily lose cumulative lifetimes. We could argue that wasting a cumulative 60 years is just as bad as accidentally killing someone.
In our quest for better and better programmer productivity, we forgot that performance is not a niche concern. For interactive programs, anything short of “instantaneous” is slower than ideal. And then there’s energy consumption and the cost of silicon. Making a high performance chip is incredibly polluting. There’s a good chance that slower chips could be much better for the environment (not to mention our wallets). But for slower chips to be good enough, we need our programs to speed the hell up.
So, are high level languages causing the collapse of our civilisation? Not quite. The problem is more that fewer and fewer people know (or even care to know) what happens underneath, and there may come a point where this becomes unsustainable.
We could argue that wasting a cumulative 60 years is just as bad as accidentally killing someone.
The time spent waiting for an image to render in Photoshop, or for a compile to finish, is not blank nothingness ripped out of our existence. It’s time the mind can spend reflecting, planning, or doing other tasks.
Good point, my 1 to 1 scale was incorrect.

Still, the sheer time wasted can be fairly huge. Assuming the following:
- 10 seconds wasted per work day,
- a 5-day work week, 40 weeks per year,
- 1 million such users,
- over 10 years.
That’s about 300 years worth of wasted time. About 5 lifetimes. (I think I’m being very conservative here.)
So as you pointed out, those 300 years aren’t completely lost. They are partially lost. I just wonder how much is actually lost here. How many lifetimes of waiting must we cause for it to be just as bad as wasting an actual lifetime? In my opinion it’s most probably somewhere between 5 and 100. And there’s definitely an upper bound.
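For what it’s worth, the back-of-envelope arithmetic above is easy to check. Taken strictly as continuous (24/7) time, the product comes out to roughly 630 years rather than 300; counted only as waking or working hours it shrinks, so “about 300 years” and “very conservative” both seem fair as orders of magnitude. A quick sketch:

```python
# Checking the back-of-envelope numbers from the comment above:
# 10 s wasted per work day, 5-day weeks, 40 weeks/year, 1M users, 10 years.
seconds_per_user = 10 * 5 * 40 * 10            # 20,000 s, about 5.5 hours each
total_seconds = seconds_per_user * 1_000_000   # 2e10 s across all users
continuous_years = total_seconds / (3600 * 24 * 365)
print(round(continuous_years))                 # 634
```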
Software (like Adobe Photoshop) exists at a local maximum of reliability / development resources / backwards compatibility / target hardware availability / profit.
One could imagine a version that was very fast and let the user not wait at all, but randomly corrupted images. No-one would like that, even if it saved virtual lives.
There’s a corrective to bad software: it’s called the market. So far, no-one has managed to displace Photoshop from its dominance as a PC-based image editor. Maybe someone will, and Adobe will shift their priority and local maximum to address that shortcoming.
In the meantime, I’d be more worried about stuff that actually kills people, like pollution, non-optimal access to healthcare, and wars.

(Edit seen today: Polluted air cuts global life expectancy by two years.)
I agree, to a point.

The market is not perfect. Because it is made up of individuals, a bit of lost time is hardly noticed. Moreover, users have basically no way of distinguishing between necessary lost time and avoidable lost time. Plus, the incentives are all wrong, especially for productivity software: a new feature hardly anybody will use can still sell, because it grows the list of things the software can do. Taking too much time to boot up, however, annoys everyone a tiny little bit — just not enough to actually lose the sale.
Yet, when you think about it, if you’re losing cumulative decades of time across all your users, it’s kind of your moral duty to spend at least a couple of weeks fixing the issue.

https://www.folklore.org/StoryView.py?project=Macintosh&story=Saving_Lives.txt
Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them. The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals. Well, yes, both are virtual machines and that is where it ends.
Simplicity and elegance have appeals beyond performance, like ease of understanding and implementation, like straightforward development tool design. Judging the performance of a VM that (from what I see) has never been intended to be high-speed based on some arbitrary micro benchmark also doesn’t really demonstrate a particularly thorough methodology (the pervasive use of “we” does make the article sound somewhat scientific, I do grant that…)
I suggest the author invests some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, the fact that it can be very liberating to understand a software system inside out and that not everybody has the same goals when it comes to envisioning the ideal piece of software. The article just criticizes, which is easy, but doesn’t present anything beyond that.
Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them.
I suggest the author invests some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, the fact that it can be very liberating to understand a software system inside out
I don’t exactly agree with the author’s criticism of uxn (probably because I see it purely as a fun project, and not a serious endeavor), but let’s not descend into personal attacks please.

Thanks.
Now, with that out of the way - this is not at all personal; the author is simply misrepresenting things or confused, because there are numerous claims that have no basis:
It is claimed this assembler is like Forth, but it is not interactive, nor it have the ability to define new immediate words; calling and returning are explicit instructions. The uxntal language is merely an assembler for a stack machine.
Must Forth be interactive? What sense do immediate words make in an assembler? Returning is an explicit instruction in Forth (EXIT, ;). That sentence suggests some wild claim has been made, but I can’t see where.
Using software design techniques to reduce power usage, and to allow continued use of old computers is a good idea, but the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily.
Again, I suggest studying Moore’s works, Koopman’s book and checking out the “Mill” to see that stacks can be very fast. The encoding scheme is possibly the simplest I’ve ever seen, the instruction set is fully orthogonal. A hardware implementation of the design would be orders of magnitude simpler than any other VM/CPU. Dynamic translation (which seems to be the author’s technique of choice) would be particularly straightforward. I see no poor design here.
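To make the “simple encoding” point concrete, here is a hedged sketch, in Python, of the kind of dispatch loop a uxn-style stack machine needs. The opcodes and encoding below are invented for illustration and are much simpler than real uxn; the point is only that the core interpreter is a handful of lines, and that a table-driven design like this translates almost mechanically to hardware or to dynamic translation.

```python
# Toy stack machine in the spirit of uxn: byte-sized opcodes, a data
# stack, and 8-bit wrap-around arithmetic. NOT the real uxn ISA.
HALT, PUSH, ADD, SUB, DUP, JNZ = range(6)

def run(program, max_steps=10_000):
    stack, pc = [], 0
    for _ in range(max_steps):
        op = program[pc]; pc += 1
        if op == HALT:
            return stack
        elif op == PUSH:                    # next byte is an immediate operand
            stack.append(program[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append((a + b) & 0xFF)
        elif op == SUB:
            b, a = stack.pop(), stack.pop(); stack.append((a - b) & 0xFF)
        elif op == DUP:
            stack.append(stack[-1])
        elif op == JNZ:                     # next byte is the target address
            target = program[pc]; pc += 1
            if stack.pop():
                pc = target
    raise RuntimeError("step budget exhausted")

# 3 + 4 - 2 = 5
print(run(bytes([PUSH, 3, PUSH, 4, ADD, PUSH, 2, SUB, HALT])))  # [5]
```

A countdown loop works the same way: `PUSH 3`, then repeatedly `PUSH 1, SUB, DUP, JNZ back` until the counter hits zero.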
The uxn platform has been ported to other non-Unix-like systems, but it is still not self-hosting, which has been routinely ignored as a part of bootstrapping.
This makes no sense. Why self-host a VM? “Routinely ignored”? What is he trying to say?
After that the author discusses the performance of the uxn VM implementation, somehow assuming that is the only metric important enough to warrant an assessment of the quality of uxn (the “disaster”).
Vectorisation is also out of the question, because there are no compilers for uxn code, let alone vectorising compilers.
What does the author expect here?
We can only conclude that the provided instruction sizes are arbitrary, and not optimised for performance or portability, yet they are not suitable for many applications either.
(I assume data sizes are meant here, not instruction sizes, as the latter are totally uniform.) Uxn is an 8/16-bit CPU model and supports the same data sizes as any historical CPU with a similar word size. Again, I get the impression the author is just trying very hard to find things to complain about.
Next the author goes to great lengths to evaluate uxn assembly as a high level programming tool, naturally finding numerous flaws in the untyped nature of assembly (surprise!).
a performant implementation of uxn requires much of the complexity of modern optimising compilers.
The same could be said about the JVM, I guess.
To get to the end: I can only say this article is an overly strenuous attempt to find shortcomings of whatever nature, mixing design issues, implementation details, the author’s idea of VM implementation, and security topics, at one moment taking uxn as a VM design, then as a language, then as a compiler target, then as a particular VM implementation, then as a general computing platform.
I like writing compilers, and I have written compilers that target uxn; it is as good a target as any other (small) CPU (in fact, it is much easier than, say, the 6502). Claiming that “the design of uxn makes it unsuitable for personal computing, be it on new or old hardware” is simply false, as I can say from personal experience. This article is pure rambling, especially the end, with sentences like this that somehow make me doubt whether the author is capable of the required mental detachment to discuss technical issues:
Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.
No, brother, they are not out there to get you. They just want simple systems, that’s all. Relax.
As O’Keefe said about Prolog: “Elegance is not optional”. This also applies to CPU and VM design. You can write an uxn assembler in 20 lines of Forth. There you have a direct proof that simplicity and elegance have engineering implications in terms of maintenance, understandability and (a certain measure of) performance.
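I can’t vouch for the literal “20 lines of Forth”, but the spirit of the claim is easy to demonstrate. Here is a hedged sketch of a two-pass assembler for a toy stack machine, in Python rather than Forth; the mnemonics, label syntax, and one-byte-operand encoding are invented for illustration and are not real uxntal:

```python
# Toy two-pass assembler: labels end in ':', '@name' references a label,
# decimal literals become one-byte operands, everything else is a mnemonic.
OPCODES = {"HALT": 0, "PUSH": 1, "ADD": 2, "SUB": 3, "DUP": 4, "JNZ": 5}

def assemble(source):
    tokens, labels, pos = source.split(), {}, 0
    for tok in tokens:                 # pass 1: record label addresses
        if tok.endswith(":"):
            labels[tok[:-1]] = pos
        else:
            pos += 1
    out = []
    for tok in tokens:                 # pass 2: emit bytes
        if tok.endswith(":"):
            continue
        elif tok.startswith("@"):
            out.append(labels[tok[1:]])
        elif tok.isdigit():
            out.append(int(tok))
        else:
            out.append(OPCODES[tok])
    return bytes(out)

code = assemble("PUSH 3 loop: PUSH 1 SUB DUP JNZ @loop HALT")
print(list(code))  # [1, 3, 1, 1, 3, 4, 5, 2, 0]
```

The whole thing is symbol lookup plus an address counter, which is the engineering payoff of a fully orthogonal, uniformly encoded instruction set.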
I agree with you in the sense that doing something for fun is obviously allowed, but I feel like the criticism in the article is not that you shouldn’t build anything simple and minimalist for fun, but that the things we build are usually not as revolutionary as some may claim just because they’re simple. Now, if the author of uxn made no such claims then that’s fine; however, that doesn’t mean something cannot be criticized for its perceived flaws (whether you agree with the style and tone of the criticism or not).
I also agree that the Church of Minimalism stuff is a bit over-the-top.
FWIW I had exactly the same reaction to this article as you, and I haven’t even heard of any of these projects. The article seems like it is in bad faith.
Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.
This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.
This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.
I wouldn’t say so. I’ve had folks tell me on tech news aggregators that the only way to make computing ethical is for computing to be reimplemented on uxn stacks so that we can all understand our code, or else the code we use can be used for exploitation. Now this may not be the actual uxn project’s stance on the matter at all, but much like Rust seems to have a bit of a reputation of really pushy fans, I think it’s fair to say that uxn has attracted a fanbase that often pushes this narrative of “sinful computing”.

Oh interesting, do you have any links? I’m intrigued by this insanity.

Edit: though presumably this is a vocal minority, making this still quite a hyperbolic statement.
I did a light search and found nothing off-hand. I’ll DM you if I manage to find this since I don’t like naming and shaming in public.
Edit: And yeah I’m not saying this has been my experience with a majority at all. The folks I’ve heard talk about uxn have been mixed with most having fun with the architecture the same way folks seem to like writing PICO-8. It just has some… pushy folks involved also.
I believe the only way to make computing ethical is to reinvent computing to do more with less. I also believe uxn is trying to reinvent computing (for a very specific use case) to do more with less. But those two statements still don’t add up to any claim that it’s the only way out, or even that it’s been shown to work in broader use cases.
Disclaimer: I’ve also tried to reinvent computing to do more with less. So I have a knife in this fight.
Actually, we regularly get posts here on lobste.rs espousing exactly that sort of ideology. I think there’s one or two trending right now. Perhaps you’ve hit on the right set of tag filters so you never see them?

“C. Moore…” as in Chuck Moore…
The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals.
they both market themselves as “clean-slate computing stacks”, they both begin with a basic admission that no new OS will ever exist (so you have to host your OS on something else), they both are supported by a cult of personality, they both are obsessed with ‘simplicity’ to the point of losing pragmatic use and speed. I’d say they’re pretty similar!

Strong disagree, from someone who’s been moderately involved with the uxn community previously. Who is the cult leader in this scenario?
Influenced by the above post, what are some good formally-specified minimal imperative languages that folks here know? Constraining an execution model for formal reasoning seems like it would scratch both the constrained programming itch and give you the safety to reason about your program in ways that just the human brain can’t. Are there examples of these?

How about Scheme?
I know that Scheme can be written idiomatically quite imperatively and there are a lot of minimal Scheme implementations. This wasn’t quite what I meant by “imperative language”, but still, are there formally verified subsets of Scheme? That would be quite cool.

I’m curious in which ways Scheme doesn’t fit your idea of “imperative language”?
It fits in every way that matters, just wasn’t what I “expected” when I wrote the question. The reason being that I was envisioning a bit of mechanical sympathy with the code I wrote and the processor, but of course by doing that I fell into the same trap as so many others with these minimal languages. Mechanical sympathy is not the same as minimal.
I’d be happy with a formal, minimal Scheme to play with. Bonus points if it tends to reduce to performant instructions, but a formal, minimal Scheme is plenty.

Verified in what sense?

Formally verifiable, meaning either:

1. We can translate this minimal Scheme code into something an automated theorem-prover can work with, and let it rip.
2. If 1 isn’t possible, then a Scheme that renders minimally to theorem-prover clauses that an author has to prove on their own in the prover.

I’m guessing we’re at a state where 2 is doable but 1 is not yet, but I haven’t paid as much attention here as I should, so I’m curious.

Check it out: https://www.ideals.illinois.edu/bitstream/handle/2142/11368/A%20Formal%20Rewriting%20Logic%20Semantic%20Definition%20of%20Scheme.pdf?sequence=2

This is awesome, thanks!

Depending how “simple” you want, there is wasm.
the only simple code that can be written in C does not have any redundancy
This statement really rings true to me. Many attempts at simplification of software I have seen ignore tools that would provide redundancies without making the code complex, just because they perceive those tools to be complex themselves. In the end, they either die because the lack of redundancies makes the project unusable, or because the complexity from manually added redundancies makes the project unmaintainable.