I recall watching this talk and being both impressed and unimpressed. Impressed by the fact that someone was near the top of the abstraction tower and leapt to the bottom, unimpressed by the conclusions they drew when they were at the bottom.
uxn, retroforth for ilo and several other projects (including the permacomputing folks) follow similar paths: they drill down to the bottom, realize that they need an ultra-simple portable execution environment that can run “anywhere”, and they try to do that in the smallest footprint required to interface with the machines they want to run on.
The desire to separate themselves from the machine is the evolution of something like Chuck Moore’s attitude. The number of types of systems that exist nowadays is far more than in Chuck’s days, and the abstractions that’re required to write “simple” programs (even on “bare metal”) are now towering. In order to understand the machine, you need to invent your own. The machine that you’re working on doesn’t need to be your target anymore. You’re free.
While this is a noble effort, the problem is the boundary between the space you’ve made (the VM, etc.) and the surrounding space (the host, whatever that might be). The choices you make for opcodes, word widths, memory layouts, and other architectural decisions will predetermine the kinds of hosts you can interface with. Everything from basic performance concerns to “how does this feel when I start executing this on paper” to “how similar is this to existing architectures” needs to be considered.
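To make the boundary problem concrete, here is a rough sketch in plain C of the kind of interpreter core these projects gravitate toward. It is emphatically not uxn; the opcode names and cell width are invented for illustration. Notice that even a toy like this has already committed to byte-sized opcodes, 16-bit cells, a fixed-depth stack, and little-endian literals:

```c
/* Hypothetical toy VM, not uxn: the point is how quickly architectural
 * decisions (opcode size, cell width, endianness) get baked in. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT, OP_LIT, OP_ADD, OP_PRINT };

static void run(const uint8_t *prog) {
    uint16_t stack[256];   /* 16-bit cells: already a choice */
    int sp = 0, pc = 0;
    for (;;) {
        switch (prog[pc++]) {
        case OP_HALT:
            return;
        case OP_LIT:       /* next two bytes: little-endian literal */
            stack[sp++] = (uint16_t)(prog[pc] | (prog[pc + 1] << 8));
            pc += 2;
            break;
        case OP_ADD:
            sp--;
            stack[sp - 1] = (uint16_t)(stack[sp - 1] + stack[sp]);
            break;
        case OP_PRINT:
            printf("%u\n", (unsigned)stack[--sp]);
            break;
        }
    }
}

int main(void) {
    const uint8_t prog[] = { OP_LIT, 2, 0, OP_LIT, 3, 0, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);             /* prints 5 */
    return 0;
}
```

Each of those choices is invisible until you try to host the thing somewhere that disagrees with them.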
It’s fairly difficult to pick what you need. Brainfuck, the lambda calculus, various combinator calculi, string rewriting, etc. all point to what the right answer might be. I don’t think 6502-likes are along that path, but I respect the efforts, as they’re chosen due to their closeness to the hosts we currently operate on.
The number of types of systems that exist nowadays is far more than in Chuck’s days
I guess if you frame it as “systems that exist” this is technically true, but if you’re talking about “systems that you might actually scavenge outside a museum” I doubt it. Nowadays we have x86 and ARM accounting for 99%, plus PPC, AVR, and Xtensa, with RISC-V just barely starting to become a thing, but this is dwarfed by the diversity of architectures that existed when Forth was developed.
The architectures I can name off the top of my head:
x86
ARM
POWER
MIPS
Z80/6502 (and others in a similar vein)
AVR
PIC
ESP32 (and ESP8266)
RISC-V
That’s not even counting any of the subfamilies like ARM64/Thumb, many of which can have significantly different architectural decisions. It’s also not counting how these individual architectures are set in specific systems: a particular hardware configuration isn’t guaranteed.
Compare this to the environment that Moore existed in: large mainframe computers and minicomputers. There were a finite number of hardware configurations because the hardware was simple, the costs were high, and the machines themselves were large, hulking beasts that had only just become solid state and had a backing vendor.
The diversity of the machines today can’t even be compared to what existed when Chuck Moore was initially getting started.
All the systems you list are byte-addressable, with 8-bit bytes and a power-of-two word size. Signed integers are always two’s complement. If floating point hardware exists, it is always IEEE compliant. A string of 7-bit ASCII characters is invariably stored as an array of 8-bit bytes, with the high bit 0. This standardization is useful, but hardware was much more diverse when Chuck got started.
Consider the GE-600/Honeywell 6000, which had 36-bit words and 18-bit addresses. ASCII character strings were packed either 4 to a word (with 9-bit bytes) or 5 to a word (with 7-bit bytes plus an extra bit to use as a flag). Or the CDC 6600, with 60-bit words and one’s complement arithmetic.
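To give a flavor of how alien that is next to today’s conventions, here is a rough modern-C sketch of the 5-characters-per-36-bit-word packing; the exact bit placement (which end the spare flag bit sat at, and so on) is a guess for illustration, not a faithful reproduction of the GE layout:

```c
/* Illustrative only: pack five 7-bit ASCII codes plus one flag bit
 * into a 36-bit value, the way some 36-bit machines stored text.
 * The precise bit order on real GE/Honeywell hardware may differ. */
#include <stdint.h>
#include <stdio.h>

static uint64_t pack36(const char c[5], int flag) {
    uint64_t w = (uint64_t)(flag & 1);          /* the spare 36th bit */
    for (int i = 0; i < 5; i++)
        w = (w << 7) | (uint64_t)(c[i] & 0x7F); /* 5 x 7 = 35 bits of text */
    return w;                                   /* only the low 36 bits are meaningful */
}

int main(void) {
    uint64_t w = pack36("FORTH", 0);
    printf("%09llx\n", (unsigned long long)w);  /* 36 bits = 9 hex digits */
    return 0;
}
```

Nothing about the one-character-per-byte, high-bit-zero convention was a given on that hardware.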
Compare this to the environment that Moore existed in: large mainframe computers and minicomputers. There were a finite number of hardware configurations because the hardware was simple
Let’s look at just a single manufacturer: DEC. Just across their PDP line, you had a huge amount of variety from 12-bit, 18-bit, 16-bit, and 36-bit families. Sometimes the newer models within one family would be expanded versions of older ones, like the PDP-5 -> PDP-8, but not always. But once you include DEC’s earlier devices, you’ve already got half the diversity you listed above just coming from a single producer. Broaden it to the entire market (remember, basically everyone is reinventing ISAs from scratch because there is no expectation of portability) and there’s a bewildering amount of variety.
Then the year after Forth, Inc. was founded saw the introduction of the Altair; once personal computers got thrown into the mix it became even more diverse, up to the point where manufacturers started standardizing on IBM’s PC architecture.
What I listed was just processor architecture families. I didn’t mention what those things went into. DEC shipped full, working machines. The processor families I list above can be a part of larger heterogeneous systems that have different methods of programming. Hell, your computer is multiple computers all talking to each other.
That explosion of variety is what I’m trying to convey as the difference. Because machines got smaller, and the components became generic/able to be integrated into larger systems, the number of possibilities shot through the roof.
I kind of wish more people would take a computer science degree. These rants by ultra-minimalists/retro computing fans often get situated in a kind of weak context, with a bit of a scraped-together notion of the history.
Not leading with “Why portable C and an abstraction layer over the non-portable memory bits is definitely wrong” is kind of a demonstration of the entire problem with this article. Because that’s actually the real answer to the problem: to write a stable compiled program in a stable language which utilizes a cross-platform library that’s retargetable. Example: https://en.wikipedia.org/wiki/OpenGL_Utility_Toolkit - I could take a program I wrote against that in 2001, recompile it, and it’d work. It’s a solved problem, if you’re a developer with an interest in it.
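For what it’s worth, the kind of program meant here really is small. A minimal GLUT window of roughly that vintage looks like the sketch below (untested against any particular toolchain, but these are the classic calls), and it should still build against a current freeglut:

```c
/* Minimal fixed-function OpenGL + GLUT program of the sort that has kept
 * compiling for two decades.  Build on Linux with: cc hello.c -lglut -lGL */
#include <GL/glut.h>

static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);          /* classic immediate-mode triangle */
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 300);
    glutCreateWindow("hello");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```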
I also note that entire episodes of the original Star Trek were about this concept:
There was a time when computers were super playful, but now they feel cold, and have been weaponized against people.
I mention this because the mythologies induced by, e.g., What the Dormouse Said (a good book when dosed with other histories) are not fully representative of reality.
To the author, if you’re reading: you need to start digging into the contexts deeper. You’re still at “I had a vague idea of what programming was”, only vaguely touching the context the technologies were created in and the social reasons they were formed the way they were.
This comment saddens me. From where I’m sitting, it amounts to calling a self-motivated person ignorant, calling their project pointless, and telling them to do better. There is constructive criticism here, but it is sandwiched by less helpful comments.
What’s the tl;dr on this thing? It had nice drawings, and it looked like the person put in effort, so I wanted to support content like this by reading it and discussing it, but my eyes just glazed over (I probably have adult ADHD). It began to sound like a rant against cloud-only software (which I also agree with: I like to work on the train during my commute) but then it got so long and windy. What was the solution?
From my understanding (after reading a few other posts on the author’s site), it’s about two things:
modern software being far too complex for any single person to understand completely; and
modern software growing such that it will use all the resources you have, even at the expense of power consumption.
The author lives on a sailboat, with electricity provided by small solar panels, where every resource (whether that be electricity, water, food…) needs to be used conservatively. When they first started on their journey in that boat, they were using modern stuff - MacBooks and ThinkPads, running contemporary OSes, but quickly discovered that with the ever-growing reliance of modern tech on a high-speed internet connection (and the large battery drain involved with using any of that tech), they needed something better.

The article talks about a few of the things they drew inspiration from, and then at the end talks about the thing they created - uxn/Varvara - a relatively simple sorta-Forth-ish bytecode VM, which has been ported to many different OSes and runs on many different “embedded” platforms.
Some context is that this is from a uxn developer; uxn is a small assembly language for making lightweight applications.
I’d say this presentation touches on two main points:
The developers have lived on an off-grid sailboat, so they don’t have access to traditional software development tools. Most people wouldn’t consider the barriers someone living in the Solomon Islands would face trying to write an Android or iPhone app. Even consider the amount of data needed to write a node.js app or to use fancy development environments on NixOS. Latency makes remote development infeasible.
And the rest of the presentation is about software preservation. If I build an Android app, will you be able to run it in 10 years? How can we make something more portable? Virtual machines can be pretty useful, but they have to present the right level of abstraction.

Thanks @iris and @puffnfresh
The subscription model of software is kind of disturbing, but I can’t tell if that’s because I’ve become an old fuddy duddy. I do very few classes of things on a computer nowadays.
Work (write software) - proprietary tools supplied by work
Write (fiction) - plain text editor
Store photos - OS folders
Store audio recordings - OS folders
Save documents - OS folders + Google docs
I worry about the longevity of my data formats, not the software I used to create them.
I assume that the hardware and software platforms will move. It is inexorable. I detest subscription models - I want to own something: software, hardware, whatever. I don’t care that it gets obsolete, I just care that it works like it used to. I don’t want it to suddenly stop working because of money or connectivity.
However when I buy new hardware, I accept that it may not run old software. Hence my concern with the durability of data formats.
I do get super annoyed when Disney tries to force me to discard my nice 8 year old iPad just to use its updated app. Fortunately enough people feel like me, and the App Store now allows me to use older versions of the app.
If I build an Android app, will you be able to run it in 10 years
In my experience, very likely yes. I have dug up a ten year old Android apk file before and found it still working perfectly on modern Android. It pops up a warning now to the effect that “this is really old and might not still work” but it does just fine.
Android has a very different culture to both Apple and the main body of Google, and actively cares about backwards compatibility.
My thoughts:

Starlink has issues but mostly works really well. If we have systems like Starlink, but more accessible (price and regions), then do we actually need to worry about a 10GB download of Xcode? Today, a rich software developer could put Starlink on their sailboat and not think about data, right?
Lithium batteries can be expensive today but with electric cars and home batteries, etc. the recycled battery market will make it very cheap to have a lot of storage on a sailboat. I know batteries are heavy and space is at a premium so solar is limited, but do you still have to think about energy if you have enough storage?
Starlink isn’t going to cover the whole world, and to your point mostly benefits those in the developed world who can afford $125/mo. or more for access.
A large percentage of the world’s population simply cannot pay that, and even if they could, Starlink isn’t going to build ground stations anywhere near them.
This “Elon will save us!” culture has to end. He might well create little bubbles of gilded comfort for the rich and powerful (satellite Internet, pay-to-play social media, and fast, clean, $100k cars) but his interest ends where the profit margins approach zero.
A large percentage of the world’s population simply cannot pay that, and even if they could, Starlink isn’t going to build ground stations anywhere near them.
Yes, I agree that price is a huge issue but I think Starlink is opening up to regions without ground stations - I guess the latency might not be the 50ms standard, but I imagine it will still be a good thing for places like the Solomon Islands.
This “Elon will save us!” culture has to end.
I would love for Starlink to be worker owned or see an alternative. I just care about the digital divide and Starlink has done things for places like rural Australia that our Government is not able to. I know the example is 10GB Xcode for iPhone development, but what about watching a YouTube video or participating in a Zoom call for school?
And on price, a 4Mbps fibre connection in the Solomon Islands is $170USD per month at the moment. Yes, Starlink’s price is ridiculous for these regions but you have to understand that it’s sadly an improvement.
At some point, you’re going to have to stop. The problem (and underlying motivation behind projects like uxn) is that we just don’t know where to stop. The abstraction towers are getting higher each year, and that 10GB isn’t going to be 10GB in a few years with newer architectures, patterns for programming, and support for legacy devices being shipped under the moniker of a development environment.
Whatever standards you target, they won’t be sufficient once they’re widely used and ultimately pushed beyond their limit. Growth is the problem, not the constants of the current year.

In some ways we’re slowly improving, e.g. home energy efficiency - but yeah, good point, constant growth does exist in software!
I “read” like maybe 20%. Maybe I missed the point. It seems like the author is drawn to what I’d call “programming in the dirt”, where if we got stranded on some desert island, the programs they could bring along (on paper, in carry-on luggage) would be:
fairly readable in a small way (not too simple like Brainfuck nor too complex like the full x86-64 ISA)
possible to run on a runtime you could actually build on the island (not needing the “mainland” if you will)
(somewhat game-oriented?)
Gilligan’s PL, if you will. I take it the author has been developing their own language most similar to 6502 assembly.
Other PLs that come to mind would be: Don Knuth’s MMIX, BASIC, maybe even Standard ML or Featherweight Java.
Nice exposition of the 100r philosophy. I know Devine is on here but I don’t know his handle: Devine, have you taken a look at mu?
Devine’s pronouns are they/them
Yes, I use the Mastodon instance they admin and we have lots of discussion/debate/feedback.
They’re @neauoire.
Interesting read, and fairly encouraging. Someone out there had frustrations that overlap my own, and they tried to do something about it.