This is really cool and I love the site’s tagline, which speaks to me on so many levels:
Solving yesterday’s problems today
I find that most of my open-source software work and my hobbyist microelectronics work gravitates toward a similar approach: improving or extending the core tech that was available in the 80s, starting with eschewing GUIs and embracing the hacker ethos. It’s unfortunately not a very lucrative proposition: that ship sailed long, long ago, and there’s not much to be gained (in a literal sense, on a macro level) by embracing legacy tech over the modern life that revolves around web and mobile. Alas…
GUIs were very much around in the 80s, and if you were embracing the hacker ethos, you would be implementing one.
The Oberon system is a product of the hacker ethos, or read the interview with Bill Joy linked here this week.
But don’t you know? The hacker ethos is “better things aren’t possible”.
I should have said “modern” GUIs. I very much enjoy wiring up an OLED or eInk to a microcontroller and have written my own GUI (a thin X11-compatible thing) for PXE/USB bootable minimal systems, but there’s no denying that modern GUIs are far too disconnected from the underlying machines and hacker ethos.
Why does the hacker ethos require things to be connected to the underlying machine? Can a person have the hacker ethos by delighting in the complexity and weirdness of CSS?
The underlying machine is just an infinite tower of turtles, anyway. “Machine code” in most modern CPUs is a facade, an emulator on a very complex microarchitecture. The design of that CPU is a Verilog file abstracted from the physical layout of gates and wires.
My dad was a hardware engineer (at Zilog, AMD, Xilinx) who never learned to program; to him, all software was a vague airy-fairy abstraction.
I have a friend who’s a fab engineer, who worked a long time at Intel; to him, computers are layers of silicon compounds created by huge million-dollar machines by electrochemical processes I don’t understand.
There are as many hacker ethos as there are hackers. It’s a Potter Stewart sort of thing.
The Hacker Ethos is to decide who is not a True ~~Scotsman~~ Hacker.
Hey, I was working in tech in the 80s, and GUIs were the most awesome thing around. I’d been hacking on Apple IIs and PETs and timeshared PDP-11s, and when I read the tech reports from Xerox PARC, and Ted Nelson’s “Dream Machines”, my head exploded.
If you don’t think PARC, who were inventing GUIs in the 70s, were connected to the hacker ethos, go read about their work. They literally had to build their own minicomputers out of small-scale chips because Xerox suits wouldn’t let them buy from Data General. They rolled their own programmable microcode. They wrote four or five operating systems. They hacked the first laser printer out of a big Xerox copier with a freakin’ laser wired into it.
Holy shit that’s awesome!
PARC was the end of an era. The RISC revolution made it much harder to prototype hardware. Between the decline of PARC and the start of the CHERI project, there were very few research groups doing full-stack (hardware, OS, compiler/language) co-design. The Alto at PARC was a fantastic design for prototyping. As far as I can tell, it was the origin of bytecode: each user-facing instruction was a single byte, used as an index into a microcode table. The microcode table was entirely programmable, but when you wrote a compiler for the Alto you’d write the microcode ops and then a translator from your source language into that bytecode. The idea was that a future CPU could then provide hardware implementations of the performance-critical bytecodes and the rest could be implemented via simpler operations.
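The dispatch idea is easy to picture in code. Here’s a minimal sketch in C, purely illustrative: the opcodes, handlers, and table layout are invented for this example rather than taken from the Alto, but the shape is the same: a one-byte instruction indexes a programmable table of routines, and a compiler author populates that table for their language.

```c
/* Sketch only: invented opcodes, not Alto microcode. Each instruction is a
 * single byte used as an index into a table of routines that the compiler
 * writer fills in with the operations their language needs. */
#include <stdint.h>
#include <stdio.h>

typedef struct { int32_t stack[64]; int sp; const uint8_t *pc; } VM;
typedef void (*op_fn)(VM *);

static void op_push1(VM *vm) { vm->stack[vm->sp++] = 1; }                      /* push constant 1 */
static void op_add(VM *vm)   { vm->sp--; vm->stack[vm->sp - 1] += vm->stack[vm->sp]; }
static void op_print(VM *vm) { printf("%d\n", (int)vm->stack[--vm->sp]); }

/* The "programmable microcode table": one handler per byte value. */
static op_fn table[256] = { [0x01] = op_push1, [0x02] = op_add, [0x03] = op_print };

int main(void) {
    const uint8_t program[] = { 0x01, 0x01, 0x02, 0x03, 0x00 };  /* 1 + 1, print */
    VM vm = { .sp = 0, .pc = program };
    while (*vm.pc)                       /* 0x00 ends this toy program */
        table[*vm.pc++](&vm);
    return 0;
}
```

The point of the indirection is the same as on the Alto: the hot entries in the table are candidates for a hardware implementation later, while everything else can stay as ordinary routines.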
I had the opportunity to speak to Alan Kay about some of these things about 15 years ago. He said (among other things) that if you want to get good work out of PhD students then you should buy them a cubic foot of the most powerful processing that’s available on the market. Back then, he suggested GPUs, but today I’d suggest FPGAs. You can simulate a fairly modern CPU on a beefy FPGA at around 5% of real speed. That’s not enough for it to be competitive, but it is fast enough to be usable and great for experimentation. We’re starting to see a resurgence of this kind of work now that there are decent open-source RISC-V soft cores. RISC-V is a terrible ISA, but it’s fine for research, and with high-level HDLs like Bluespec you can easily modify something like the Toooba out-of-order core to do interesting things. When the CHERI project started, we had to build a MIPS64 core from scratch; now you can just download one, run an easily modifiable *BSD or Linux on it, and hack LLVM to make it do something different, all with a very small team.
The problem I have with a lot of modern research is that it’s very stratified. Everyone treats the layer above or below them in the stack as an immutable problem that they need to work around. PARC didn’t do that and some of their most influential work was from the kind of cross-cutting work that wasn’t afraid to modify many layers at a time.
Believe me, it’s not possible to convey in a paragraph how awesome PARC was. Steven Levy’s classic book “Hackers” has a good account, and “Fumbling The Future” is a book-length account of PARC and how Xerox failed to commercialize much of their work.
This is a super neat project!
I have to wonder what an accelerator like this would do to most of the extant software running on 6502-era microcomputers. Pretty much all of it was incredibly timing-specific. I know on the Atari (where my 6502 experience is) you wrote your code to take advantage of the ‘jiffy timer’ to run in the time between when the CRT beam finished scanning one frame and started at the top of the next.
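To make that concrete, here’s a rough C sketch (not real 6502 or Atari OS code; the JIFFY address and the delay constant are invented stand-ins) of the two timing styles that era of software mixed freely. Code that waits on the vertical-blank tick keeps working on a faster CPU; code that burns a tuned number of cycles does not.

```c
/* Sketch only: JIFFY stands in for a memory-mapped counter that the OS bumps
 * once per vertical blank (the Atari jiffy timer works roughly this way).
 * Neither the address nor the loop constant is meant to be authentic. */
#include <stdint.h>

static volatile uint8_t *const JIFFY = (uint8_t *)0x0014;   /* hypothetical location */

/* Style 1: sync to the display. Still correct on an accelerated CPU,
 * because it waits for an external event (the vertical blank). */
static void wait_for_next_frame(void) {
    uint8_t start = *JIFFY;
    while (*JIFFY == start)
        ;   /* spin until the VBLANK handler ticks the counter */
}

/* Style 2: count cycles. The constant was tuned to the stock clock rate,
 * so on an accelerated CPU the delay shrinks and anything that depended on
 * it (raster tricks, sound timing, input handling) breaks. */
static void delay_tuned_for_stock_cpu(void) {
    for (volatile uint16_t i = 0; i < 2000; i++)
        ;   /* only takes the "right" amount of time at stock speed */
}
```

Software that leaned on the first style would mostly survive an accelerator; anything built around the second style, or around racing the beam mid-frame, would need the accelerator to fall back to stock timing.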