1. 17

We see on occasion some rather interesting features come along in PLT, such as OCaml’s effects and various Haskell-related things.

I am also interested in the Commander X16, a computer being developed around a 6502 and roughly comparable to a C64.

So I was wondering: which features, whether from more modern languages or from more theoretical PLT research, could benefit languages targeting machines with limited resources such as an 8-bit CPU, 64 kB of RAM, etc.?

  1. 10

    “Statically typed” PLT ideas typically apply equally well to small machines. (Then it’s a matter of whether the languages as a whole support constrained environments. ML/Haskell-family languages were initially designed to run on “good machines”, but some of them also have resource-constrained implementations. For example, the standard OCaml implementation is not suited to very resource-constrained environments, but OMicroB is an OCaml implementation targeting microcontrollers.)

    You may be interested in Gibbon, an experimental language design based on the idea of keeping the in-memory representation of tree-shaped data identical to a compact serialization format (relative offsets rather than pointers, etc.).
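
    To make that idea concrete, here is a rough sketch in C of a binary tree kept in a flat byte buffer, with children reached through relative offsets instead of pointers, so the in-memory form is also the serialized form. The encoding here is my own toy layout for illustration, not Gibbon’s actual format:

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy flat encoding (not Gibbon's real layout; little-endian host assumed):
     *   LEAF: [tag 0x00][int32 value]
     *   NODE: [tag 0x01][uint16 offset to right child][left subtree][right subtree]
     * The left child starts right after the header, so only the right child needs
     * an offset, and offsets are relative to the start of the node. */
    enum { LEAF = 0x00, NODE = 0x01 };

    /* Sum every leaf by walking the buffer directly -- no deserialization step. */
    static int32_t sum(const uint8_t *buf, uint16_t pos) {
        if (buf[pos] == LEAF) {
            int32_t v;
            memcpy(&v, buf + pos + 1, sizeof v);
            return v;
        }
        uint16_t right;
        memcpy(&right, buf + pos + 1, sizeof right);
        return sum(buf, pos + 3)         /* left child follows the 3-byte header */
             + sum(buf, pos + right);    /* right child via relative offset      */
    }

    int main(void) {
        const uint8_t tree[] = {
            NODE, 0x08, 0x00,              /* right child starts 8 bytes in */
            LEAF, 0x01, 0x00, 0x00, 0x00,  /* left  leaf: 1 */
            LEAF, 0x02, 0x00, 0x00, 0x00,  /* right leaf: 2 */
        };
        printf("%d\n", (int)sum(tree, 0)); /* prints 3 */
        return 0;
    }
    ```

    The point is that `sum` walks the same bytes you would write to disk or send over the wire; nothing is first unpacked into a pointer-linked structure.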

    1. 4

      (Maybe OT: I realize people have a fondness for the 6502 based on beloved hardware, but IIRC it is considered a particularly hostile (challenging?) target for compilers because of its extreme shortage of registers and 16-bit operations. Even Woz got frustrated enough to write and use a small 16-bit interpreter called “Sweet16” for the Apple ][ ROM. Back in the day I found the Z80 much easier to code for.)

      Anyway. Forth worked really well on 8-bit systems, and there’s been a lot of progress in concatenative languages lately, so I wonder if any of those would work well in that domain. I’m guessing you’d still want a traditional threaded interpreter, not a compiler, because of the above-mentioned problems with native codegen, but modern features like static typing and lambdas/quotes would be great.
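
      For anyone who hasn’t seen one, here is a toy sketch in C of the kind of threaded/bytecode inner interpreter I mean. The opcodes and program are invented for illustration; a real Forth inner loop on a 6502 is a handful of assembly instructions, but the shape is the same:

      ```c
      #include <stdio.h>

      /* Toy token-threaded inner loop in the Forth tradition: the "compiled"
       * program is a sequence of one-byte tokens plus inline operands, and the
       * interpreter is a tight fetch-and-dispatch loop. */
      enum { OP_LIT, OP_ADD, OP_DOT, OP_HALT };

      static int stack[16];
      static int sp = 0;

      static void push(int v) { stack[sp++] = v; }
      static int  pop(void)   { return stack[--sp]; }

      int main(void) {
          /* Equivalent of the Forth program:  2 3 + .  */
          static const unsigned char program[] = {
              OP_LIT, 2, OP_LIT, 3, OP_ADD, OP_DOT, OP_HALT,
          };
          const unsigned char *ip = program;   /* instruction pointer into the thread */

          for (;;) {
              switch (*ip++) {
              case OP_LIT:  push(*ip++);                        break;
              case OP_ADD:  { int b = pop(); push(pop() + b); } break;
              case OP_DOT:  printf("%d\n", pop());              break; /* Forth's "." */
              case OP_HALT: return 0;
              }
          }
      }
      ```

      In principle the modern features could live entirely in front of this loop: a static type checker or quotation support can be compiled away before this stage, so the runtime stays this small.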

      1. 1

        Is there any chance you might elaborate on what progress has been made in concatenative languages?

        1. 1

          I’m not enough of an expert to be able to do that. If you search on this site, or just do a web search, you can find some good articles about them.

      2. 3

        As far as I am aware, no high-level compiled language has ever done really well on an 8-bit CPU like a 6502. (Forth aside, perhaps.) You can do it, but from what I’ve heard you tend to end up writing C (or whatever) in a dialect that works a lot like the target machine’s assembly language anyway. But life gets a lot better on a 16-bit CPU, where you have a bit more register space and probably enough memory for a stack.

        On the compiler backend, I think that good dataflow analysis would be very useful for minimizing the amount of RAM used by a program. With a small program that has to fit in 64 kB of RAM you could maybe even extend it to the program’s data segment and maybe dynamic memory, and have the compiler figure out some of the things for you that old assembly programmers did, like “this address contains value X in this part of the program and value Y in this other part where X is never needed” (see the sketch below).

        I bet if you really wanted to then you could even make a compiler that does (or tries to do) some of the really insane things super-constrained programs did, like use instructions in the code segment as a constant table. You’d have to code each “trick” into the compiler separately, though; I don’t know of any system that just structures your program like this.
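
        Here is a tiny hand-written C illustration of the overlay idea above (names and uses invented): one byte of statically allocated RAM serves two unrelated purposes because the two values are never live at the same time, which is exactly the kind of fact a whole-program liveness/dataflow analysis could prove and exploit automatically:

        ```c
        #include <stdio.h>

        static unsigned char scratch;   /* one byte of RAM, two non-overlapping uses */

        static void count_lines(const char *src) {
            scratch = 0;                          /* use 1: line counter */
            for (; *src; src++)
                if (*src == '\n') scratch++;
            printf("lines: %d\n", scratch);
        }                                         /* the line count is dead here */

        static void checksum(const unsigned char *data, unsigned len) {
            scratch = 0;                          /* use 2: running checksum */
            while (len--) scratch += *data++;
            printf("sum: %d\n", scratch);
        }

        int main(void) {
            count_lines("hello\nworld\n");
            checksum((const unsigned char *)"abc", 3);
            return 0;
        }
        ```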

        1. 1

          Could dataflow analysis be used the way Rust uses its borrow checker and lifetimes, to figure out when memory is no longer needed? Perhaps pulling some ideas from ARC in Objective-C.

          1. 1

            Yeah, to be honest I think this is the biggest problem to solve: high-level languages generally use memory inefficiently, and that’s a big problem on 8-bit machines with 64 kB heaps.

            I agree with the parent that most languages for 8-bit machines end up being a thin layer over assembly language, if anything.

            So actually I think the one advance that is now applicable, and that didn’t exist at the time, is Rust-style borrow checking! (And I’m not a Rust user, or even a proponent for most cases.)
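
            Here is a sketch in plain C of what that lifetime discipline buys you on a tiny machine (the arena, its size, and the function names are all made up): allocations whose lifetimes are statically scoped can be reclaimed by resetting a bump pointer at a known program point, with no GC and no per-object bookkeeping. A borrow-checker-style analysis is what would let a compiler prove such resets are safe and insert them for you:

            ```c
            #include <stdio.h>

            static unsigned char arena[256];      /* fixed 256-byte arena */
            static unsigned char arena_top = 0;

            /* Allocation is just a pointer bump. */
            static void *bump_alloc(unsigned char n) {
                void *p = &arena[arena_top];
                arena_top += n;
                return p;
            }

            static void render_line(const char *text) {
                unsigned char mark = arena_top;   /* everything below lives only in this call */
                char *buf = bump_alloc(64);
                sprintf(buf, "> %s", text);
                puts(buf);
                arena_top = mark;                 /* lifetime over: reclaim in one move */
            }

            int main(void) {
                render_line("hello");
                render_line("world");
                printf("arena bytes still in use: %d\n", arena_top);  /* prints 0 */
                return 0;
            }
            ```

            That “every value has a statically known point where it dies” property is roughly what Rust’s borrow checker makes checkable at compile time.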

            1. 1

              Yeah, I was thinking more of treating it like register allocation: running it over every temporary in your program to pack them down to the minimum amount of memory needed… But it’s not entirely unrelated!

        2. 2
          1. 1

            That is an interesting idea. There was this language called FAST that I got off a CD-ROM collection in the 90s. It wasn’t far removed from assembly itself. C feels like a higher-level language, but a language that feels like assembly, just with some shortcuts added on and with types, would be rather interesting.

          2. 2

            The steady progress of compiler technology would be interesting here: not only are compilers much more effective at optimising, but the machines running them are so much more powerful than e.g. a C64 that you can afford very expensive compile-time analysis for such a small target.

            Superoptimisation, brute-force searching over short instruction sequences to find the most efficient one, is also sometimes viable.
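
            As a flavour of what that search looks like, here is a toy brute-force superoptimiser in C over a made-up single-accumulator ISA (the opcodes and target function are arbitrary inventions). Real superoptimisers for 6502 or x86 work over real instruction encodings with much smarter pruning, but the shortest-first exhaustive search is the same idea:

            ```c
            #include <stdio.h>

            typedef unsigned char u8;

            enum { OP_SHL, OP_INC, OP_DEC, OP_ADDX, OP_SUBX, OP_NOT, N_OPS };
            static const char *names[] = { "shl", "inc", "dec", "addx", "subx", "not" };

            static u8 target(u8 x) { return (u8)(3 * x + 1); }   /* function we want */

            /* Execute a candidate program; the accumulator starts as the input. */
            static u8 run(const int *prog, int len, u8 x) {
                u8 a = x;
                for (int i = 0; i < len; i++)
                    switch (prog[i]) {
                    case OP_SHL:  a <<= 1;    break;
                    case OP_INC:  a++;        break;
                    case OP_DEC:  a--;        break;
                    case OP_ADDX: a += x;     break;
                    case OP_SUBX: a -= x;     break;
                    case OP_NOT:  a = (u8)~a; break;
                    }
                return a;
            }

            /* A program is correct only if it matches the target on all 256 inputs. */
            static int correct(const int *prog, int len) {
                for (int x = 0; x < 256; x++)
                    if (run(prog, len, (u8)x) != target((u8)x)) return 0;
                return 1;
            }

            int main(void) {
                int prog[4];
                for (int len = 1; len <= 4; len++) {            /* shortest programs first */
                    long total = 1;
                    for (int i = 0; i < len; i++) total *= N_OPS;
                    for (long n = 0; n < total; n++) {          /* every sequence of this length */
                        long k = n;
                        for (int i = 0; i < len; i++) { prog[i] = (int)(k % N_OPS); k /= N_OPS; }
                        if (correct(prog, len)) {
                            for (int i = 0; i < len; i++) printf("%s ", names[prog[i]]);
                            printf("\n");           /* prints "shl addx inc", i.e. 2x + x + 1 */
                            return 0;
                        }
                    }
                }
                puts("no sequence found");
                return 0;
            }
            ```

            Even on a modest host, exhausting every sequence of four of these opcodes takes well under a second; the cleverness in real superoptimisers is mostly in pruning the search once real instruction sets are involved.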