I’m not usually interested in reading about young, experimental languages, but this one is really speaking to me, and concatenative languages are totally new to me.
I’m really looking forward to the next post and I’m especially interested to see how you approach IO (I also don’t know anything about Haskell so I think there will be a lot of new concepts for me).
I wonder whether, when IO blocks, it only needs to block one stack while the machine keeps running on the other stacks, or whether the whole multi-stack blocks and we do concurrent programming in a more traditional way, with a second multi-stack and a channel between them.
Semantically, IO on a single stack could indeed be run concurrently with non-IO operations on another stack. Whether or not a runtime takes advantage of that concurrency is another matter. I don’t currently plan to implement such a runtime.
If you had two different stacks doing IO, though, then the ordering is explicit in the text. So you would need a separate explicit mechanism for concurrency in that case. I have some nascent ideas about that, but selecting the best approach will require experimentation with a working implementation.
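The “more traditional way” mentioned in the question, with a second worker and a channel between, might look like this in Rust (just a sketch of the general pattern, not anything tied to the language being discussed):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // One worker does blocking IO-like work on its own thread; the main
    // thread keeps running and receives results over a channel, so the
    // blocking side never stalls the computing side.
    let (tx, rx) = mpsc::channel();
    let io_worker = thread::spawn(move || {
        for i in 0..3 {
            // stand-in for a blocking IO operation
            tx.send(format!("result {}", i)).unwrap();
        }
        // tx is dropped here, which ends the receive loop below
    });
    for msg in rx {
        println!("{}", msg);
    }
    io_worker.join().unwrap();
}
```

The ordering of the three sends is explicit within the IO worker, but nothing in the text fixes how the two sides interleave, which is exactly the explicit-mechanism question raised above.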
This is just an assumption, but isn’t the out param used for slices because they are not Sized?
The compiler can treat -> i32 and out: &mut i32 as the same thing when it already has a reference in which to store the result, as shown in the assembly, and can treat -> i32 as a special case where it may also create a new owned variable. With [u8] the memory needs to be allocated in advance, because the type is not Sized.
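The two conventions being compared can be sketched in Rust like this (the function names are made up for illustration):

```rust
// For a Sized type like i32, returning by value and writing through an
// out reference are interchangeable for the compiler:
fn make_int() -> i32 {
    42
}

fn make_int_out(out: &mut i32) {
    *out = 42;
}

// A slice type like [u8] is not Sized, so a callee cannot return one by
// value; the caller must supply already-allocated memory to fill:
fn fill(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        *b = 0xAB;
    }
}

fn main() {
    let a = make_int();          // new owned variable
    let mut b = 0;
    make_int_out(&mut b);        // existing storage
    assert_eq!(a, b);

    let mut buf = [0u8; 4];      // caller allocates
    fill(&mut buf);              // callee only fills
    assert_eq!(buf, [0xAB; 4]);
}
```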
That’s a correlation, but not a cause. If it were as cheap to construct a 4 KB buffer as a 4-byte integer, you’d probably see far fewer out params in use. But buffers are expensive to allocate, which results in:
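The cost argument shows up directly in Rust’s standard library: std::io::Read::read takes &mut [u8] precisely so that one allocation can be reused across many calls, instead of each call allocating a fresh buffer. A small sketch:

```rust
use std::io::Read;

fn main() -> std::io::Result<()> {
    let data = b"hello world, hello buffers";
    let mut reader = &data[..]; // &[u8] implements Read
    let mut buf = [0u8; 8];     // allocated once, reused every iteration
    let mut total = 0;
    loop {
        let n = reader.read(&mut buf)?; // out param: callee fills our buffer
        if n == 0 {
            break;
        }
        total += n;
    }
    assert_eq!(total, data.len());
    Ok(())
}
```

If read instead returned a freshly allocated Vec<u8>, every iteration would pay for an allocation, which is exactly the expense the out-param style avoids.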