I’m really excited about this idea. I think it has a ton of potential to bring the concepts and ideas from Erlang/BEAM to languages like Rust. However, this is definitely a massive undertaking.
Right now, as far as I can tell, this is leveraging wasmtime to compile WebAssembly ahead of time to native code, but that is very limiting for yielding (as seen in the normalization steps: https://github.com/lunatic-lang/lunatic/blob/main/src/normalisation/reduction_counting.rs). Hot loops without function calls can potentially loop indefinitely with the current implementation, and I don’t see an easy solution to that. Either you take the performance hit of inserting a branch at every potential loop location, or the code is JIT’ed rather than compiled ahead of time. It’s a pretty fundamental problem which I think is going to be hard to solve properly.
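To make the problem concrete, here’s a minimal illustration (my own example, not from the Lunatic repo) of the kind of loop I mean: it compiles to Wasm with no function calls in its body, so there is no call site where a reduction check could fire and the scheduler has no chance to preempt it.

```rust
// A call-free hot loop: nothing in the body ever calls out, so a
// runtime that only checks reductions at function-call boundaries
// can never interrupt it mid-loop.
fn busy(mut n: u64) -> u64 {
    let mut acc = 1u64;
    while n > 1 {
        acc = acc.wrapping_mul(n);
        n -= 1;
    }
    acc // with a large `n`, this blocks the scheduler the whole time
}

fn main() {
    println!("busy(5) = {}", busy(5));
}
```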
There is a lot of work to get this into a usable state, beyond just running and yielding code. It feels like a project that needs serious funding before it can really take off, which sucks because the concept is awesome.
Hi! I’m the author of Lunatic. I have spent some time thinking about the hot loop issue. I don’t have much data to back this up, but recognising that a loop doesn’t have any function calls inside and inserting a check on top of each iteration should not be a big performance hit. Modern CPUs are extremely good at branch prediction, and the reduction counter is always hot and stays inside the L1 cache. Until now this wasn’t a big issue, so I didn’t spend time implementing and testing it.
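A rough sketch of what that per-iteration check could look like (the names, the reduction limit, and the yield behaviour here are all my own illustration, not Lunatic’s actual API): one decrement and one almost-never-taken branch at the top of each call-free loop, with the counter kept in a single hot field.

```rust
// Hypothetical reduction-counting sketch; constants and names are
// illustrative, not taken from the Lunatic codebase.
const REDUCTION_LIMIT: u32 = 10_000;

struct Process {
    reductions: u32, // hot counter, small enough to stay in L1
    yields: u32,     // how many times we handed control back
}

impl Process {
    fn new() -> Self {
        Process { reductions: REDUCTION_LIMIT, yields: 0 }
    }

    // The check inserted at each loop head: one decrement plus one
    // branch that is almost never taken, so the predictor handles it
    // cheaply in the common case.
    #[inline]
    fn consume_reduction(&mut self) {
        self.reductions -= 1;
        if self.reductions == 0 {
            // In a real runtime this would suspend the Wasm instance
            // and return to the scheduler; here we just count it and
            // refill the budget.
            self.yields += 1;
            self.reductions = REDUCTION_LIMIT;
        }
    }
}

// The same kind of call-free hot loop, now with the check inserted.
fn run_hot_loop(iters: u64) -> (u64, u32) {
    let mut p = Process::new();
    let mut sum = 0u64;
    for i in 0..iters {
        p.consume_reduction();
        sum = sum.wrapping_add(i);
    }
    (sum, p.yields)
}

fn main() {
    let (sum, yields) = run_hot_loop(100_000);
    println!("sum={} yields={}", sum, yields);
}
```

With a budget of 10,000 reductions, 100,000 iterations would yield back to the scheduler 10 times, which is the kind of bounded latency you’d want without paying for a check at every basic block.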
I’m glad to hear that you think it should be an approachable fix. And you’re right, it is somewhat difficult to say what the perf impact is, given branch prediction. My initial thought was that it would be too slow, and that JITing with wasmtime and interpreting otherwise might be the way to go, but obviously that is a huge undertaking and might not even yield better performance.
I am super excited about this project, so I look forward to its future developments! Best of luck.