@bcantrill: Is the lack of fine-grained allocator control hurting you in embedded contexts? I know that was a major pain point for some and a big motivating factor for Zig, but maybe Oxide doesn’t actually need any of that?
Well, at the moment we are really doing everything possible to avoid dynamic allocation full-stop – and in fact, what is interesting about the system that we’re developing (in my opinion) is the creativity in assuring that traditionally dynamic activities (like task creation) are done entirely statically. So if/when/where we do have dynamic memory allocation, it is likely to be exceedingly simple – and focused on space efficiency rather than time.
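To make the "tasks created entirely statically" idea concrete, here's a minimal sketch of the general technique (not Oxide's actual code): the full task table, including names, priorities, and stack budgets, is a `static` known at compile time, so nothing is ever allocated at runtime. All identifiers here are hypothetical.

```rust
/// Maximum number of tasks, fixed at compile time (hypothetical value).
const MAX_TASKS: usize = 4;

/// A task descriptor; everything about it is known before the program runs.
#[derive(Copy, Clone)]
struct Task {
    name: &'static str,
    stack_size: usize,
    priority: u8,
}

/// The entire task table lives in static memory; no allocator is involved,
/// so task "creation" is just reading an entry from this array.
static TASKS: [Task; MAX_TASKS] = [
    Task { name: "idle",  stack_size: 256,  priority: 3 },
    Task { name: "uart",  stack_size: 1024, priority: 1 },
    Task { name: "net",   stack_size: 2048, priority: 0 },
    Task { name: "blink", stack_size: 256,  priority: 2 },
];

/// Total stack memory reserved; computable without ever running the system.
fn total_stack() -> usize {
    TASKS.iter().map(|t| t.stack_size).sum()
}

fn main() {
    println!("{} tasks, {} bytes of stack reserved", TASKS.len(), total_stack());
}
```

Because the table is static, the memory footprint of the whole task set is visible in the binary's data section rather than discovered at runtime.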
Having done embedded for a long time in the past, not having dynamic allocation (with all the gimmicks to get the size “right”) was the best long-term decision to guarantee long uptime.
It is very different when you have the luxury of your application being started fresh from the OS each time it needs to run.
Adding on: it’s not “only” a matter of uptime, it also helps tremendously with testing, reviewing and analysis. When everything is allocated statically, you know whether you have enough memory or not right from the start. There’s no imaginable corner case under which malloc might fail to allocate some memory because, well, nobody’s malloc-ing anything.
The fact that you don’t get OOM crashes (and, thus, your uptime doesn’t get trashed) is just one of the nice things about it. You also get more predictable timing, you have a more solid basis for all sorts of hardware-related decisions, and so on.
It’s not a matter of embedded developers being opposed to being dragged out of the stone age of computing, there are some valid technical reasons behind our preference for flint stone arrowheads :).