I only read the beginning of the README, and I wish there were a more formal overview of the language's key innovations. The premises are very interesting, but I couldn't get the gist of what exactly the impact of avoiding heap allocation is on the syntax and semantics.
I didn't understand it either. Much more hype than information. Plus, the author claims a focus on doing better than C or LISP on things like GPUs but ignores all the languages built for parallel programming. Some of those were extensions to or libraries for C that were used successfully on supercomputers. The author also obsesses over not allocating everything on the heap, when HPC-style languages mostly don't anyway, as far as I'm aware.
I'm keeping it since it looked like it might still have some useful ideas. I'm just skeptical that the author's developments should be trusted, given that they did no research on prior work beyond forking Futhark.
Glad I'm not the only one! I found Futhark very clearly presented, both in terms of features and actual code examples.
I'd really like to see examples of things like tree traversal without heap-allocated objects. Is that even possible/desirable? Most of the software I've written is more about transforming heterogeneous data than computing things on large homogeneous datasets. GPU programming, just like DSP programming, evolved from signal processing toward general-purpose programming, but is it really that general? Is high parallelization always a win?
I'd like to answer you but can't. Too far out of my expertise. If I hadn't gone into INFOSEC, algorithms on large amounts of data would have been my other choice, since the tradeoffs are endless. I do suggest looking at X10, Chapel, and ParaSail to see how some parallel languages were designed. I know ParaSail got rid of the global heap. Cray is still pushing Chapel, recently comparing it to some mainstream languages in benchmarks.
I'd never heard of X10, thanks for the tip! I might actually dive a little deeper into Spiral and see if there's something of substance in there.