Chapel is one of those languages which seems like it has a crazy high skill ceiling, but also is so obscure that nobody’s writing blogs or talks showcasing that ceiling. Like you can straight-up specify that two variables should be stored on different machines. What can you do with that kind of control?
Chapel targets high-performance computing systems that mix shared-memory and distributed-memory programming models, where one often wants fine control over the placement of computations and data. A developer is typically trying to avoid the cost of data movement within a complex memory hierarchy that may span disjoint shared-memory spaces, both the cost of the movement itself and the delay it imposes on computations waiting for the data. Most applications outside HPC don't require that level of fine-grained tuning to be exposed to the programmer, so it's safe to delegate the responsibility to a runtime or compiler. I don't think you see much blogging about this since it's a relatively niche community, and there aren't many bloggers or other evangelists in it compared to other communities. Within the HPC world, Chapel is even more niche - most HPC work is dominated by C++ these days. I've been in HPC for going on 30 years, and over that time I've noticed that the HPC community is pretty different from others when it comes to its online presence.
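To make that concrete, here's a minimal sketch (mine, not from the original discussion) of what that control looks like in Chapel: an `on` block moves the executing task to another locale (roughly, another compute node), and variables declared inside it live in that locale's memory. This assumes a program launched on at least two locales (e.g. `./prog -nl 2`):

```chapel
var a = 1;          // 'a' is allocated on locale 0, where execution starts

on Locales[1] {     // migrate the current task to locale 1
  var b = 2;        // 'b' lives in locale 1's memory
  writeln(a + b);   // reading 'a' here performs a remote get from locale 0
}
```

The point is that the two variables really do live on different machines, yet the code that uses them looks like ordinary sequential code; the compiler and runtime insert the communication.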
Thanks for posting our release announcement, @hwayne!
Chapel's ability to declare variables on arbitrary machines was (perhaps obviously) designed primarily for HPC users, who have benefitted from the capability in a variety of scalable applications, from computational fluid dynamics to satellite image analysis to Pandas-like dataframes at scale. But it is arguably becoming relevant to more users due to the rise of GPU-based computing, where it's very useful to specify whether a given variable or array should be allocated in the memory of the CPU or one of the GPUs. Moreover, expressing these host-device data transfers as assignments between variables is very attractive compared with calling cudaMemcpy() or some other vendor's equivalent. This blog article by Engin Kayraklioglu goes into more detail about the application of this longstanding Chapel feature to the GPU context.
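As a rough illustration (my sketch, not taken from the blog article): Chapel exposes each GPU as a sublocale via `here.gpus`, so the same `on` construct places data in GPU memory, and plain assignment handles the host-device transfers:

```chapel
var hostA: [1..1024] real;   // allocated in CPU (host) memory
hostA = 1.0;

on here.gpus[0] {            // execute on GPU 0 of the current node
  var devA: [1..1024] real;  // allocated in GPU (device) memory
  devA = hostA;              // host-to-device copy, via plain assignment
  devA += 1.0;               // runs as a kernel on the GPU
  hostA = devA;              // device-to-host copy, via plain assignment
}

writeln(hostA[1]);           // prints 2.0
```

Compare that with the explicit allocation, `cudaMemcpy()` calls, and kernel-launch boilerplate the equivalent CUDA code would need.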
Though we love it when our users write about their uses of Chapel, most of them are admittedly more focused on their applications and science than on the programming approach. Something we've recently launched to try to help with this is a new 7 Questions for Chapel Users interview series to shine a light on some of their work and experiences. The annual PAW-ATM workshop at SC is also a place where users have summarized their work in more detail over the years, like this pair of paper presentations from PAW-ATM 2023 or Eric Laurendeau's distinguished speaker talk at PAW-ATM 2024 last month.