It was clear to me from the beginning that I wanted multimethods in Next Generation Shell. It’s one of the main features of the language. I went with no declaration, though: whenever you define a method, it’s a multimethod. Then continue defining methods with the same name as needed. That’s it.
F my_method(x:Int) …
F my_method(x:Str) …
Additionally, when a multimethod is called, the matching is simpler than anywhere else I have seen: methods are scanned from bottom to top, and the first method whose parameters match the arguments is invoked.
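A minimal sketch of that dispatch rule in Python (the helper names here are hypothetical; NGS implements this natively): definitions accumulate in order, and a call scans the list bottom to top for the first type match.

```python
# Hypothetical sketch of NGS-style multimethod dispatch: each definition
# appends to a list; a call scans the list bottom to top and invokes the
# first definition whose parameter types match the arguments.

_methods = {}  # name -> list of (param_types, fn), in definition order

def define(name, param_types, fn):
    _methods.setdefault(name, []).append((param_types, fn))

def call(name, *args):
    for param_types, fn in reversed(_methods[name]):  # bottom to top
        if len(args) == len(param_types) and all(
            isinstance(a, t) for a, t in zip(args, param_types)
        ):
            return fn(*args)
    raise TypeError(f"no matching method for {name}{args!r}")

define("my_method", (int,), lambda x: f"Int: {x}")
define("my_method", (str,), lambda x: f"Str: {x}")

call("my_method", 42)       # -> 'Int: 42'
call("my_method", "hello")  # -> 'Str: hello'
```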
Have a nice weekend!
Factor is the same, I think; very similar, anyway.
Not quite. Generic methods have to be declared with GENERIC: (or one of its siblings, like GENERIC#) prior to defining any methods. Regular words (introduced with : instead of M:) do not dispatch based on object types.

Now how did you get that fancy tag after your name and where can I get one for being jank’s developer? :)
https://lobste.rs/hats
Nice! Thanks!
Does anyone have an example? Searching for “factor language multimethod example” didn’t bring up anything meaningful.
The Factor documentation has a tendency to define everything in terms of its own weird jargon, so it’s a bit hard to find the right places to start, IMO. The main way to define a multimethod is M:, which you can read about here:
https://docs.factorcode.org/content/article-generic.html
https://docs.factorcode.org/content/article-tour-objects.html
I’m not certain that it matches multimethod definitions from the bottom up, but that’s generally how Factor does things - it parses its input code just once and executes things as it finds them, so the last definition generally takes precedence.
Not that it really matters here, but those are not multimethods. Default generic functions in Factor provide only single dispatch.
However, there is an experimental vocabulary providing multiple dispatch, and a work-in-progress / on-hold PR aimed at making it the default.
https://docs.factorcode.org/content/vocab-multi-methods.html
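To illustrate the single-vs-multiple dispatch distinction in Python terms (purely as an analogy, not Factor code): `functools.singledispatch` selects an implementation from the type of the first argument only, which is what Factor’s default generic words do; multiple dispatch would consider every argument.

```python
from functools import singledispatch

# Single dispatch: the implementation is chosen by the type of the
# FIRST argument only - the types of the other arguments are ignored.

@singledispatch
def combine(a, b):
    return "generic"

@combine.register
def _(a: int, b):
    return "int first"

@combine.register
def _(a: str, b):
    return "str first"

combine(1, "x")  # -> 'int first'  (b's type is never consulted)
combine("x", 1)  # -> 'str first'
```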
Nice approach! Similar to C++ overloading, except for the scanning order. Scanning order will always be a pain if methods can be extended cross-modules, I think, since you likely never know which module will be included first across any arbitrary project.
For Clojure, we don’t actually use multimethods often. Every function is polymorphic by default and we work with dynamic typing. But occasionally we want one function name with some behavior specializations and we’ll reach for a multimethod.
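Clojure’s pattern (defmulti plus defmethod, keyed on a dispatch function) could be sketched in Python roughly like this; the helper names are hypothetical and the real Clojure feature is richer (keyword dispatch values, hierarchies, :default):

```python
# Rough sketch of Clojure's defmulti/defmethod: a dispatch function
# computes a key from the arguments, and implementations are registered
# per dispatch value. Helper names here are hypothetical.

def defmulti(dispatch_fn):
    table = {}
    def call(*args):
        key = dispatch_fn(*args)
        impl = table.get(key, table.get("default"))
        if impl is None:
            raise KeyError(f"no method for dispatch value {key!r}")
        return impl(*args)
    call.register = lambda key, fn: table.__setitem__(key, fn)
    return call

# One function name, specialized by the value of the "type" field.
area = defmulti(lambda shape: shape["type"])
area.register("rect", lambda s: s["w"] * s["h"])
area.register("circle", lambda s: 3.14159 * s["r"] ** 2)

area({"type": "rect", "w": 3, "h": 4})  # -> 12
```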
The scanning order works fine for now. Different libraries are expected to bring their own types, so there shouldn’t be conflicts. Simplicity and the ability to reason about dispatch were the main considerations. We’ll solve problems as they arise.
Edit: in the case of inheritance, the parent type with its methods is already loaded by the time the subtype is defined with its own methods. That means the subtype’s methods are defined later in the list and found first during the scan.
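That point can be sketched the same way (again a hypothetical Python analogy): because the subtype’s methods land later in the list, a bottom-to-top scan finds them before the parent’s.

```python
# Hypothetical sketch: parent methods are loaded first, subtype methods
# later; a bottom-to-top scan therefore prefers the subtype's method.

class Animal: pass
class Dog(Animal): pass

methods = []                                          # definition order
methods.append((Animal, lambda a: "animal sound"))    # parent, loaded first
methods.append((Dog,    lambda a: "woof"))            # subtype, loaded later

def speak(x):
    for typ, fn in reversed(methods):  # scan bottom to top
        if isinstance(x, typ):
            return fn(x)

speak(Dog())     # -> 'woof'  (subtype method found first)
speak(Animal())  # -> 'animal sound'
```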
How’s the editor support for jank? Does a clojure tree sitter grammar work? What about an LSP?
Still in the works. Clojure tree sitter will work well. LSP will work except for jank’s C++ interop, which is different from Java interop. For that, I aim to ultimately get clojure-lsp updated to support jank, rather than to re-implement it.
nREPL server support is currently being worked on. Your existing nREPL client should work out of the box.
In short, once jank is ready, you can expect full parity with Clojure JVM for interactive programming from your editor.
Do you intend to keep benchmarks against the Clojure JVM? One of the advantages of Jank, in my mind, is a “lighter” toolchain (debatable when using LLVM) and potentially better performance (native compilation).
I have benchmarked several aspects of jank versus Clojure JVM so far, including microbenchmarks for things such as sequence processing (in which jank is much more efficient and can be much faster) as well as larger benchmarks, such as a pure Clojure ray tracer I wrote. The first handful of posts on the blog (so, starting from the bottom) were all framed around implementing features of jank and benchmarking them against Clojure JVM until I could beat (or at least match) Clojure: https://jank-lang.org/blog/ This also includes a blog post dedicated to jank’s C++ string class, which I wrote from scratch and benchmarked against a couple of the leading C++ string implementations (libc++ and folly).
I haven’t been benchmarking much this year, compared to last year, since I’ve been more focused on feature parity than performance. Going forward, though, I would like to have a continuous benchmarking suite.
The way to think of jank is that it’s a lot like a GraalVM native image, except it still supports JIT compilation as well. jank’s AOT compilation isn’t implemented yet, but we’ll be able to make AOT builds in two flavors:
Static runtime (no JIT compilation, whole program optimizations) – like a native image, but should be even lighter
Dynamic runtime (JIT compilation, optimizations, but more var-based and less inlining, since fns can still be replaced at any time)
This should allow a great amount of flexibility, such that we can make AOT builds during development and still REPL right into them. Then we can make static AOT builds as lean as possible for scenarios where we know we won’t be evaling any new code.
Thanks! I’ve read your articles on Jank with great interest in the past, and was a bit surprised not to see benchmarks, given the early focus on performance shown by the previous posts. AOT is perfect for production workloads, so it’s great to hear it’s on the roadmap.