There is no static type system, so you don’t need to “emulate the compiler” in your head to reason about compilation errors.
Similar to how dynamic languages don’t require you to “emulate the compiler” in your head, purely functional languages don’t require you to “emulate the state machine”.
This is not how I think about static types. They’re a mechanism for allowing me to think less by making a subset of programs impossible. Instead of needing to think about whether s can be “hello” or 7, I know I only have to worry about s being 7 or 8. The compiler error just means I accidentally wrote a program where it is harder to think about the possible states of the program. The need to reason about the error means I already made a mistake in reasoning about my program, which is the important thing. Fewer errors before the program is run doesn’t mean the mistakes weren’t made.
I am not a zealot; I use dynamically typed languages. But I use them for problems where the dynamism inherent in the problem means introducing the ceremony of program-level runtime typing is extra work, not because reading the compiler errors is extra work.
This is very analogous to the benefits of functional languages you point out. By not having mutable globals the program is easier to think about: if s is 7, it is always 7.
Introducing constraints to the set of possible programs makes it easier to reason about our programs.
I appreciate the sentiment of your reply, and I do understand the value of static typing for certain problem domains.
Regarding this:
“making a subset of programs impossible”
How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.
I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I really like your point about having to master several languages. I’m glad to be rid of a preprocessor, and languages like Zig and Nim are making headway on unifying compile-time and runtime programming. I disagree about the type system, though: it does add complexity, but it’s scalable and, I think, very important for larger codebases.
Ideally the “impossible subset” corresponds to what you already know is incorrect application behavior — that happens a lot of the time, for example declaring a “name” parameter as type “string” and “age” as “number”. Passing a number for the name is nonsense, and passing a string for the age probably means you haven’t parsed numeric input yet, which is a correctness and probably security problem.
It does get a lot more complicated than this, of course. Most of the time that seems to occur when building abstractions and utilities, like generic containers or algorithms, things that less experienced programmers don’t do often.
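To make that concrete, here is a minimal sketch of the kind of mistake such declarations rule out (hypothetical names, written in Haskell purely for illustration):

    -- "name" declared as a string, "age" as a number.
    greet :: String -> Int -> String
    greet name age = name ++ " is " ++ show age ++ " years old"

    main :: IO ()
    main = putStrLn (greet "Ada" 36)
    -- putStrLn (greet 36 "Ada")  -- nonsense, rejected before the program ever runs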
In my experience, dynamically-typed languages make it easier to write code, but harder to test, maintain and especially refactor it. I regularly make changes to C++ and Go code, and rely on the type system to either guide a refactoring tool, or at least to produce errors at all the places where I need to fix something.
How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.
You’re right that you have to “think like the compiler” to be able to describe the impossible programs for it to check them, but everybody writing a program has an idea of what they want it to do.
If I don’t have static types and I make the same mistake, I will have to reason about the equivalent runtime error at some point.
I suppose my objection is framing it as “static typing makes it hard to understand the compiler errors.” It is “static typing makes programming harder” (with the debatably-worth-it benefit of making running the program easier). The understandability of the errors is secondary: if there is value, there’s still value even if the error was as shitty as “no.”
But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I think this is the same for “functionalness”. For example, often I find I’d rather set up a thread local or similar because it is easier to deal with than threading some context argument through everything.
I suppose there is a difference in the sense that being functional is not (as much of) a configurable constraint. It’s more or less on or off.
I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I sometimes divide programmers into two categories: the first acknowledges that programming is a form of applied maths. The second went into programming to get away from maths.
It is very difficult for me to relate to the second category. There’s no escaping the fact that our computers ultimately run formal systems, and most of our job is to formalise unclear requirements into an absolutely precise specification (source code), which is then transformed by a formal system (the compiler) into a stream of instructions (object code) that will then be interpreted by some hardware (the CPU, GPU…) with more or less relevant limits & performance characteristics. (It’s obviously a little different if we instead use an interpreter or a JIT VM).
Dynamic type systems mostly allow scared-of-maths people to ignore the mathematical aspects of their programs for a bit longer, until of course they get some runtime error. Worse, they often mistake their should-have-been-a-type-error mistakes for logic errors, and then claim a type system would not have helped them. Because contrary to popular belief, type errors don’t always manifest as such at runtime. Especially when you take advantage of generics & sum types: they make it much easier to “define errors out of existence”, by making sure huge swaths of your data are correct by construction.
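To make the “correct by construction” point concrete, a minimal sketch (hypothetical example in Haskell): the only way to obtain an Email value is to go through the parser, so code past that boundary never has to re-check it:

    data Email = Email { user :: String, domain :: String }

    -- Returning a sum type (Maybe) forces the caller to handle failure once, at
    -- the boundary; past this point an Email is valid by construction.
    parseEmail :: String -> Maybe Email
    parseEmail s = case break (== '@') s of
      (u, '@':d) | not (null u) && not (null d) -> Just (Email u d)
      _                                          -> Nothing

    send :: Email -> IO ()
    send e = putStrLn ("sending to " ++ user e ++ "@" ++ domain e)

    main :: IO ()
    main = maybe (putStrLn "invalid address") send (parseEmail "ada@example.org")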
And the worst is, I suspect you’re right: it is quite likely most programmers are scared of maths. But I submit maths aren’t the problem. Being scared is. People need to learn.
My claim is you have to think like the compiler to do that.
My claim is that I can just run the compiler and see if it complains. This provides a much tighter feedback loop than having to actually run my code, even if I have a REPL. With a good static type system my compiler is disciplined so I don’t have to be.
Saying that people who like dynamic types are “scared of math” is incredibly condescending and also ignorant. I teach formal verification and am writing a book on formal logic in programming, but I also like dynamic types. Lots of pure mathematics research is done with Mathematica, Python, and Magma.
I’m also disappointed but unsurprised that so many people are arguing with a guy for not making the “right choices” in a language about exploring tradeoffs. The whole point is to explore!
Obviously people aren’t monoliths, and there will be exceptions (or significant minorities) in any classification.
Nevertheless, I have observed that:
Many programmers have explicitly taken up programming to avoid doing maths.
Many programmers dispute that programming is applied maths, and some downvote comments saying otherwise.
The first set is almost perfectly included in the second.
As for dynamic typing, almost systematically, arguments in favour seem to be less rigorous than arguments against. Despite Lisp. So while the set of dynamic typing lovers is not nearly as strongly correlated with “maths are scary”, I do suspect a significant overlap.
While I do use Python for various reasons (available libraries, bignum arithmetic, and popularity among cryptographers (SAGE) being the main ones), dynamic typing has systematically hurt me more than it helped me, and I avoid it like the plague as soon as my programs reach non-trivial sizes.
I could just be ignorant, but despite having engaged in static/dynamic debates with articulate peers, I have yet to see any compelling argument in favour. I mean there’s the classic sound/complete dilemma, but non-crappy systems like F* or what we see in ML and Haskell very rarely stopped me from writing a program I really wanted to write. Sure, some useful programs can’t be typed. But for those most static type systems have escape hatches. And many programs people think can’t be typed actually can. See Rich Hickey’s transducers for instance. Throughout his talk he was dismissively daring static programmers to type them, only to have a Haskell programmer actually do it.
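For reference, a sketch of the usual Haskell rendering of that transducer type (my paraphrase, not the exact code from that exchange): a transducer rewrites reducing functions and is polymorphic in the accumulator, which is where the rank-2 type comes in:

    {-# LANGUAGE RankNTypes #-}

    type Reducer a r    = r -> a -> r
    type Transducer a b = forall r. Reducer a r -> Reducer b r

    mapping :: (b -> a) -> Transducer a b
    mapping f step = \acc x -> step acc (f x)

    filtering :: (a -> Bool) -> Transducer a a
    filtering p step = \acc x -> if p x then step acc x else acc

    -- Keep the even numbers, triple them, then sum: prints 90.
    xform :: Transducer Int Int
    xform = filtering even . mapping (* 3)

    main :: IO ()
    main = print (foldl (xform (+)) 0 [1 .. 10])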
There are of course very good arguments favouring some dynamic language at the expense of some static language, but they never survive a narrowing down to static & dynamic typing in general. The dynamic language may have a better standard library, the static language may have a crappy type system with lots of CVE inducing holes… all ancillary details that have little to do with the core debate. I mean it should be obvious to anyone that Python, Mathematica, and Magma have many advantages that have little to do with their typing discipline.
Back to what I was originally trying to respond to, I don’t understand people who feel like static typing has a high cognitive cost. Something in the way their brain works (or their education) is either missing or alien. And I’m highly sceptical of claims that some people are just wired differently. It must be cultural or come from training.
And to be honest I have an increasingly hard time considering the dynamic and static positions equal. While I reckon dynamic type systems are easier to implement and more approachable, beyond that I have no idea how they help anyone write better programs faster, and I increasingly suspect they do not.
Even after trying to justify that you’ve had discussions with “articulate peers” and “could just be ignorant” and this is all your own observations, you immediately double back to declaring that people who prefer dynamic typing are cognitively or culturally defective. That makes it really, really hard to assume you’re having any of these arguments in good faith.
To be honest I only recall one such articulate peer. On Reddit. He was an exception, and you’re the second one that I recall. Most of the time I see poorer arguments strongly suggesting either general or specific ignorance (most of the time they use Java or C++ as the static champion). I’m fully aware of how unsettling and discriminatory the idea is that people who strongly prefer dynamic typing would somehow be less. But from where I stand it doesn’t look that false.
Except for the exceptions. I’m clearly missing something, though I have yet to be told what.
Thing is, I suspect there isn’t enough space in a programming forum to satisfactorily settle that debate. I would love to have strong empirical evidence, but I have reasons to believe this would be very hard: if you use real languages there will be too many confounding variables, and if you use a toy language you’ll naturally ignore many of the things both typing disciplines enable. For now I’d settle for a strong argument (or set thereof). If someone has a link that would be much appreciated.
And no, I don’t have a strong link in favour of static typing either. This is all deeply unsatisfactory.
There seems to be no conclusive evidence one way or the other: https://danluu.com/empirical-pl/
Sharing this link is the only correct response to a static/dynamic argument thread.
I know of — oops I do not, I was confusing it with some other study… Thanks a ton for the link, I’ll take a look.
Edit: from the abstract there seems to be some evidence of the absence of a big effect, which would be just as huge as evidence of an effect one way or the other.
Edit 2: just realised this is a list of studies, not just a single study. Even better.
Well, it’s the subset of programs which decidably don’t have the desired type signature! Such programs provably aren’t going to implement the desired function.
Let me flip this all around. Suppose that you’re tasked with encoding some function as a subroutine in your code. How do you translate the function’s type to the subroutine’s parameters? Surely there’s an algorithm for it. Similarly, there are algorithms for implementing the various primitive pieces of functions, and the types of each primitive function are embeddable. So, why should we build subroutines out of anything besides well-typed fragments of code?
Sure, but I think you’re talking past the argument. It’s a tradeoff. Here is another good post that explains the problem and gives it a good name: biformity.
https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity
People in the programming language design community strive to make their languages more expressive, with a strong type system, mainly to increase ergonomics by avoiding code duplication in final software; however, the more expressive their languages become, the more abruptly duplication penetrates the language itself.
That’s the issue that explains why separate compile-time languages arise so often in languages like C++ (mentioned in the blog post), Rust (at least 3 different kinds of compile-time metaprogramming), OCaml (many incompatible versions of compile-time metaprogramming), Haskell, etc.
Those languages are not only harder for humans to understand, but for tools as well.
The Haskell metaprogramming system that jumps immediately to mind is Template Haskell, which makes a virtue of not introducing a distinct metaprogramming language: you use Haskell for that purpose as well as for the main program.
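For instance, a tiny sketch (hypothetical, assuming GHC with the TemplateHaskell extension) where the code that runs at compile time is just ordinary Haskell:

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH

    -- The sum is computed while the module is being compiled; the splice merely
    -- embeds the resulting literal (55) into the program.
    main :: IO ()
    main = print $(litE (integerL (sum [1 .. 10])))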
Yeah the linked post mentions Template Haskell and gives it some shine, but also points out other downsides and complexity with Haskell. Again, not saying that types aren’t worth it, just that it’s a tradeoff, and that they’re different when applied to different problem domains.
This is probably a fair characterization.
I am a bit skeptical of this. Certainly C++ is harder for a tool to understand than C, say, but I would be much less certain of, say, Ruby vs Haskell.
Though I suppose it depends on whether the tool is operating on the program source or a running instance.
One common compelling reason is that dynamic languages like Python only require you to learn a single tool in order to use them well. […] Code that runs at compile/import time follows the same rules as code running at execution time. Instead of a separate templating system, the language supports meta-programming using the same constructs as normal execution. Module importing is built-in, so build systems aren’t necessary.
That’s exactly what Zig is doing with its “comptime” feature: using the same language, while keeping a statically typed and compiled approach.
I’m wondering where you feel dynamic functional languages like Clojure and Elixir fall short? I’m particularly optimistic about Elixir as of late since they’re putting a lot of effort in expanding to the data analytics and machine learning space (their NX projects), as well as interactive and literate computing (Livebook and Kino). They are also trying to understand how they could make a gradual type system work. Those all feel like traits that have made Python so successful and I feel like it is a good direction to evolve the Elixir language/ecosystem.
I think there are a lot of excellent ideas in both Clojure and Elixir!
With Clojure the practical dependence on the JVM is one huge deal breaker for many people because of licensing concerns. BEAM is better in that regard, but shares the way VMs require a lot of runtime complexity that makes them harder to debug and understand (compared to, say, the C ecosystem tools).
For the languages themselves, simple things like explicit returns are missing, which makes the languages feel difficult to wield, especially for beginners. So enumerating that type of friction would be one way to understand where the languages fall short. Try to recoup some of the language’s strangeness budget.
I’m guessing the syntax is a pretty regular Lisp, but with newlines and indents making many of the parentheses unnecessary?
Some things I wish Lisp syntax did better:
More syntactically first-class data types besides lists. Most obviously dictionaries, but classes kind of fit in there too. And lightweight structs (which get kind of modeled as dicts or tuples or objects or whatever in other languages).
If you have structs you need accessors. And maybe that uses the same mechanism as namespaces. Also a Lisp weak point.
Named and default arguments. The Lisp approaches feel like kludges. Smalltalk is kind of an ideal, but secretly just the weirdest naming convention ever. Though maybe it’s not so crazy to imagine Lisp syntax with function names blown out over the call like in Smalltalk.
Great suggestions thank you! The syntax is trying to avoid parentheses like that for sure. If you have more thoughts like this please send them my way!
This might be an IDE / LSP implementation detail, but would it be possible to color-code the indentation levels? Similar to how editors color code matching brackets these days. I always have a period of getting used to Python where the whitespace sensitivity disorients me for a while.
Most editors will show a very lightly shaded vertical line for each indentation level with Python. The same works well for this syntax too. I have seen colored indentation levels (such as https://archive.fosdem.org/2022/schedule/event/lispforeveryone/), but I think it won’t be needed because of the lack of parentheses. It’s the same reason I don’t think it’ll be necessary to use a structural editor like https://calva.io/paredit/
Are you hoping/expecting this language/implementation to be actually useful for real world production stuff or be a place to experiment with ideas that could be adopted into more mature functional language ecosystems?
I’d love to use it in production and that’s what I’m aiming for (and where my experience is). If it influences anyone else that would be great too.
He spoke of the potential for functional languages to provide a significant, intrinsic advantage when it comes to parallel computing. I had never observed this myself, so I found the concept intriguing and it stuck with me.
There’s one functional runtime language that is making this a reality, although it’s currently a prototype that’s mostly only applicable to formal proofs, due to limited IO. And it’s also not really based on the lambda calculus, but on something called interaction nets, which are an inherently parallel model of computation. The runtime is called the HVM. The authors recently raised funding to turn it into a production-ready runtime for other languages, like JavaScript.
I do think it’s an important advancement, but after having spent some time writing my own implementation of the core, since I thought it would be helpful for some problems I was interested in, I realized that there aren’t a whole lot of applications whose parallelism is complicated enough to require such a runtime. Proof programming languages might be the best use case. Most applications that benefit from parallelization are trivially parallel.
So, I think it’s an important advancement, but I don’t think it’s as revolutionary as some advocates envision.
Do you have any more information on the project? This is a bit light.
I haven’t shared the open source project publicly yet, but I plan to later this year.
This thread has some example code and a link for more info if you’re interested (some details have changed since): https://twitter.com/haxor/status/1618054900612739073
And I wrote a related post about motivations here: https://www.onebigfluke.com/2022/11/the-case-for-dynamic-functional.html
You might look at Unison, a recent functional programming language designed with distributed computing in mind.
Thanks for the link! Yes I think that and Wing are both interesting, blurring the line between infrastructure and workload.
Looks like Honu.
Looks like https://hg.sr.ht/~duangle/scopes
Great link thanks for that!