In what sense does Go have structural typing?
Interfaces are typed structurally (as long as the type signatures match) rather than normally (requiring explicit “implements” annotations like Java).
It’s a little more complicated: if you have two identical named interfaces, A and B, and a method M that either takes or returns that interface, then whether it uses A or B affects which interfaces a type with that method satisfies. That is, M() A and M() B are nominally different signatures even though A and B are structurally identical. In other words, interfaces are only “flattened” by one level. Generics now let you paper over the difference, but it was a practical problem before that.
Yup. And this is one of several reasons why interfaces should generally be defined only in terms of primitive types, ideally stdlib types. Anything else invites ambiguity like the kind you’re describing here.
Yeah, the way I’d describe it is that Go is still nominally typed by default, so method signature matching still works nominally rather than structurally. Assignability of interfaces is a structurally typed “island” within the system.
(“Normally” is supposed to be “nominally”).
Off-topic: There’s an edit button :)
You can’t edit your post after a certain amount of time has passed.
Planning is a really cool domain which doesn’t get enough attention. This post is a nice intro to it.
It’s a very active area of research and there are different solvers which compete (on performance, solution quality, etc.) in the International Planning Competition. At the time I was looking into this, the leader was Fast Downward, but that may have changed.
http://planning.domains/ is a nice resource, with a web editor where you can write problems in PDDL and run them on a variety of solvers.
My understanding of planning is that it’s basically graph search with some smart stuff added on. A lot of work goes into cheaply pruning the search space and prioritising what’s left. In contrast to other AI fields, the algorithms are logical and understandable, and hence also explainable.
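To make the “graph search with smart stuff added on” framing concrete, here is a toy greedy best-first search skeleton in Haskell (nothing to do with Fast Downward or PDDL; all names here are made up for illustration). Real planners differ mainly in how cleverly they compute the heuristic and prune the frontier, but the overall shape is roughly this.

import Data.List (sortOn)
import qualified Data.Set as Set

-- Toy greedy best-first search: always expand the frontier path whose head
-- scores lowest under the heuristic, skipping states we've already expanded.
bestFirst
  :: Ord state
  => (state -> [state])  -- successor function (the "domain model")
  -> (state -> Int)      -- heuristic: lower means "looks closer to the goal"
  -> (state -> Bool)     -- goal test
  -> state               -- initial state
  -> Maybe [state]       -- a path from the initial state to a goal, if found
bestFirst succs h isGoal start = go Set.empty [(h start, [start])]
  where
    go _ [] = Nothing
    go seen frontier =
      case sortOn fst frontier of
        []                       -> Nothing
        ((_, []) : rest)         -> go seen rest  -- defensive: we never build empty paths
        ((_, path@(s:_)) : rest)
          | isGoal s             -> Just (reverse path)
          | Set.member s seen    -> go seen rest
          | otherwise            ->
              let children = [ (h s', s' : path) | s' <- succs s ]
              in  go (Set.insert s seen) (rest ++ children)

-- Tiny example domain: states are Ints, the "actions" are +1 and *2,
-- and we want to reach 12 starting from 1.
main :: IO ()
main = print (bestFirst (\n -> [n + 1, n * 2]) (\n -> abs (12 - n)) (== 12) 1)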
There’s massive untapped potential for applying planning and constraint solving to real world domains - and no GPUs are required!
Not required maybe, but if you can represent your graph as an adjacency matrix you can probably run a bunch of these algorithms quickly on a GPU, which should help scale them up a lot.
I’ve been thinking about this recently from the point of view of someone who has had to teach functional programming in the past. I came to the conclusion that a new language was necessary, designed specifically for teaching, but I must admit that Scheme is also a good choice, and already exists.
Scheme per se may not be designed specifically for teaching, but some dialects of Racket are, such as what Programming Languages: Application and Interpretation (PLAI) uses.
I can’t really agree with your conclusion regarding Haskell. In my not too extensive experience (taking a mandatory basic and an advanced FP course, and later being a TA for the faculty), installation was simply not a problem — dependencies are a nontrivial problem in Haskell, but simply getting ghc or ghci up and running is a no-brainer. We also had an online evaluator for homeworks/exams. For a beginner course I don’t think any dependency is needed.
As for the syntax, it seems to boil down to familiarity only. Anything foreign will look difficult at first, and I noticed that those who came to the course with extensive prior experience, usually in more imperative languages, often had a harder time grokking Haskell. But those starting from a “clean slate” had no more problems with it.
With that said, the exam results were abysmally bad, but not worse than what one would see in other courses, and plenty of people finished with flying colors.
I’ve found Ruby really good at getting people going on FP.
This is very interesting, and I’ve thought along similar lines (language features to mitigate supply chain vulnerabilities). I’ve written about that here, more from the context of a package manager but covering language features as well. I considered purity (i.e. lack of side effects) a powerful defence but hadn’t considered capabilities/effect systems, which are basically a more fine-grained version of purity.
I’m not sure how fundamental linear types are to the capability system in Austral. It sounds like all you need is a way to hide type constructors? I don’t understand why it would be bad if you could, for example, duplicate a NetworkCapability value.
They need to be linear types if you want to use them as a “conch shell” that gives access to an underlying system resource. If they could be duplicated then there’s no way to know that there aren’t multiple logical threads with access to it.
I can understand a resource handle being linear, and I guess you’re saying that you might want to wrap up a handle and its associated capability into a single object, which would need to be linear. But that still doesn’t require the capability to be linear - the combined handle-capability pair would inherit linearity from the handle.
If the capability is a linear type, you know whoever presents it has the right to that exclusive resource and you don’t need any further locking. If capabilities can be duplicated, you need locking.
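To make the “hide the type constructor” idea from this exchange concrete, here is a minimal Haskell sketch (not Austral, deliberately without linearity, and with made-up module and function names). It shows the access-control half of the story: clients can only obtain a NetworkCapability from the designated entry point, because the constructor is never exported. It does not give you the exclusivity argued for above, since nothing stops a holder from sharing the value across threads; that is the gap linearity closes.

module Capability
  ( NetworkCapability        -- exported abstractly: the constructor stays hidden
  , withNetworkCapability
  , openConnection
  ) where

-- Client code cannot forge a NetworkCapability, because MkNetworkCapability
-- is not in the export list above.
data NetworkCapability = MkNetworkCapability

-- Hypothetical entry point: only code that is handed the token here
-- (e.g. from main) can ever pass it further down the call graph.
withNetworkCapability :: (NetworkCapability -> IO a) -> IO a
withNetworkCapability k = k MkNetworkCapability

-- Anything that wants network access has to take the token as an argument.
openConnection :: NetworkCapability -> String -> Int -> IO ()
openConnection _cap host port =
  putStrLn ("pretending to open a connection to " ++ host ++ ":" ++ show port)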
For point 5 (don’t run EXPLAIN ANALYZE on real data), using a decently sized sandbox environment is a good idea. Even better is to use a full size copy of your production environment. Tools like draupnir and postgres.ai can provide ad-hoc, anonymised copies of your production database, and this is a massive productivity boost. Not only for debugging query performance, but for testing schema migrations and even for general development. I’ve actually been working on a modern, fast-setup, automatically-anonymising successor to draupnir - drop me a message if you’d like to know more or try it out.
I’ve seen this principle in a few different guises. In Haskell there is an approach called Scrap Your Type Classes (https://www.haskellforall.com/2012/05/scrap-your-type-classes.html), where you replace a type class (which is equivalent to a trait in Rust) with a type that represents the behaviour, e.g.
class Eq a where
  eq :: a -> a -> Bool
-- becomes:
data Eq a = Eq { eq :: a -> a -> Bool }
Instances of the class then become normal values.
instance Eq Bool where
  eq True True   = True
  eq False False = True
  eq _ _         = False
-- becomes
eqBool = Eq { eq = f }
  where
    f True True   = True
    f False False = True
    f _ _         = False
A type class constraint on a function becomes a normal function parameter, and type class instances are passed explicitly:
notEq :: Eq a => a -> a -> Bool
notEq x y = not (eq x y)
-- becomes
notEq :: Eq a -> a -> a -> Bool
notEq eqa x y = not (eq eqa x y)
The obvious downside is that you have to explicitly pass around these instance values, but the upside is that you drastically simplify the language, getting rid of an entire separate namespace, its syntax and many typechecking rules.
The problem of having to explicitly apply instances can be alleviated by introducing implicit arguments to the language - this is what Agda does, and is something I’m exploring with my own language, which is aiming to be a simplified Haskell. I’ve written a little bit about the design here: https://github.com/hmac/kite/blob/main/docs/implicits.md
Elm has received some flak for asking for this approach [1] [2] (this one is a bit inflammatory) [3]
I’m going to refrain from sharing my thoughts on whether it’s right or wrong to scrap your typeclasses since I’m both biased by the languages I know and not very knowledgeable on what is Actually Good for users of a language in this case. But I found the discourse rather interesting.
Also, notably, they kinda compile down to that under the hood.
Hey, Kite looks really interesting! You should post it as a separate submission so we can discuss and upvote :)
Thank you! It’s still an extremely rough prototype so I don’t feel ready to share it yet, but I have been planning to write some blog posts on the design of the language. I may share those here, if they seem interesting.
There is a technical definition of syntactic sugar proposed over 30 years ago by Matthias Felleisen. Syntactic sugar can be equated with his notion of “macro-definability,” which means that a feature can be implemented using a single local rewrite rule. So, “+=” is syntactic sugar because it can be implemented by the rewrite rule +=(A,B) -> =(A, +(A,B)).
Under this definition, async/await is not syntactic sugar, because it is a global program transformation, albeit a very regular one.
A += B is not equivalent to A = A + B.
x[y] += z is a bit of a special case, because the LHS is an expression, whereas assignments generally expect variables on the LHS. For example, randrange(4) += 1 is not a valid expression in Python. Typically you deal with array assignments specially, with a rule where x, y and z are fresh variables.
The paper that introduced unboxed types to Haskell is a great introduction to this concept. It describes the motivation for this feature and also how it is implemented (quite elegantly) in GHC.
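As a tiny illustration of what “unboxed” means in practice (a toy sketch, not taken from the paper): Int# is the raw machine integer inside the boxed Int, and code that unwraps the box works on it directly, with no pointer indirection or thunk.

{-# LANGUAGE MagicHash #-}
module Main where

import GHC.Exts

-- I# is the constructor that boxes a raw Int#; matching on it exposes the
-- unboxed value, and (+#) adds two unboxed ints directly.
plusOne :: Int -> Int
plusOne (I# n) = I# (n +# 1#)

main :: IO ()
main = print (plusOne 41)  -- 42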
For me, the key takeaway from this paper is the idea of separating the two parts of any data structure transformation: how you traverse the structure and what you do at each step.
Recursion schemes are reusable tactics for traversing data structures in different ways, allowing you to focus purely on what you want to do with your data. You write much less code and are shielded from bugs in repetitive traversal code because you don’t write any.
This blog post series does a great job of breaking down each tactic and showing how you can use them in Haskell: https://blog.sumtypeofway.com/posts/introduction-to-recursion-schemes.html
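For a concrete taste of that separation, here is a minimal Haskell sketch of one such scheme, the catamorphism (a generic bottom-up fold). The expression type and names are invented for illustration; the linked series covers the real library machinery.

{-# LANGUAGE DeriveFunctor #-}

-- Fix ties the recursive knot for any functor describing one layer of a structure.
newtype Fix f = Fix (f (Fix f))

-- One layer of a toy expression language; the recursive positions are the parameter r.
data ExprF r = Lit Int | Add r r
  deriving Functor

type Expr = Fix ExprF

-- The "how": a catamorphism folds any Fix f bottom-up, given an algebra for one layer.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- The "what": per-layer logic for evaluation, with no traversal code in sight.
evalAlg :: ExprF Int -> Int
evalAlg (Lit n)   = n
evalAlg (Add a b) = a + b

eval :: Expr -> Int
eval = cata evalAlg

main :: IO ()
main = print (eval (Fix (Add (Fix (Lit 1)) (Fix (Lit 2)))))  -- 3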
This distinction is very similar to the one made in this article, except it splits the module manager into two subcategories:
Language package managers, e.g. go get, which manage packages for a particular language, globally.
Project dependency managers, e.g. cargo, which manage packages for a particular language and a particular local project.
To be fair, many package managers play both roles by allowing you to install a package locally or globally. I tend to think that global package installation is an anti-pattern, and the use cases for it are better served by improving the UX around setting up local projects. For example, nix-shell makes it extremely easy to create an ad-hoc environment containing some set of packages, and as a result there’s rarely a need to use nix-env.
I tend to think that global package installation is an anti-pattern
From experience, I agree with this very strongly. Any “how to do X” tutorial that encourages you to run something like “sudo pio install …” or “sudo gem install …” is immediately very suspect. It’s such a pain in the hindquarters to cope with the mess that ends up accruing.
Honestly I’m surprised to read that this still exists in newer languages.
Back when I was hacking on Rubygems in 2008 or so it was very clear that this was a mistake, and tools like isolate and bundler were having to backport the project-local model onto an ecosystem which had spent over a decade building around a flawed global install model, and it was really ugly. The idea that people would repeat those same mistakes without the excuse of a legacy ecosystem is somewhat boggling.
Gah, this is one thing that frustrates me so much about OPAM. Keeping things scoped to a specific project is not the default, global installation of libraries is more prominently encouraged in the docs, and you need to figure out how to use a complicated, stateful workflow using global ‘switches’ to avoid getting into trouble.
One big exception…
sudo gem install bundler ;)
(Though in prod I do actually find it easier/more comfortable to just use Bundler from APT.)
The big takeaway here for me is that “yanking” a package should not be as easy as it currently is in RubyGems and other package ecosystems. Most of the disruption here was caused by all existing versions of the dependency being yanked, immediately breaking the build of countless Rails applications, and Rails itself.
Had the maintainer been forced to make a request to remove these versions which would be reviewed by the RubyGems team, they could have coordinated with Rails and others to greatly minimise disruption.
Furthermore I’d argue that outside of very extreme cases, deleting a package version from a repository should not be permitted at all. A key promise of modern package managers is that a build plan will continue to work indefinitely into the future, and deleting packages breaks that promise. This doesn’t preclude marking a version as “bad” in some way such that new build plans will not choose it.
I think this article is bad. It says things like
Abstractions with types is a bad type of abstraction because it ignores the basic fact that programs deal with data, and data has no types.
followed by some argument about how natural language supposedly has no types. That’s just not even wrong. In studies of (natural) languages you assign all kinds of ‘types’ to parts of languages, because that makes it a lot easier (!) to reason about the meanings/properties of communication.
Yes! In particular you’ve reminded me of Montague semantics, a category-theoretic approach that blends parsing and type-checking. A Montague system not only knows how to put words together, but knows how the meanings of each component contribute to the meaning of the entire utterance.
Funnily enough, there’s an even more direct link via recent research into type-theoretic modelling of natural language semantics. Whilst Montague grammar has a few coarse “types” for different grammatical constructs, this approach assigns more specific types to concepts and is a literal application of a dependent type theory. See for example this paper: https://www.stergioschatzikyriakidis.com/uploads/1/0/3/6/10363759/type-theory-natural.pdf
Yeah, that part wasn’t particularly well thought out. It seemed to me like an undeveloped argument for dynamic vs static types. That’s a separate problem, but the author seems to simply favor dynamic languages. Dynamic languages are the only ones he complimented in the article at least.
However, the surrounding argument, about the lack of maturity in the tooling and shortage of successful software projects built in Haskell is a fair argument. Yes, there’s pandoc and a few static analysis tools built in Haskell, but IMO that’s a bit of a cop out, since parsers/compilers are trivial to implement in a functional language. None of those projects say much about the effectiveness or benefit of using Haskell to solve more general software engineering problems.
I also think the criticism of Haskell using its own terminology is valid. This cuts the Haskell community off from the rest of software developers without much gain (as viewed from the outside at least). It’s fine to note the ties between the various mathematical and language concepts, but expecting a new developer to learn the plethora of terms required to even read the stdlib docs is a tall order.
Rust has some of the same features, but does a better job of introducing them with concrete examples rather than abstract type definitions. Abstract concepts are fine, but without any proven benefits it’s going to be hard to motivate people to learn and use them.
However, the surrounding argument, about the lack of maturity in the tooling and shortage of successful software projects built in Haskell is a fair argument.
That’s an argument against Haskell being a suitable programming language for certain purposes. Does that make it a bad programming language? I don’t think so, unless you show how that maturity is fundamentally impossible to achieve.
I also think the criticism of Haskell using its own terminology is valid.
That’s an argument against Haskell being easy to learn, or likely to become popular or influential. Do any of those things make it a bad programming language? Again, I don’t think so.
The post is just a collection of things the author is unhappy with, with regard to Haskell in the broadest sense: the language, platform, ecosystem, community, … Some of them are plain opinion, some are unfounded, and a few may be complaints that are generally shared, but in shades of grey.
On the whole, that makes it a bad article in my view.
First of all, it seems easy to dismiss pandoc, but it is an amazing piece of software (imho). But even without it, the following programs off the top of my head may fit the “real world haskell” examples (whatever that means): PostgREST, Nix (the package manager), Dhall, neuron. I don’t really understand why we should disregard parsers/compilers, or anything else that happens to be easier to do in a given language/paradigm.
I also think the criticism of Haskell using its own terminology is valid.
This criticism doesn’t hold any ground, from my point of view and personal experience. When you work in various domains or scientific fields, each one has its own idiosyncratic way to express similar or identical concepts, due to history, culture and the currently accepted theory. You construct a terminology and then you learn with it. I think the author was educated on various OOP languages and internalized that vocabulary as the only way to express things. Creating a network of equivalences between the various concept worlds may be bothersome, but it is probably essential to making them your own. Why do different regional variants of French use different words for the same object? If you can accept the impact of localization on the vocabulary of a natural language, why not in a programming language?
I don’t get the dialectic of “if this language is that old, it must be one of the most used”. Haskell has flaws and qualities, and it is just another programming language. I mean, Common Lisp could take the same bullet, and it has crazy tooling and, at the same time, a whole other set of issues.
This seems to be a common misconception and I’m not sure where it stems from, but Nix is written in C++.
Corrected, thanks. I guess I’ve seen too many Haskell programs deployed through Nix packages, at least in my case.
First of all, it seems easy to dismiss pandoc, but it is an amazing piece of software (imho).
Fair point – “trivial” was the wrong word to use there. Building a parser/compiler for a real world language or data format takes a huge amount of effort, and functional languages tend to be particularly good at the types of tree/graph transforms that compilers spend most of their time doing. Functional programming is definitely the right approach to solving the problem, but I’m not sure Haskell provides any advantage over any other language that encourages the functional paradigm.
That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t? I know the type system is more advanced, but what’s the benefit in that[1]? I haven’t seen a convincing example of a “real world system” that was built faster or with fewer bugs in Haskell than in any of the other functional languages I’ve mentioned. And I have heard stories of how Haskell requires nasty monad transformer juggling in some situations or has difficult-to-spot thunk leaks (which are problems that almost no other languages worry about).
I agree that every programming language has its own drawbacks, terminology, and specialized domain knowledge. I’m personally even willing to pay the upfront cost of learning that domain sometimes. For instance, I’ve recently been learning J, which encourages problem solving through vectorized transforms. But this can result in extremely compact and efficient solutions that require orders of magnitude less code and are faster than the alternative implementations. That’s a clear advantage, and even if I would never pick J to build a real world system (due to the lack of popularity/support), I’ll pick up some clever new ways of thinking about the problems and structuring the data. Plus it just makes a cool desk calculator :P
At the end of the day, I write systems to solve problems, and my choice of tools is about deciding what allows me to build it quickly and robustly. For this reason, the majority of the code I write these days is Python/Rust, even though they both have a laundry list of issues that I wish were fixed. I don’t think Haskell is a bad language (and the author could probably do with a less click-baity title, but such is the way of titling rants…). I’m sure if I bothered learning it there’s a lot of internal elegance to the language, but I don’t see a clear-cut advantage to it. Maybe I’ll learn it to build a compiler some day, but there are several other language choices I’d go with first.
[1] I’ve personally found that whenever I go crazy with trying to create extremely precise types in most languages, I eventually hit a wall in the expressiveness of the type system anyway. I think I need dependent types in some of those cases, but haven’t gotten around to learning Agda or Idris yet, so I usually just reformulate the data structure to make the type constraints simpler, or punt it to a runtime check.
That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t?
Haskell is a functional programming research language. It served as the playground where programming language theory researchers could experiment with some of the ideas that, when mature, could be adapted by those younger and more strictly production-focused languages.
That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t?
Your mileage may vary (besides, Haskell is, if I am right, older than all of those languages). Don’t want to depend on .NET or the JVM? Want access to a GC rather than managing memory yourself? Honestly, functional-paradigm mechanisms and idioms have percolated into a lot more languages than ten years ago. For example, I don’t know jack about the JVM ecosystem, and as much as I like Clojure or Scala or Kotlin, it is a whole ecosystem to learn just to find the equivalents of my non-JVM libraries or wrappers of choice. We live in a time of choices for most of the problems we want to solve; know the trade-offs, and that’s it. It is nice to see the influence(-ish) of the functional paradigm on more recent languages. Don’t like the trade-offs you see? Choose something else. I will never blame anyone for saying “I don’t see any advantage for me in using this, I will take that”.
I relate to your experience. I worked in Python and R, and learned enough C++ to edit programs when needed to solve problems at work. But I have always looked around a bit everywhere to expand my mindset/domain knowledge. I also dabbled a bit with J: it is fun and concise, and the lack of support stopped me on my way too. But it was fun, and I don’t see why I would be entitled to rant about the state of the J ecosystem or language. I took my new knowledge and tried to see if it helped me improve my numpy chops. I am no Haskell advocate, my knowledge is mostly read-only, but I really like the abstractions proposed by the language. Right now, I have settled on the inverse of the problem-solving approach and decided to learn Raku. It is slow-paced, fun and super expressive. It is my anti-“get the thing done now” language, because I want to have fun with it (others will solve real problems with it).
I don’t have an answer on why Haskell for everything because, like you, I don’t think there is one. But if I had to build something similar to pandoc, PostgREST or Dhall, sure, Haskell would pop into my mind due to my exposure to them. Maybe it will never fit the bill for you, and honestly that’s ok. A hell of a lot of stuff can be done with Python/Rust, with their popularity and communities.
Have you looked at ZFS Datasets for NixOS? I always do something like this on my boxes.
Also, as for pool options for SSD boot pools, here’s what I generally use:
Note that ashift=13 will give you good performance for SSDs, and is the only pool option that can’t be changed after the fact.
Then I can set the datasets I want to mount (/, /nix, /var, /home, and others) as canmount=on and mountpoint=legacy. Setting up datasets like this will help you ridiculously for backups (check out services.sanoid). Then of course you can do dedicated datasets for containers and such too.
Oh, also, get a load of this, which happened on my laptop running a similar ZFS setup while I was working on androidenv and probably had several dozen Android SDKs built in my Nix store:
$ nix-collect-garbage -d
75031 store paths deleted, 215436.41 MiB freed
What’s funny is, after that, I had ~180 GB free on my SSD. Due to ZFS compression of my Nix store, I ended up with more being deleted than could fit on my disk…
Would it be a good idea to add that as a cronjob perhaps? What would be the downside?
A normal garbage collection is a great cronjob. The exact command numinit gave deletes old generations, which may be surprising in the worst ways when trying to undo bad configs.
I think you can also set up the Nix daemon to automatically optimize the store. It’s buried in the NixOS options somewhere.
Nice, I didn’t know about that. The setting is nix.gc.automatic, by the looks of it.
“It’s buried in the NixOS options somewhere” is going to be both a blessing and curse of this deployment model >.>
Here’s hoping people document their flakes well.
Reading this made me realize that the great divide in functional programming goes deeper than I thought. (Caution: half-baked thoughts ahead.)
Typed FP embodies set theory, so it traces back to Whitehead & Russell with their Principia Mathematica and the effort to place mathematics on a firm foundation. It is axiomatic in nature.
Dynamic FP embodies lambda calculus, which is all about constructions. It traces back to Church and Turing.
No wonder they can’t get along.
That’s an interesting perspective! Though it was Church himself who introduced the Simply Typed Lambda Calculus in 1940, so it seems like you could conclude that he, too, was keen to put these systems on a firm logical footing.
McCarthy stresses in his ACM paper that LISP has (general) recursive, partial functions - something Church and his contemporaries were determined to avoid. To this end he includes a form
label(a, e)
where a is a name given to e which is then bound within e - i.e. this is a sort of fixed point operator.
I don’t know if these ideas were derived from earlier work or if he came up with them all himself, but it seems to me they’re quite a distinct contribution from the efforts of Church et al, with a very different goal in mind.
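For readers who haven’t met the idea, “a sort of fixed point operator” can be made concrete with a few lines of Haskell (just an illustration in modern notation, not McCarthy’s): label lets you name an expression inside itself, which is roughly what an explicit fix combinator gives you.

-- fix f is the least fixed point of f: fix f = f (fix f).
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial without direct self-reference: take "the recursive call" as a
-- parameter and tie the knot with fix.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- 120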
I find bulk find-and-replace to be too risky or fuzzy for me. I prefer to go match-by-match and confirm the replace. My approach is usually something like:
git grep -l something-regexp | xargs nvim
which will open vim with every file that matched that regexp, and then I can do a traditional find/replace flow on each file.
The article talks about using the c flag on vim substitutions to have it ask for confirmation on each case, e.g.
:%s/foo/bar/gc
will do a global search and replace with confirmation.