1. 9

There is a technical definition of syntactic sugar proposed over 30 years ago by Matthias Felleisen. Syntactic sugar can be equated with his notion of “macro-definability,” which means that a feature can be implemented using a single local rewrite rule. So, “+=” is syntactic sugar because it can be implemented by the rewrite rule `+=(A,B) -> =(A, +(A,B))`.
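This kind of single local rewrite can be sketched mechanically. Here is a toy Python illustration (my own construction, not Felleisen's formulation) that desugars `+=` on plain variables using the `ast` module:

```python
# A toy desugaring of `+=` on plain variables, using Python's ast module
# to apply the single local rewrite  A += B  ->  A = A + B.
import ast

class DesugarAugAssign(ast.NodeTransformer):
    def visit_AugAssign(self, node):
        # Only rewrite the simple-variable case; no global analysis needed.
        if isinstance(node.target, ast.Name):
            return ast.copy_location(ast.Assign(
                targets=[ast.Name(id=node.target.id, ctx=ast.Store())],
                value=ast.BinOp(
                    left=ast.Name(id=node.target.id, ctx=ast.Load()),
                    op=node.op,
                    right=node.value)), node)
        return node

tree = ast.fix_missing_locations(
    DesugarAugAssign().visit(ast.parse("a = 1\na += 2")))
ns = {}
exec(compile(tree, "<demo>", "exec"), ns)
assert ns["a"] == 3
```

The rewrite is purely local: it looks at one node at a time, which is exactly what makes it "macro-definable" in this sense.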

Under this definition, async/await is not syntactic sugar, because it is a global program transformation, albeit a very regular one.

1. 3

`A += B` is not equivalent to `A = A + B`.

``````from random import randrange

arr = [0,1,2,3]
arr[randrange(4)] += 1
``````
1. 2

`x[y] += z` is a bit of a special case, because the LHS is an expression, whereas assignments generally expect variables on the LHS. For example, `randrange(4) += 1` is not a valid statement in Python.
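A quick way to see that the index expression must be evaluated exactly once is to swap `randrange` for an instrumented index function (`noisy_index` here is made up for illustration):

```python
# Count how many times the index expression is evaluated during `+=`.
calls = []

def noisy_index():
    calls.append(1)   # record one evaluation of the index expression
    return 2

arr = [0, 1, 2, 3]
arr[noisy_index()] += 1   # augmented assignment: index evaluated once

assert arr == [0, 1, 3, 3]
assert len(calls) == 1    # a naive `arr[i] = arr[i] + 1` rewrite would give 2
```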

Typically you deal with array assignments specially, with a rule such as

``````A[B] += C  ->  x    = A
               y    = B
               z    = x[y] + C
               x[y] = z
``````

where `x`, `y` and `z` are fresh variables.
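Expanding that rule by hand for the earlier Python example, with `x`, `y`, `z` as the fresh variables, gives something like:

```python
# Hand expansion of `arr[randrange(4)] += 1` using the rule above.
from random import randrange

arr = [0, 1, 2, 3]

x = arr            # evaluate the container once
y = randrange(4)   # evaluate the index once
z = x[y] + 1       # read, then add
x[y] = z           # write back to the same cell

assert sum(arr) == 7   # exactly one element was incremented
```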

1. 2

The paper that introduced unboxed types to Haskell is a great introduction to this concept. It describes the motivation for this feature and also how it is implemented (quite elegantly) in GHC.

1. 3

For me, the key takeaway from this paper is the idea of separating the two parts of any data structure transformation: how you traverse the structure and what you do at each step.

Recursion schemes are reusable tactics for traversing data structures in different ways, allowing you to focus purely on what you want to do with your data. You write much less code and are shielded from bugs in repetitive traversal code because you don’t write any.
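As a rough illustration (in Python rather than Haskell, with made-up names), a catamorphism separates the traversal from the per-node action:

```python
# A minimal sketch of a recursion scheme (a catamorphism, i.e. a fold).
# The traversal logic lives in `cata`; the "what to do at each step"
# lives entirely in the algebra passed to it.

def cata(algebra, node):
    """Bottom-up fold: recurse into children, then apply the algebra once."""
    kind, *args = node
    if kind == "lit":
        return algebra(("lit", args[0]))
    left = cata(algebra, args[0])
    right = cata(algebra, args[1])
    return algebra((kind, left, right))

def evaluate(node):
    """One algebra: evaluate an arithmetic expression node."""
    kind, *args = node
    if kind == "lit":
        return args[0]
    a, b = args
    return a + b if kind == "add" else a * b

expr = ("add", ("lit", 1), ("mul", ("lit", 2), ("lit", 3)))
assert cata(evaluate, expr) == 7
```

Swapping in a different algebra (say, one that pretty-prints) reuses the same traversal untouched, which is the point the paper makes.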

This blog post series does a great job of breaking down each tactic and showing how you can use them in Haskell: https://blog.sumtypeofway.com/posts/introduction-to-recursion-schemes.html

1. 7

This distinction is very similar to the one made in this article, except it splits the module manager into two subcategories:

• Language package managers, e.g. `go get`, which manage packages for a particular language, globally.
• Project dependency managers, e.g. `cargo`, which manage packages for a particular language and a particular local project.

To be fair, many package managers play both roles by allowing you to install a package locally or globally. I tend to think that global package installation is an anti-pattern, and the use cases for it are better served by improving the UX around setting up local projects. For example, `nix-shell` makes it extremely easy to create an ad-hoc environment containing some set of packages, and as a result there’s rarely a need to use `nix-env`.

1. 13

I tend to think that global package installation is an anti-pattern

From experience, I agree with this very strongly. Any “how to do X” tutorial that encourages you to run something like “sudo pio install …” or “sudo gem install …” is immediately very suspect. It’s such a pain in the hindquarters to cope with the mess that ends up accruing.

1. 3

Honestly I’m surprised to read that this still exists in newer languages.

Back when I was hacking on Rubygems in 2008 or so it was very clear that this was a mistake, and tools like isolate and bundler were having to backport the project-local model onto an ecosystem which had spent over a decade building around a flawed global install model, and it was really ugly. The idea that people would repeat those same mistakes without the excuse of a legacy ecosystem is somewhat boggling.

1. 3

Gah, this is one thing that frustrates me so much about OPAM. Keeping things scoped to a specific project is not the default, global installation of libraries is the more prominently encouraged route in the docs, and you need to figure out how to use a complicated, stateful workflow built around global ‘switches’ to avoid getting into trouble.

1. 3

One big exception… `sudo gem install bundler` ;)

(Though in prod I do actually find it easier/more comfortable to just use Bundler from APT.)

1. 4

The big takeaway here for me is that “yanking” a package should not be as easy as it currently is in RubyGems and other package ecosystems. Most of the disruption here was caused by all existing versions of the dependency being yanked, immediately breaking the build of countless Rails applications, and Rails itself.

Had the maintainer been forced to make a request to remove these versions which would be reviewed by the RubyGems team, they could have coordinated with Rails and others to greatly minimise disruption.

Furthermore I’d argue that outside of very extreme cases, deleting a package version from a repository should not be permitted at all. A key promise of modern package managers is that a build plan will continue to work indefinitely into the future, and deleting packages breaks that promise. This doesn’t preclude marking a version as “bad” in some way such that new build plans will not choose it.
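The policy in that last sentence, where a yanked version stays fetchable for existing build plans but is invisible to new ones, can be sketched as a toy resolver (all names and data here are illustrative):

```python
# Toy "yank" semantics: yanked versions remain available to lockfiles
# that already pin them, but are skipped when computing new build plans.
versions = {"1.0.0": {"yanked": False}, "1.1.0": {"yanked": True}}

def resolve_new(available):
    """Pick the newest non-yanked version for a fresh build plan."""
    candidates = [v for v, meta in available.items() if not meta["yanked"]]
    return max(candidates)  # naive: string max stands in for semver ordering

def fetch_locked(available, pinned):
    """A pinned (locked) version is still fetchable even if yanked."""
    return pinned if pinned in available else None

assert resolve_new(versions) == "1.0.0"          # new plans avoid the yank
assert fetch_locked(versions, "1.1.0") == "1.1.0"  # old plans keep working
```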

1. 42

Abstractions with types is a bad type of abstraction because it ignores the basic fact that programs deal with data, and data has no types.

followed by some argument about how natural language supposedly has no types. That’s just not even wrong. In studies of (natural) languages you assign all kinds of ‘types’ to parts of languages, because that makes it a lot easier (!) to reason about the meanings/properties of communication.

1. 11

Yes! In particular you’ve reminded me of Montague semantics, a category-theoretic approach that blends parsing and type-checking. A Montague system not only knows how to put words together, but knows how the meanings of each component contribute to the meaning of the entire utterance.
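As a very loose illustration (a toy of my own, far simpler than a real Montague grammar), composition can be modelled as type-checked function application over word meanings:

```python
# Toy Montague-style composition: each meaning carries a type
# ("e" for entities, "e->t" for predicates), and combining words
# is function application that also checks the types line up.
def apply(fn, arg):
    fn_type, fn_meaning = fn
    arg_type, arg_meaning = arg
    expected, result = fn_type.split("->")
    assert arg_type == expected, "type mismatch"
    return (result, fn_meaning(arg_meaning))

john = ("e", "john")
runs = ("e->t", lambda x: x in {"john", "mary"})

# [[John runs]] = runs(john), yielding a truth value of type t
sentence_type, sentence_meaning = apply(runs, john)
assert sentence_type == "t" and sentence_meaning is True
```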

1. 4

Funnily enough, there’s an even more direct link via recent research into type-theoretic modelling of natural language semantics. Whilst Montague grammar has a few coarse “types” for different grammatical constructs, this approach assigns more specific types to concepts and is a literal application of a dependent type theory. See for example this paper: https://www.stergioschatzikyriakidis.com/uploads/1/0/3/6/10363759/type-theory-natural.pdf

2. 8

Yeah, that part wasn’t particularly well thought out. It seemed to me like an undeveloped argument for dynamic vs static types. That’s a separate problem, but the author seems to simply favor dynamic languages. Dynamic languages are the only ones he complimented in the article at least.

However, the surrounding argument, about the lack of maturity in the tooling and shortage of successful software projects built in Haskell is a fair argument. Yes, there’s pandoc and a few static analysis tools built in Haskell, but IMO that’s a bit of a cop out, since parsers/compilers are trivial to implement in a functional language. None of those projects say much about the effectiveness or benefit of using Haskell to solve more general software engineering problems.

I also think the criticism of Haskell using its own terminology is valid. This cuts the Haskell community off from the rest of software developers without much gain (as viewed from the outside at least). It’s fine to note the ties between the various mathematical and language concepts, but expecting a new developer to learn the plethora of terms required to even read the stdlib docs is a tall order.

Rust has some of the same features, but does a better job of introducing them with concrete examples rather than abstract type definitions. Abstract concepts are fine, but without any proven benefits it’s going to be hard to motivate people to learn and use them.

1. 7

However, the surrounding argument, about the lack of maturity in the tooling and shortage of successful software projects built in Haskell is a fair argument.

That’s an argument against Haskell being a suitable programming language for certain purposes. Does that make it a bad programming language? I don’t think so, unless you show how that maturity is fundamentally impossible to achieve.

I also think the criticism of Haskell using its own terminology is valid.

That’s an argument against Haskell being easy to learn, likely to become popular or influential. Do any of those things make it a bad programming language? Again, I don’t think so.

The post is just a collection of things the author is unhappy about with regard to Haskell in the broadest sense: the language, platform, ecosystem, community, and so on. At least part of them are plain opinion, another part is unfounded, and a few things may be complaints that are generally shared, but in shades of grey.

On the whole, that makes it a bad article in my view.

1. 3

Yes, there’s pandoc and a few static analysis tools built in Haskell, but IMO that’s a bit of a cop out, since parsers/compilers are trivial to implement in a functional language. None of those projects say much about the effectiveness or benefit of using Haskell to solve more general software engineering problems.

First of all, pandoc seems easy to dismiss, but it is an amazing piece of software (imho). Even without it, the following programs off the top of my head may fit the “real world Haskell” bill (whatever that means): PostgREST, Nix, the package manager, Dhall, neuron. I don’t really understand why we should disregard parsers/compilers, or anything else that happens to be easier to do in a given language/paradigm.

I also think the criticism of Haskell using its own terminology is valid.

This criticism doesn’t hold any ground in the article, from my point of view and personal experience. When you work in various domains or scientific fields, each one has its own idiosyncratic way of expressing similar or identical concepts, due to history, culture, and the currently accepted theory. You construct a terminology, and then you learn with it. I think the author was educated on various OOP languages and internalized that vocabulary as the only way to express things. Building a network of equivalences between the various conceptual worlds may be bothersome, but it’s probably essential to making a field your own. Why do the various regional variations of French use different words for the same object? If you can accept the impact of localization on the vocabulary of a natural language, why not in a programming language?

I don’t get the dialectic of “if this language is that old, it must be one of the most used”. Haskell has flaws and qualities; it is just another programming language. I mean, Common Lisp could take the same bullet: it has crazy tooling and, at the same time, a whole other set of issues.

1. 7

Nix, the package manager

This seems to be a common misconception and I’m not sure where it stems from, but Nix is written in C++.

1. 2

Corrected, thanks. I’ve seen too many Haskell programs deployed through Nix packages, it seems.

2. 7

Yes, there’s pandoc and a few static analysis tools built in Haskell, but IMO that’s a bit of a cop out, since parsers/compilers are trivial to implement in a functional language. None of those projects say much about the effectiveness or benefit of using Haskell to solve more general software engineering problems.

First of all, pandoc seems easy to dismiss, but it is an amazing piece of software (imho).

Fair point – “trivial” was the wrong word to use there. Building a parser/compiler for a real world language or data format takes a huge amount of effort, and functional languages tend to be particularly good at the types of tree/graph transforms that compilers spend most of their time doing. Functional programming is definitely the right approach to solving the problem, but I’m not sure Haskell provides any advantage over any other language that encourages the functional paradigm.

That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t? I know the type system is more advanced, but what’s the benefit in that[1]? I haven’t seen a convincing example of a “real world system” that was built faster or with fewer bugs in Haskell than in any of the other functional languages I’ve mentioned. And I have heard stories of how Haskell requires nasty monad-transformer juggling in some situations, or has difficult-to-spot thunk leaks (problems that almost no other language has to worry about).

I agree that every programming language has its own drawbacks, terminology, and specialized domain knowledge. I’m personally even willing to pay the upfront cost of learning that domain sometimes. For instance, I’ve recently been learning J, which encourages problem solving through vectorized transforms. This can result in extremely compact and efficient solutions that require orders of magnitude less code and are faster than the alternative implementations. That’s a clear advantage, and even though I would never pick J to build a real world system (due to the lack of popularity/support), I’ll pick up some clever new ways of thinking about problems and structuring data. Plus it just makes a cool desk calculator :P

At the end of the day, I write systems to solve problems, and my choice of tools is about deciding what allows me to build it quickly and robustly. For this reason, the majority of the code I write these days is Python/Rust, even though they both have a laundry list of issues that I wish were fixed. I don’t think Haskell is a bad language (and the author could probably do with a less click-baity title, but such is the way of titling rants…). I’m sure if I bothered learning it there’s a lot of internal elegance to the language, but I don’t see a clear-cut advantage to it. Maybe I’ll learn it to build a compiler some day, but there are several other language choices I’d go with first.

[1] I’ve personally found that whenever I go crazy with trying to create extremely precise types in most languages, I eventually hit a wall in the expressiveness of the type system anyway. I think I need dependent types in some of those cases, but haven’t gotten around to learning Agda or Idris yet, so I usually just reformulate the data structure to make the type constraints simpler, or punt it to a runtime check.

1. 4

That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t?

Haskell is a functional programming research language. It served as the playground where programming language theory researchers could experiment with some of the ideas that, when mature, could be adapted by those younger and more strictly production-focused languages.

1. 3

That’s my main issue with Haskell. What is it providing that Rust/Swift/Kotlin/Scala/Clojure/F# aren’t?

Your mileage may vary (besides, Haskell, if I’m right, is older than all those languages). Don’t want to depend on .NET or the JVM? Want a GC so you don’t have to manage memory yourself? Honestly, functional mechanisms and idioms have percolated into far more languages than ten years ago. For example, I don’t know jack about the JVM ecosystem, and as much as I like Clojure, Scala, or Kotlin, it’s a whole ecosystem to learn before I can find the equivalents of my non-JVM libraries or wrappers of choice. We live in a time of choice for most of the problems we want to solve; know the trade-offs and that’s it. It’s nice to see the influence of the functional paradigm on more recent languages. Don’t like the trade-offs you see? Choose something else. I will never blame anyone for saying “I don’t see any advantage for me in using this, so I’ll take that instead”.

I relate to your experience. I worked in Python and R, and learned enough C++ to edit programs when needed to solve problems at work. But I’ve always looked around a bit everywhere to expand my mindset/domain knowledge. I also dabbled a bit with J: it’s fun and concise, and the lack of support stopped me along the way too. But it was fun, and I don’t see why I would be entitled to rant about the state of the J ecosystem or language. I took my new knowledge and tried to see if it could improve my numpy chops. I’m no Haskell advocate, and my knowledge of it is mostly read-only, but I really like the abstractions the language proposes. Right now I’ve settled on the inverse of the problem-solving approach and decided to learn Raku. It’s slow-paced, fun, and super expressive. It’s my anti-“get the thing done now”, because I want to have fun with it (others will solve real problems with it).

I don’t have an answer to “why Haskell for everything?” because, like you, I don’t think there is one. But if I had to build something similar to pandoc, PostgREST or Dhall, sure, Haskell would pop into my mind due to my exposure to them. Maybe it will never fit the bill for you, and honestly that’s ok. A hell of a lot of stuff can be done with Python/Rust, given their popularity and communities.

1. 2

Have you looked at ZFS Datasets for NixOS? I always do something like this on my boxes.

Also, as for pool options for SSD boot pools, here’s what I generally use:

``````zpool create \
  -o ashift=13 -o autoexpand=on -o autotrim=on \
  -O canmount=off -O mountpoint=none \
  -O compression=on -O xattr=sa -O acltype=posixacl \
  -O atime=off -O relatime=on -O checksum=fletcher4 \
  tank /dev/disk/by-partuuid/<UUID>
``````

Note that ashift=13 (2^13 = 8 KiB sectors) will give you good performance for SSDs, and is the only pool option that can’t be changed after the fact.

Then I can set the datasets I want to mount (/, /nix, /var, /home, and others) as canmount=on and mountpoint=legacy. Setting up datasets like this will help you ridiculously for backups (check out services.sanoid). Then of course you can do dedicated datasets for containers and such too.

Oh, also, get a load of this, which happened on my laptop running a similar ZFS setup while I was working on androidenv and probably had several dozen Android SDKs built in my Nix store:

``````\$ nix-collect-garbage -d
75031 store paths deleted, 215436.41 MiB freed
``````

What’s funny is, after that, I had ~180 GB free on my SSD. Due to ZFS compression of my Nix store, I ended up with more being deleted than could be on my disk…

1. 1

Would it be a good idea to add that as a cronjob perhaps? What would be the downside?

1. 1

A normal garbage collection is a great cronjob. The exact command numinit gave deletes old generations, which may be surprising in the worst ways when trying to undo bad configs.

1. 1

I think you can also set up the Nix daemon to automatically optimize the store. It’s buried in the NixOS options somewhere.

1. 1

Nice, I didn’t know about that. The setting is `nix.gc.automatic`, by the looks of it.

1. 1

“It’s buried in the NixOS options somewhere” is going to be both a blessing and curse of this deployment model >.>

Here’s hoping people document their flakes well.

1. 1

Reading this made me realize that the great divide in functional programming goes deeper than I thought. (Caution: half-baked thoughts ahead.)

Typed FP embodies set theory, so it traces back to Whitehead & Russell with their Principia Mathematica and the effort to place mathematics on a firm foundation. It is axiomatic in nature.

Dynamic FP embodies lambda calculus, which is all about constructions. It traces back to Church and Turing.

No wonder they can’t get along.

1. 2

That’s an interesting perspective! Though it was Church himself who introduced the Simply Typed Lambda Calculus in 1940, so it seems like you could conclude that he, too, was keen to put these systems on a firm logical footing.

McCarthy stresses in his ACM paper that LISP has (general) recursive, partial functions - something Church and his contemporaries were determined to avoid. To this end he includes a form

`label(a, e)`

where `a` is a name given to `e` which is then bound within `e`; that is, this is a sort of fixed-point operator.

I don’t know if these ideas were derived from earlier work or if he came up with them all himself, but it seems to me they’re quite a distinct contribution from the efforts of Church et al, with a very different goal in mind.
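McCarthy's `label` form can be imitated in a modern language. Here is a hedged Python sketch (my own construction, not McCarthy's definition) where `label` ties the recursive knot, so the expression can refer to itself by name without any top-level definition:

```python
# A sketch of label(a, e) as a fixed-point helper: `label` hands the
# expression a reference to itself, enabling general recursion.
def label(build):
    """build receives the function being defined and returns its body."""
    def self_ref(*args):
        return build(self_ref)(*args)
    return self_ref

# Roughly: label(fact, lambda n: 1 if n == 0 else n * fact(n - 1))
fact = label(lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1))
assert fact(5) == 120
```

Note that this relies on call-by-value laziness via the inner lambda; a naive eager fixed point would recurse forever, which is exactly the kind of partiality Church's contemporaries were trying to rule out.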

1. 1

I find bulk find-and-replace to be too risky or fuzzy for me. I prefer to go match-by-match and confirm the replace. My approach is usually something like:

``````git grep -l something-regexp | xargs nvim
``````

Which will open vim with every file that matched that regexp, and then I can do a traditional find/replace flow on each file.

1. 3

The article talks about using the `c` flag on vim substitutions to have it ask for confirmation on each case.

e.g. `:%s/foo/bar/gc` will do a global search and replace with confirmation