I think it really depends on how you define coding. The researchers seem to have defined coding as programming Rock-Paper-Scissors in Python, which strikes me as a rather poor definition. For example, debugging complex programs is a substantial part of coding. Debugging, at its core, is raw science - the application of the scientific method to figure out how a part of the world works (or does not work). A love for rigorous, thorough thinking is a prerequisite for excellence in programming; I have never seen anyone succeed as a programmer who lacked it. I wouldn’t call it mathematical thinking, because you could be terrible at math - simply because you never invested much time in learning the rules and theorems - and still be great at coding. It is also a misunderstanding that math knowledge is a prerequisite for coding. It certainly helps, but a lot of programs don’t require more math than basic arithmetic. Overall, I think you have to be intellectually rigorous and curious first and foremost, and a linguist only second.

What is that “math ability” they are talking about? :)

The real issue is that modern math has little to do with the stuff taught in high school, or even in typical introductory calculus classes. A lot of it is more about questions like “what set of axioms can describe this thing” and “what else would it be applicable to”. I recently realized that the reason monad tutorials that compare monads to burritos proliferate is that neither the language of basic modern algebra (where modern = late 19th century) nor its way of thinking about things has made it to the mass consciousness—yet. I’m pretty sure it will.

The real reason monads are commonly not understood is that the set of people who understand monads and the set of people who can explain concepts plainly to normal people are almost completely disjoint. For example, let’s run a quick experiment: try to find out from Wikipedia what a monad is, just following the links and keywords:

A monad is a certain type of endofunctor.

Endofunctor: A functor that maps a category to that same category; e.g., polynomial functor.

In mathematics, specifically category theory, a functor is a map between categories.

In mathematics, a category (sometimes called an abstract category to distinguish it from a concrete category) is a collection of “objects” that are linked by “arrows”.

…

It goes on and on. It seems like I have to learn an entire branch of mathematics just to understand one simple concept. A programmer who hasn’t had extensive formal mathematical training at university level is unlikely to ever get it. It is inexcusable how bad the Wikipedia explanation and most of the other explanations are. How about trying to explain a concept without pulling in an entire branch of mathematics as a dependency?

It is possible to explain the monad in much more understandable terms. For example, you could say that a monad is like a tagged union which can be passed to a function that performs an operation on it and returns the modified tagged union. Suddenly every C programmer gets it and the magic is gone.

You have to explain concepts in terms of things your audience already knows, and it’s sad that many people who could pass on treasure troves of knowledge don’t understand this.

The paper that introduced monads doesn’t use any category theory jargon. It discusses the monad laws in terms that everyone with basic training in modern math knows. The great thing about that paper is that it starts with motivating examples instead of definitions, which is where most monad tutorials fail miserably.

However, the laws are important. You still have to explain, somehow, that not everything that takes a tagged union and returns a modified tagged union is a monad. Algebra 101 gives you a framework to discuss those things precisely without even mentioning implementation details.

Reading that paper is on my todo list, so based on your recommendation I’ll probably move it up a bit.

Yes, I agree that axioms and definitions are important, it’s just that I think an explanation should first establish an intuition for the concept. Very few people can read axioms and understand a new concept from first principles. Ironically, first principles are often developed only after researchers in the field have gained a good intuition for the concept.

It’s quite funny that many classic FP papers are nothing like the category-theory-jargon-filled Wikipedia pages and blog posts.

Less funny is that those jargon-filled Wikipedia pages scare me away from even reading those papers, since I assume they’ll be even more obtuse. But perhaps that is just a personal failing of mine.

What is that “math ability” they are talking about? :)

Numeracy was assessed using a Rasch-Based Numeracy Scale, which was created by evaluating 18 numeracy questions across multiple measures and determining the 8 most predictive items²³. The test was computerized and untimed.

23: https://onlinelibrary.wiley.com/doi/full/10.1002/bdm.1751

…which describes numeracy as “the ability to understand, manipulate, and use numerical information, including probabilities.”

That’s my point. I’m not saying “numeracy” is unimportant, but it has little to do with programming, or many branches of modern math for that matter.

I would be more interested to see how it correlates with ability to work with formal logic quantifiers. It can be assessed without funny symbols by asking nonsense questions like “All programmers are rabbits. Some rabbits smoke peppermint. Is it true that a) all programmers smoke peppermint b) at least one programmer smokes peppermint c) all rabbits are programmers”.

I know it was just a silly example, but phrasing the question like this may be unhelpful, as it could imply that each statement must be either true or false, while the correct response for a), b) & c) should be “we don’t know / not enough information”.

n.b. This article references a paper posted here on lobsters four days ago. Some details in the paper that were glossed over by the article’s editorializing:

“[An early cognitive model proposed by Shneiderman and Mayer, and our data] converge to suggest that learning to program and writing programs depend upon somewhat different cognitive abilities.”

“All participants were right-handed native English speakers with no exposure to a second natural language before the age of 6 years.”

Thank you for pointing that out.

This makes a lot of sense, now that I hear it. I’m impressed that the researchers thought to study this.

Language aptitude predicts learning rate but not programming accuracy, which is better predicted by general cognition. Still, it makes sense that speakers of multiple languages with more rigorous grammars (i.e. not English, which is very sloppy) would have an easier time picking up the syntax of a programming language. But writing good code requires more than just language mastery: the language-agnostic component (e.g. algorithms) would likely be better predicted by math aptitude.

The grammar of English is no more sloppy than the grammar of any other language. There are absolutely strings of words that you can put together in ways that native English speakers would find undeniably ungrammatical, and it takes books hundreds of pages long to even start to describe the grammatical rules that native English speakers understand in all their complexity, just like for any other language you can name. It’s a truism in linguistics that all human languages are equally complex and just spread their complexity across different parts of their grammatical systems - I’m not sure I believe this, since I don’t think linguists actually have a good way to rigorously quantify some notion of “grammatical complexity” and compare it across human languages. But I don’t think there’s any notion of “sloppiness” in grammar such that you can say that English has more or less of it than other languages.
