I very much disagree. Languages can change a lot more than just the syntax. Sure, switching between Python and Ruby (or any other imperative language) is trivial for anyone with significant experience using either, but switching to a language that works in a different paradigm will require you to change how you think and to find different solutions. Anyone who has moved from imperative to (purely) functional programming will tell you that it’s like learning to program all over again.
Languages can impose totality requirements, and having to write a total program to solve a specific problem will make you think very hard about the problem you’re supposed to solve. The language will force you to think of every possible case, and you will often need to go back to a stakeholder for specification requirements because of it.
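To sketch what that forcing looks like, here is a hypothetical Haskell example (the type and handler are invented for illustration): with GHC’s incomplete-pattern warning enabled, a handler that forgets a constructor gets flagged, so every case has to be decided up front.

```haskell
-- Hypothetical example: a total handler for a small request type.
-- Omitting any equation below would trip GHC's -Wincomplete-patterns,
-- which is exactly the "think of every possible case" pressure.
data Request = Create | Update | Delete | Unknown String

respond :: Request -> String
respond Create      = "created"
respond Update      = "updated"
respond Delete      = "deleted"
respond (Unknown s) = "cannot handle: " ++ s  -- the case you'd ask a stakeholder about

main :: IO ()
main = putStrLn (respond (Unknown "archive"))
```

The `Unknown` case is the interesting one: the compiler won’t let you pretend it can’t happen, so the question of what to do with it lands on the specification.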
Choosing a language with dynamic types and/or type coercion will require a lot of discipline to write tests for every part of the program. If someone on the team fails to adhere to that test discipline for some part of the implementation, you might find yourself with parts of your program you can’t refactor. Not being able to refactor will lead to a drop in velocity and, in turn, to delays in shipping the product. At worst, a lack of test discipline will require a full rewrite of an otherwise impossible-to-refactor program, and even more delays or missed sprint goals.
Language matters and it matters a lot.
I agree that other languages can make you think differently and impose various implementation practices, but I don’t think that’s what the article is talking about. (The title of the article is a bit provocative with the use of “syntax”. I think it means “syntax and semantics”, but using both in the title isn’t as catchy.)
Whether you use C or Haskell doesn’t matter. What matters is how you express a problem and the solution via the program. In short, good programming transcends any programming language.
Haskell code written like good C code is not good Haskell code. C code written like good Haskell code is not good C code. As the paradigms go further afield (SQL, Prolog, Forth) the tradeoffs change.
It’s not about the idioms of the language so much as it’s “what data structure makes sense here,” “how can I decompose this problem into concurrent processes,” “how do I adapt this idea that I know to cut with the grain of this language so someone else can use this work later.” This kind of stuff transcends language, and I’d argue that it’s the important stuff. No one is arguing that learning both Haskell and C isn’t valuable, but I think the article is arguing that putting undue emphasis on proficiency in a particular language or syntax is missing the forest for the trees - even when evaluating yourself or others critically.
Data structures especially are vastly different in C and Haskell. There are whole books written about lazy and pure data structures, which rely on very specific guarantees provided by the language (and runtime). They are rather hard to write in languages that don’t impose those semantics.
While many of them could be implemented in C, one could argue that such a development comes close to developing a whole different sublanguage.
Data structures are very different in an immutable language like Haskell. In Haskell it is quite common to use finger trees or zip lists. These immutable structures are a BAD IDEA if you’re not using a pure language where the compiler can inline aggressively, but they are nice and fast in Haskell. Concurrent/parallel programming in a pure functional language is very different from what you would do in Ruby, so almost everything you learnt working with pthreads is useless. Concurrency abstractions are different. Haskell programmers don’t use mutexes, for one, preferring software transactional memory or MVars.
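For instance, here is a minimal STM sketch (assuming GHC with the `stm` package; the account setup is invented for the example): two balance updates composed into one atomic transaction, with no mutex in sight.

```haskell
import Control.Concurrent.STM

-- Move `amount` between two shared balances. `atomically` runs the
-- whole block as a single transaction; there is no lock to acquire,
-- and therefore no lock to forget or to deadlock on.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```

The pthreads habit of pairing every shared structure with a lock simply has no counterpart here, which is the sense in which the old knowledge doesn’t transfer.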
I believe you’re vastly underestimating how much a language can change everything. Out of curiosity, do you have significant work experience in a non-imperative language?
Lest this turn into a discussion that goes nowhere, I want to attempt to illustrate what I think the article is getting at with this statement:
Concurrent/parallel programming in a pure functional language is very different from what you would do in Ruby
I think the article is pointing out that it’s important to know the concepts of concurrent/parallel programming and know how to use them independent of the programming language. Sure, you’ll use them differently in Haskell and Ruby, but the principles remain. Crudely speaking, concurrency is a kind of API, and languages just realize it in different ways.
I’m learning Haskell right now and I’m not learning much that I didn’t already know, nor is it changing my perspective on software development that much. The reason for this is that years (ages?) ago I studied things like GADTs, lazy evaluation, and type theory outside the world of any particular programming language. Then, when studying programming languages, I was able to see how these concepts manifested in different implementations. But when writing software, I think in terms of these general concepts then express them in some (hopefully) appropriate manner with the language at hand.
What this resolutely does not mean is taking the implementation of a concept in one language and transplanting it into another. So yes, it’s typically a bad idea to take the immutable structures pervasive in Haskell and just use them in C, but that doesn’t mean you can’t make use of them in some way, if they seem appropriate.
Concurrency, immutability, and the like exist outside of programming languages, and you do not need a programming language to study, learn from, or problem-solve with them. Great programmers know this and use it to great effect. I believe that is what the article is talking about.
I am not sure problems and solutions really transcend any programming language. The same way some concepts are better expressed in some spoken languages, programming languages shape the way we think, through their syntax, their architecture, and the current practices of the community.
It is not easy to transfer a solution between two languages with very different approaches, or if it is easy, the result will not be idiomatic, or will not be a good expression of that solution in that language.
In the same way, estimating the complexity of a problem depends a lot on the language.
Not really, you can’t really apply most concepts from OOP to a pure functional language. Best practices are different. Even at an algorithmic level you can’t expect solutions in an imperative language to map one-to-one to solutions in a pure functional language. Good code in Haskell is very, very different from good code in Ruby. The skills aren’t very transferable either.
I think the reality is somewhere in the middle of the two extremes.
While it is quick and easy to ‘pick up’ a new language, it takes much more time and effort to become experienced and really productive with it.
With an existing codebase and a big enough team you can get experienced much quicker.
I’m curious. I know for sure that the ease of learning a language can heavily depend on syntax. I also know that some concepts I take for granted are not at all natural to people who haven’t trained on them during their plastic period. For example, the concepts of variables and assignment seem hard to get across at first to someone who has never programmed.
So I agree that the concepts of programming are very important to get right, to teach right, and to teach at the right time. I would, however, say that a language is a medium of instruction and can make a difference in how hard students have to work to absorb concepts.
Saying that syntax doesn’t matter is like saying “Learn the concepts of addition and multiplication, the notation doesn’t matter”. Try multiplying two big numbers using Roman numerals.
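To make that analogy concrete, here is a toy Haskell sketch (the parsing is deliberately simplified, and the example numerals are chosen arbitrarily): even when multiplying Roman numbers by machine, the natural first move is to convert them into positional notation, because that notation is what makes the multiplication algorithm mechanical.

```haskell
-- Toy illustration: the subtractive-pair rule of Roman numerals,
-- reduced to "a digit counts negative if a bigger digit follows it".
romanValue :: Char -> Int
romanValue c = case c of
  'I' -> 1; 'V' -> 5; 'X' -> 10; 'L' -> 50
  'C' -> 100; 'D' -> 500; 'M' -> 1000; _ -> 0

fromRoman :: String -> Int
fromRoman s = sum (zipWith step vals (tail vals ++ [0]))
  where
    vals = map romanValue s
    step v next = if v < next then -v else v

main :: IO ()
main = print (fromRoman "XLVII" * fromRoman "XII")  -- 47 * 12 = 564
```

The point of the sketch is that the hard part lives entirely in the notation: once the numbers are positional, `*` does the rest.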
By the way…
… and it matters to recruiters that new employees have the right values, technical skills, and experience, not the right list of languages.
My experience with recruiters in real life is actually quite the opposite.