Computer scientists should really be spending their time developing new libraries rather than inventing new programming languages.
This assumes that the purpose of computer scientists is to make professionals in industry more efficient programmers in the short term. I disagree with that assumption – the purpose of computer scientists is to further our knowledge of computer science, and any benefit to professionals in industry is a secondary effect.
For others who might read these comments before clicking through to the article: that quote is not a position taken by the author. The author relays what a friend told them for the first two paragraphs (including that quote), and then in the third says,
Being a language designer myself, I, of course, don’t share this opinion.
and goes on for the rest of the article to defend why they think language design is worthwhile. Note that even the title of the article is in quotation marks.
On the other hand, I agree with @jmelesky’s comment after the quote and think it is still relevant to the article. The article does try to justify language design efforts in terms of making industry programmers more efficient and happier – the author’s own language was motivated by limitations he ran into as a library author. But this doesn’t mean all CS efforts are or should be motivated by industry.
For others who might read these comments before clicking through to the article: that quote is not a position taken by the author.
Thanks for that.
I skipped this article because of jmelesky’s comment – this isn’t something that I want to argue about:
There’s tonnes of Java systems and Java programmers and if the world doesn’t need another language, then it’s Java, and I’m simply not okay with that.
[…] can you write a static-typing library for Scheme that then automatically checks your code for type errors? And the current answer, for now and for the foreseeable future, is no.
That’s wrong, there is Typed Racket. (Granted, it’s called “Racket”, not “Scheme” but the same thing should be possible in any Scheme implementation.)
No mainstream language today allows you to write a library to extend its type system
hmmmm that’s the most interesting sentence in the article. There are a lot of ways to extend type systems, so I’m not sure exactly what the author has in mind. I seem to remember somebody adding some flavor of dependent types to Haskell, which would certainly qualify.
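(The dependent-types flavor in Haskell is the `DataKinds`/GADTs line of work that ships with GHC. A minimal sketch of what it buys you — length-indexed vectors, with names I’ve made up for illustration:)

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Type-level natural numbers, promoted to the kind level by DataKinds.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type guarantees the argument is non-empty, so no runtime
-- emptiness check (and no partial function) is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) (VCons 2 VNil)))
-- prints 1; `vhead VNil` would be rejected at compile time
```

It’s not full dependent types, but it is type-system machinery that didn’t exist in Haskell 98 and was layered on via extensions rather than a new language.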
Idris type providers do not exactly extend the type system, but they allow types to be defined in arbitrary new ways. One of the motivating examples is record types for a database that it connects to at compile-time.
The Common Lisp Metaobject Protocol is almost the exact opposite thing, an attempt to allow libraries to define new class semantics. One of the motivating examples is implementing multiple inheritance (in a different way than it’s already implemented) to interact better with C++ libraries.
Neither of these really extends the process of type checking. For example, you couldn’t implement linear types on top of either of them. That would be a fascinating area of research, but over my head for now.
Idris type providers
Haskellers already do this with Template Haskell, generating domain types from the database schema. F# popularized the nomenclature and technique without offering a general facility such as you have in Haskell and Idris.
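(For anyone curious what the mechanics look like: here’s a minimal self-contained Template Haskell sketch. It just builds and pretty-prints a generated record declaration — a real schema-driven generator would compute these declarations from database metadata inside a splice. The `User` type and its fields are invented for illustration.)

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

main :: IO ()
main = do
  -- [d| ... |] quotes a declaration into its Template Haskell AST;
  -- schema-driven codegen would construct this AST programmatically.
  decs <- runQ [d| data User = User { userId :: Int, userName :: String } |]
  mapM_ (putStrLn . pprint) decs
```

(Quoted names pick up freshness suffixes when printed, so the output reads like `data User_0 = …`, but the shape of the generated declaration is visible.)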
I think F# did a much better job offering tooling support around it–it’s very different from Template Haskell in that regard.
But given the state of tooling in Haskell, all of F#’s compute-types-on-the-fly and auto-complete goodness probably hasn’t been on the radar.
I’m also not sure it’s true. Common Lisp has ASDF packages that add algebraic data types and exhaustive pattern matching.
Maybe Lisps are the exception to the rule, here? But as you say, Haskell’s type system has also been extended.
[Comment removed by author]
agreed. (though Clojure seems to be pretty popular)
The more powerful the language, the easier the libraries are to use.
Like Boost and ActiveRecord!
simple != easy
ActiveRecord is super easy to use.
I’m really happy that thanks to Rich Hickey’s talk, a lot of programmers (though, mainly those who frequent lobste.rs and HN) are now aware of the difference between the two words and are using them correctly; something easy can be complex, and something simple can be difficult to grasp. Additionally, if a problem is inherently complex, it’s difficult to argue that its solution is simple; it can be as simple as can be given the constraints, but still be complex.