Pattern matching has been available in functional programming languages for decades now; it was introduced in the 1970s. (Logic programming languages expose even more expressive forms, at higher runtime cost.) It obviously improves the readability of code that manipulates symbolic expressions/trees, and there is a lot of code like this. I find it surprising that in the 2020s there are still people wondering whether “the feature provides enough value to justify its complexity”.
(The fact that Python did without it for so long was rather a sign of closed-mindedness in its designer subgroup. The same applies, in my opinion, to languages (including Python, Go, etc.) that still don’t have proper support for disjoint union types / variants / sums / sealed case classes.)
Pretty much every feature that has ever been added to every language ever is useful in some way. You can leave a comment like this on almost any feature that a language may not want to implement for one reason or the other.
I think it makes more sense in statically typed languages, especially functional ones. That said, languages make different choices. For me, Python has always been about simplicity and readability, and as I’ve tried to show in the article, at least in Python, structural pattern matching is only useful in relatively few cases. But it’s also a question of taste: I really value the simplicity of the Go language (and C before it), and don’t mind a little bit of verbosity if it makes things clearer and simpler. I did some Scala for a while, and I can see how people like the “power” of it, but the learning curve of its type system was very steep, and there were so many different ways to do things (not to mention the compiler was very slow, partly because of the very complex type system).
For the record, pattern-matching was developed mostly in dynamically-typed languages before being adopted in statically-typed languages, and it works just as well in a dynamically-typed world. (In the ML-family world, sum types and pattern-matching were introduced by Hope, an experimental dynamically-typed language; in the logic world, they are basic constructs of Prolog, which is also dynamically-typed – although some more-typed dialects exist.)
as I’ve tried to show in the article, at least in Python, structural pattern matching is only useful in relatively few cases
Out of the 4 cases you describe in the tutorial, I believe your description of two of them is overly advantageous to if..elif:
In the match event.get() case, the example you show is a variation of the original example (the longer of the three such examples in the tutorial), and the change you made makes it easier to write an equivalent if..elif version, because you integrated a case (from another version) that ignores all other Click() events. Without this case (as in the original tutorial example), rewriting with if..elif is harder, you need to duplicate the failure case.
In the eval_expr example, you consider the two versions as readable, but the pattern-version is much easier to maintain. Consider, for example, supporting operations with 4 or 5 parameters, or adding an extra parameter to an existing operator; it’s an easy change with the pattern-matching version, and requires boilerplate-y, non-local transformations with if..elif. These may be uncommon needs for standard mathematical operations, but they are very common when working with other domain-specific languages.
the change you made makes it easier to write an equivalent if..elif version
Sorry if it appeared that way – that was certainly not my intention. I’m not quite sure what you mean, though. The first/original event example in the tutorial handles all click events with no filtering using the same code path, so it’s even simpler to convert. I added the Button.LEFT filtering from a subsequent example to give it a bit more interest so it wasn’t quite so simple. I might be missing something, though.
In the eval_expr example, you consider the two versions as readable, but the pattern-version is much easier to maintain. Consider, for example, supporting operations with 4 or 5 parameters, or adding an extra parameter to an existing operator;
I think those examples are very hypothetical – as you indicate, binary and unary operators aren’t suddenly going to support 4 or 5 parameters. A new operation might, but that’s okay. The only line that’s slightly repetitive is the “attribute unpacking”: w, x, y, z = expr.w, expr.x, expr.y, expr.z.
These may be uncommon needs for standard mathematical operations, but they are very common when working with other domain-specific languages.
You’re right, and that’s part of my point. Python isn’t used for implementing compilers or interpreters all that often. That’s where I’m coming from when I ask, “does the feature provide enough value to justify the complexity?” If 90% of Python developers will only rarely use this complex feature, does it make sense to add it to the language?
To be clear, I’m not suggesting that the change was intentional or sneaky, I’m just pointing out that the translation would be more subtle.
The first/original event example does not ignore “all other Click events” (there is no Click() case), and therefore an accurate if..elif translation would have to do things differently if there is no position field or if it’s not a pair, namely it would have to fall back to the ValueError case.
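Sketching the translation being described (Click, position, and the ValueError fallback follow the tutorial’s example; KeyPress and the handler body are invented stand-ins), the failure branch has to appear twice:

```python
# Illustrative stand-ins for the tutorial's event classes.
class Click:
    def __init__(self, position):
        self.position = position

class KeyPress:
    def __init__(self, key):
        self.key = key

def handle(event):
    if isinstance(event, Click):
        # Without a catch-all Click() case, the failure path is duplicated:
        # a Click whose position isn't a pair must also raise.
        pos = getattr(event, "position", None)
        if isinstance(pos, tuple) and len(pos) == 2:
            x, y = pos
            return f"click at {x},{y}"
        raise ValueError(f"Unrecognized event: {event}")
    elif isinstance(event, KeyPress):
        return f"key {event.key}"
    else:
        raise ValueError(f"Unrecognized event: {event}")
```

In the match version, a malformed Click simply fails the `case Click(position=(x, y))` pattern and falls through to the single error case.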
You’re right, and that’s part of my point. Python isn’t used for implementing compilers or interpreters all that often.
You don’t need to implement a compiler for C or Java, or anything people recognize as a programming language (or HTML or CSS, etc.), to be dealing with a domain-specific language. Many problem domains contain pieces of data that are effectively expressions in some DSL, and recognizing this can be very helpful when writing programs in those domains – if the language supports the right features to make this convenient. For example:
to start with the obvious, many programs start by interpreting some configuration file to influence their behavior; many programs have simple needs well served by linear formats, but many (e.g. cron jobs) require more elaborate configurations that are DSL-like. Even if the configuration is written in some standard format (INI, YAML, etc.) – so parsing can be delegated to a library – the programmer will still write code to interpret or analyze the configuration data.
more generally, “structured data formats” are often DSL-shaped; ingesting structured data is something we do very often in programs
programs that offer a “query” capability typically provide a small language to express those queries
events in an event loop typically form a small language
I think it makes more sense in statically typed languages, especially functional ones.
In addition to the earlier ones gasche mentioned (it’s important to remember this history), it’s used pervasively in Erlang, and later Elixir. Clojure has core.match, Racket has match, as does Guile. It’s now in Ruby as well!
Thanks! I didn’t know that. I have used pattern matching in statically typed languages (mostly Scala), and had seen it in the likes of Haskell and OCaml, so I’d incorrectly assumed it was mainly a statically-typed language thing.
For me, it is the combination of algebraic data types + pattern matching + compile time exhaustiveness checking that is the real game changer. With just 1 out of 3, pattern matching in Python is much less compelling.
I agree. I wonder if they plan to add exhaustiveness checking to mypy. The way the PEP is so no-holds-barred makes it seem like the goal was featurefulness rather than an attempt to support exhaustiveness checking.
I wonder if they plan to add exhaustiveness checking to mypy.
I don’t think that’s possible in the general case. If I understand the PEP correctly, __match_args__ may be a @property getter method, which could read the contents of a file, or perform a network request, etc.
I find it surprising that in the 2020s there are still people wondering whether “the feature provides enough value to justify its complexity”.
I find it surprising that people find this surprising.
Adding features like pattern matching isn’t trivial, and adding them too hastily can backfire in the long term, especially for an established language like Python. As such I would prefer that a language take its time, rather than slapping things on because somebody on the internet said it was a good idea.
Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.
And indeed, this pays off: in the Scheme world, there’s been a match package floating around for a long time, implemented simply as a macro. No changes to the core language needed.
I’m sure you recognize that this situation does not translate to other languages, in this case Python: implementing it as a macro is just not feasible. And even in Scheme the usage of match macros is rather low. This may be because the feature is not that useful, but it might also be because the hurdle of adding a dependency is not worth the payoff. Once a feature is integrated into a language, its usage “costs” nothing, so the value proposition when writing code can be quite different.
This is rather unrelated to the overall discussion, but as a user of the match macros in Scheme, I must say that I find the lack of integration into the base forms slightly annoying. You cannot pattern-match in a let or lambda; you have to use match-let and match-lambda, define/match (the latter only in Racket, I think), etc. This makes reaching for pattern-matching feel heavier, and it may be a partial cause of their comparatively lower usage. ML-family languages generalize all binding positions to accept patterns, which is very nice for decomposing records, for example (or other single-case data structures). I wish Scheme dialects would embrace this generalization, but they haven’t so far – at least not Racket or Clojure.
In the case of Clojure, while it doesn’t have pattern matching built in, it does have quite comprehensive destructuring forms (like nested matching in maps, with rather elaborate mechanisms) that work in all binding positions.
Nice! I suppose (from your post above) that pattern-matching is somehow “integrated” in the Clojure implementation, rather than just being part of the base macro layer that all users see.
I think what’s happening is that Clojure’s core special forms support it (I suppose the implementation itself is here, called “binding-forms”, which is then used by let, fn and loop, which user-defined macros often end up expanding to). Thus it is somewhat below the base layer that people use.
But bear in mind this is destructuring, in a more general manner than what Python 2.x already supported, not pattern matching. It also tends to get messy with deep destructuring, but the same can be said of deep pattern matches through multiple layers of constructors.
I agree about pattern matching and Python in general. It’s depressing how many features have died in python-ideas because it takes more than a few seconds for an established programmer to grok them. Function composition comes to mind.
But I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly. I wrote a thing for pattern matching regexps to see how it’d turn out (admittedly against an early version of the PEP; I haven’t checked it against the current state) and I think the results speak for themselves.
But I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly.
I mostly agree. I generally like pattern matching and have been excited about this feature, but am still feeling out exactly when I’ll use it and how it lines up with my intuition.
The part that does feel very Pythonic is that destructuring/unpacking is already pretty pervasive in Python. Not only for basic assignments, but also integrated into control flow constructs. For example, it’s idiomatic to do something like:
for key, val in some_dictionary.items():
# ...
Rather than:
for item in some_dictionary.items():
key, val = item
# ...
Or something even worse, like explicit item[0] and item[1]. So the lack of a conditional-with-destructuring, the way we already have foreach-with-destructuring, did seem like a real gap to me, making you have to write the moral equivalent of code that looks more like the 2nd case than the 1st. That hole is now filled by pattern matching. But I agree there are pitfalls around how all these features interact.
Go aims for simplicity of maintenance and deployment. It isn’t that Go “still doesn’t have those features”; the Go authors avoided them on purpose. If you want endless abstractions in Go, embedding Lisp is a possibility: https://github.com/glycerine/zygomys
Disjoint sums are a basic programming feature (they model data whose shape is “either this or that or that other thing”, which is ubiquitous in the wild, just like pairs/records/structs). They are not an “endless abstraction”, and they are perfectly compatible with maintenance and deployment.
Go is a nice language in some respects: the runtime is excellent, the tooling is impressive, etc. But this is no rational excuse for the lack of some basic language features.
We are in the 2020s, there is no excuse for lacking support for sum types and/or pattern matching. Those features have been available for 30 years, their implementation is well-understood, they require no specific runtime support, and they are useful in basically all problem domains.
I’m not trying to bash a language and attract defensive reactions, but rather to discuss (with concrete examples) the fact that language designers’ mindsets can be influenced by some design cultures more than others, and as a result the design is sometimes held back by a lack of interest in things the designers are unfamiliar with. Not everyone is fortunate enough to be working with a deeply knowledgeable and curious language designer, such as Graydon Hoare; we need more such people on our language design teams. The default is for people to keep working on what they know; this sort of closed-ecosystem evolution can lead to beautiful ideas (some bits of Perl 6, for example, are very nice!), but it can also hold things back.
But this is no rational excuse for the lack of some basic language features.
Yes there is. Everyone has a favorite feature, and if all of those are implemented, there would easily be feature bloat, long build times and projects with too many dependencies that depend on too many dependencies, like in C++.
In my opinion, the question is not whether a language lacks a feature that someone wants, but whether it’s usable for the goals people wish to achieve, and Go is clearly suitable for many goals.
Ah yes, Python is famously closed-minded and hateful toward useful features. For example, they’d never adopt something like, say, list comprehensions. The language’s leaders are far too closed-minded, and dogmatically unwilling to ever consider superior ideas, to pick up something like that. Same for any sort of ability to work with lazy iterables, or do useful combinatoric work with them. That’s something that definitely will never be adopted into Python due to the closed-mindedness of its leaders. And don’t get me started on basic FP building blocks like map and folds. It’s well known that Guido hates them so much that they’re permanently forbidden from ever being in the language!
(the fact that Python is not Lisp was always unforgivable to many people; the fact that it is not Haskell has now apparently overtaken that on the list of irredeemable sins; yet somehow we Python programmers continue to get useful work done and shrug off the sneers and insults of our self-proclaimed betters much as we always have)
It is well-documented that Guido van Rossum planned to remove lambda from Python 3. (For the record, I agree that map and filter on lists are much less useful in the presence of list comprehensions.) It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.
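The parenthetical claim is easy to illustrate: a comprehension expresses the same map-plus-filter pipeline in one construct, without lambdas.

```python
# Same computation two ways: comprehension vs. higher-order functions.
nums = [1, 2, 3, 4]
comprehension = [n * n for n in nums if n % 2 == 0]
higher_order = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))
# Both produce the squares of the even elements.
```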
Sure, Python adopted (in 2000 I believe?) list comprehensions from ABC (due to Guido working with the language in the 1980s), and a couple of library-definable iterators. I don’t think this contradicts my claim. New ideas came to the language since (generators, decorators), but it remains notable that the language seems to have resisted incorporating strong ideas from other languages. (More so than, say, Ruby, C#, Kotlin, etc.)
Meta: One aspect of your post that I find unpleasant is the tone. You speak of “sneers and insults”, but it is your post that is highly sarcastic and full of stray exaggerations aimed at this or that language community. I’m not interested in escalating in this direction.
I’m certainly biased, but I find Python’s list comprehensions an abomination for readability in comparison to higher-order pipelines or recursion. I haven’t personally coded Python in 8–9 years, but when I see examples, I feel like I need to turn my head upside down to understand them.
It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.
For a subjective definition of “elegant”. But this basically is just “Python is not Lisp” (or more specifically, “Python is not Scheme”). And that’s OK. Not every language has to have Scheme’s approach to programming, and Scheme’s history has shown that maybe it’s a good thing for other languages not to be Scheme, since Scheme has been badly held back by its community’s insistence that tail-recursive implementations of algorithms should be the only implementations of those algorithms.
You speak of “sneers and insults”, but it is your post that is highly sarcastic and full of stray exagerations at this or that language community.
Your original comment started from a place of assuming – and there really is no other way to read it! – that the programming patterns you care about are objectively superior to other patterns, that languages which do not adopt those patterns are inherently inferior, and that the only reason why a language would not adopt them is due to “closed-mindedness”. Nowhere in your comment is there room for the (ironically) open-minded possibility that someone else might look at patterns you personally subjectively love, evaluate them rationally, and come to a different conclusion than you did – rather, you assume that people who disagree with your stance must be doing so because of personal faults on their part.
And, well, like I said we’ve got decades of experience of people looking down their noses at Python and/or its core team + community for not becoming a copy of their preferred languages. Your comment really is just another instance of that.
I’m not specifically pointing out the lack of tail-call optimization (TCO) in Python (which I think is unfortunate indeed; the main argument against it is that call stacks matter, but it’s technically entirely possible for TC-optimizing implementations to preserve call stacks on the side). Ignoring TCO for a minute, the main problem is that the CPython interpreter severely limits the call depth (IIRC it’s 1,000 calls by default; compare that to the 8 MB stack default on most Unix systems), making recursion mostly unusable in practice, except for logarithmic-depth algorithms (balanced trees, etc.).
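A quick demonstration of that default limit (assuming CPython; the exact number can be changed with sys.setrecursionlimit):

```python
import sys

def depth(n):
    # Naive linear recursion: each step costs one Python stack frame.
    return 1 + depth(n - 1) if n else 0

limit = sys.getrecursionlimit()   # 1000 by default in CPython
overflowed = False
try:
    depth(limit)                  # needs limit+1 frames, plus those already on the stack
except RecursionError:
    overflowed = True
```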
Scheme has been badly held back by its community’s insistence that tail-recursive implementations of algorithms should be the only implementations of those algorithms.
I’m not sure what you mean – that does not make any sense to me.
[you assume] that the programming patterns you care about are objectively superior to other patterns [..]
Well, I claimed
[pattern matching] obviously improves readability of code manipulating symbolic expressions/trees
and I stand by this rather modest claim, which I believe is an objective statement. In fact it is supported quite well by the blog post that this comment thread is about. (Pattern-matching combines very well with static typing, and it will be interesting to see what Python typers make of it; but its benefits are already evident in a dynamically-typed context.)
Again: your original comment admits of no other interpretation than that you do not believe anyone could rationally look at the feature you like and come to a different conclusion about it. Thus you had to resort to trying to find personal fault in anyone who did.
This does not indicate “closed-mindedness” on the part of others. They may prioritize things differently than you do. They may take different views of complexity and tradeoffs (which are the core of any new language-feature proposal) than you do. Or perhaps they simply do not like the feature as much as you do. But you were unwilling to allow for this — if someone didn’t agree with your stance it must be due to personal fault. You allowed for no other explanation.
That is a problem. And from someone who’s used to seeing that sort of attitude it will get you a dismissive “here we go again”. Which is exactly what you got.
This is perhaps more of a feeling, but saying that Python isn’t adopting features as quickly as Ruby seems a bit off. Static type adoption in the Python community has been quicker. async/await has been painful, but is being attempted. Stuff like generalized unpacking (and this!) is also shipping out!
Maybe it could be faster, but honestly Python probably has one of the lowest “funding relative to impact” ratios of the modern languages, which means the project can’t get things done as quickly, IMO.
Python is truly in a funny place, where many people loudly complain about it not adopting enough features, and many others loudly complain about it adopting too many! It’s of course “different people have different opinions”, but it’s still funny to see both on the same page.
It is well-documented that Guido Van Rossum planned to remove lambda from Python 3
Thank you for sharing that document. I think Guido was right: it’s not pythonic to map, nor to use lambdas in most cases.
Every feature is useful, but some ecosystems work better without certain features. I’m not sure where Go’s generics fall on this spectrum, but I’m sure most proposed features for Python move it away from its core competency, rather than augmenting a strong core.
We have previously discussed their tone problem. It comes from their political position within the Python ecosystem and they’re relatively blind to it. Just try to stay cool, I suppose?
I really do recommend clicking through to that link, and seeing just what an unbelievably awful thing I said that the user above called out as “emblematic” of the “contempt” I display to Python users. Or the horrific ulterior motive I was found to have further down.
Please, though, before clicking through, shield the eyes of children and anyone else who might be affected by seeing such content.
To pick one of my favorite examples, I talked to the author of PEP 498 after a presentation that they gave on f-strings, and asked why they did not add destructuring for f-strings, as well as whether they knew about customizable template literals in ECMAScript, which trace their lineage through quasiliterals in E all the way back to quasiquotation in formal logic. The author knew of all of this history too, but told me that they were unable to convince CPython’s core developers to adopt any of the more advanced language features because they were not seen as useful.
I think that this perspective is the one which might help you understand. Where you see one new feature in PEP 498, I see three missing subfeatures. Where you see itertools as a successful borrowing of many different ideas from many different languages, I see a failure to embrace the arrays and tacit programming of APL and K, and a lack of pattern-matching and custom operators compared to Haskell and SML.
I think the issue is more about pattern matching being a late addition to Python, which means there will be lots of code floating around that isn’t using match expressions. Since it’s not realistic to expect this code to be ported, the old style if … elif will continue to live on. All of this adds up to a larger language surface area, which makes tool support, learning and consistency more difficult.
I’m not really a big fan of this “pile of features” style of language design - if you add something I’d prefer if something got taken away as well. Otherwise you’ll end up with something like Perl 5
I was expecting a technical rant, and I was curious about python specific issues with pattern matching, but it ended up being a rant about ergonomics and intuition. And as someone who codes in Scala, pattern matching is obviously obvious, and it is also obvious that you won’t want to replace every if/else with it. This person seems to be just annoyed about a new syntax construct. Perhaps they are lacking the intuition to think in terms of pattern matching.
At a recent local Python meetup, a friend was presenting some of the new features in Python 3.8 and 3.9, and afterwards we got to talking about the pattern matching feature coming in Python 3.10. I went on a mild rant about how I thought Python had lost the plot: first assignment expressions using :=, and now this rather sprawling feature.
One thing I find unfortunately missing from new-language-feature criticisms such as this is the idea of what could be built on top of this feature, once the language has it.
It reminds me a bit of our inability to estimate exponential growth very well. I.e., it’s easy to suggest we don’t need something new right now; after all, we didn’t have it until this moment! How could it be worth adding? The point is, you don’t know the true worth right now, because it enables new things. What things? Well, it’s hard to say until we have it! If we knew the answer fully, we’d already have those new things!
I think there are some good quirky things pointed out by this article (__match_args__ being one of them), but I think it’s also important to think about how enabling new features can be. For example, one could imagine some extensions around type-safety which would wildly improve several of these examples.
At first I was in love with the idea of pattern matching in Python, but this article has put me off a bit, in seeing that it’s not as useful as it is in other languages that I write in (Rust, namely).
In thinking about it and discussing this article with an ex-coworker, it began to remind me of the smart match operator and given/when in Perl, where it was introduced in 5.10 (see https://metacpan.org/release/DAPM/perl-5.10.1/view/pod/perlsyn.pod#Switch-statements and the following section “Smart matching in detail” for how it worked), and then in four short years after it was introduced it was marked as experimental again in 5.18.
Obviously, Perl’s smart match is a lot more surprising than Python’s pattern matching, but this article does highlight that it’s a complex feature with some surprising syntax that breaks expectations. This makes me wonder if it’ll go the way of Perl’s smart match/given/when as a result.
Probably because it is minuscule. After all, you work through the cases in the same order as you would go through an if-elif-else, with about the same number of checks, so the runtime cost is similar to that of the unpacking that already exists in the language and is pretty popular.
Pattern matching has been available in functional programming languages for decades now, it was introduced in the 70s. (Logic programming languages expose even more expressive forms, at higher runtime cost.) It obviously improves readability of code manipulating symbolic expressions/trees, and there is a lot of code like this. I find it surprising that in the 2020s there are still people wondering whether “the feature provides enough value to justify its complexity”.
(The fact that Python did without for so long was rather a sign of closed-mindedness of its designer subgroup. The same applies, in my opinion, to languages (including Python, Go, etc.) that still don’t have proper support for disjoint union types / variants / sums / sealed case classes.)
Pretty much every feature that has ever been added to every language ever is useful in some way. You can leave a comment like this on almost any feature that a language may not want to implement for one reason or the other.
I think it makes more sense in statically typed languages, especially functional ones. That said, languages make different choices. For me, Python has always been about simplicity and readability, and as I’ve tried to show in the article, at least in Python, structural pattern matching is only useful in a relatively few cases. But it’s also a question of taste: I really value the simplicity of the Go language (and C before it), and don’t mind a little bit of verbosity if it makes things clearer and simpler. I did some Scala for a while, and I can see how people like the “power” of it, but the learning curve of its type system was very steep, and there were so many different ways to do things (not to mention the compiler was very slow, partly because of the very complex type system).
For the record, pattern-matching was developed mostly in dynamically-typed languages before being adopted in statically-typed languages, and it works just as well in a dynamically-typed world. (In the ML-family world, sum types and pattern-matching were introduced by Hope, an experimental dynamically-typed language; in the logic world, they are basic constructs of Prolog, which is also dynamically-typed – although some more-typed dialects exist.)
Out of the 4 cases you describe in the tutorial, I believe your description of two of them is overly advantageous to
if..elif
:match event.get()
case, the example you show is a variation of the original example (the longer of the three such examples in the tutorial), and the change you made makes it easier to write an equivalentif..elif
version, because you integrated a case (from another version) that ignores all otherClick()
events. Without this case (as in the original tutorial example), rewriting withif..elif
is harder, you need to duplicate the failure case.eval_expr
example, you consider the two versions as readable, but the pattern-version is much easier to maintain. Consider, for example, supporting operations with 4 or 5 parameters, or adding an extra parameter to an existing operator; it’s an easy change with the pattern-matching version, and requires boilerplate-y, non-local transformations withif..elif
. These may be uncommon needs for standard mathematical operations, but they are very common when working with other domain-specific languages.Sorry if it appeared that way – that was certainly not my intention. I’m not quite sure what you mean, though. The first/original event example in the tutorial handles all click events with no filtering using the same code path, so it’s even simpler to convert. I added the
Button.LEFT
filtering from a subsequent example to give it a bit more interest so it wasn’t quite so simple. I might be missing something, though.I think those examples are very hypothetical – as you indicate, binary and unary operators aren’t suddenly going to support 4 or 5 parameters. A new operation might, but that’s okay. The only line that’s slightly repetitive is the “attribute unpacking”:
w, x, y, z = expr.w, expr.x, expr.y, expr.z
.You’re right, and that’s part of my point. Python isn’t used for implementing compilers or interpreters all that often. That’s where I’m coming from when I ask, “does the feature provide enough value to justify the complexity?” If 90% of Python developers will only rarely use this complex feature, does it make sense to add it to the language?
To be clear, I’m not suggesting that the change was intentional or sneaky, I’m just pointing out that the translation would be more subtle.
The first/original event example does not ignore “all other `Click` events” (there is no `Click()` case), and therefore an accurate `if..elif` translation would have to do things differently if there is no `position` field or if it’s not a pair, namely it would have to fall back to the `ValueError` case.

You don’t need to implement a compiler for C or Java, or anything people recognize as a programming language (or HTML or CSS, etc.), to be dealing with a domain-specific language. Many problem domains contain pieces of data that are effectively expressions in some DSL, and recognizing this can be very helpful when writing programs in those domains – if the language supports the right features to make this convenient. For example:
In addition to the earlier ones gasche mentioned (it’s important to remember this history), it’s used pervasively in Erlang, and later Elixir. Clojure has core.match, Racket has `match`, as does Guile. It’s now in Ruby as well!

Thanks! I didn’t know that. I have used pattern matching in statically typed languages (mostly Scala), and had seen it in the likes of Haskell and OCaml, so I’d incorrectly assumed it was mainly a statically-typed language thing.
It is an important feature of OCaml.
I am aware - was focusing on dynamically typed languages.
For me, it is the combination of algebraic data types + pattern matching + compile time exhaustiveness checking that is the real game changer. With just 1 out of 3, pattern matching in Python is much less compelling.
I agree. I wonder if they plan to add exhaustiveness checking to mypy. The way the PEP is so no-holds-barred makes it seem like the goal was featurefulness and not an attempt to support exhaustiveness checking.
I don’t think that’s possible in the general case. If I understand the PEP correctly, `__match_args__` may be a `@property` getter method, which could read the contents of a file, or perform a network request, etc.

I find it surprising that people find this surprising.
Adding features like pattern matching isn’t trivial, and adding it too hastily can backfire in the long term; especially for an established language like Python. As such I would prefer a language take their time, rather than slapping things on because somebody on the internet said it was a good idea.
That’s always been the Scheme philosophy:
And indeed, this pays off: in the Scheme world, there’s been a `match` package floating around for a long time, implemented simply as a macro. No changes to the core language needed.

I’m sure you recognize that this situation does not translate to other languages, like in this case Python. Implementing it as a macro is just not feasible. And even in Scheme the usage of `match` macros is rather low. This can be because it is not that useful, but also might be because the hurdle of adding dependencies is not worth the payoff. Once a feature is integrated in a language, its usage “costs” nothing, thus the value proposition when writing code can be quite different.

This is rather unrelated to the overall discussion, but as a user of the `match` macros in Scheme, I must say that I find the lack of integration into the base forms slightly annoying. You cannot pattern-match on a `let` or `lambda`; you have to use `match-let` and `match-lambda`, `define/match` (the latter only in Racket I think), etc. This makes reaching for pattern-matching feel heavier, and it may be a partial cause of their comparatively lower usage. ML-family languages generalize all binding positions to accept patterns, which is very nice for decomposing records, for example (or other single-case data structures). I wish Scheme dialects would embrace this generalization, but they haven’t for now – at least not Racket or Clojure.

In the case of Clojure, while it doesn’t have pattern matching built-in, it does have quite comprehensive destructuring forms (like nested matching in maps, with rather elaborate mechanisms) that work in all binding positions.
Nice! I suppose (from your post above) that pattern-matching is somehow “integrated” in the Clojure implementation, rather than just being part of the base macro layer that all users see.
I think the case is that Clojure core special forms support it (I suppose the implementation itself is here and called “binding-forms”, which is then used by `let`, `fn` and `loop`, which user-defined macros often end up expanding to). Thus it is somewhat under the base layer that people use.

But bear in mind this is destructuring, in a more general manner than what Python 2.x already supported, not pattern matching. It also tends to get messy with deep destructuring, but the same can be said of deep pattern matches through multiple layers of constructors.
I agree about pattern matching and Python in general. It’s depressing how many features have died in python-ideas because it takes more than a few seconds for an established programmer to grok them. Function composition comes to mind.
But I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly. I wrote a thing for pattern matching regexps to see how it’d turn out (admittedly against an early version of the PEP; I haven’t checked it against the current state) and I think the results speak for themselves.
I mostly agree. I generally like pattern matching and have been excited about this feature, but am still feeling out exactly when I’ll use it and how it lines up with my intuition.
The part that does feel very Pythonic is that destructuring/unpacking is already pretty pervasive in Python. Not only for basic assignments, but also integrated into control flow constructs. For example, it’s idiomatic to do something like:
Rather than:
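A minimal sketch of the contrast (hypothetical `pairs` list; neither snippet is from the original comment):

```python
pairs = [("a", 1), ("b", 2)]

# Idiomatic: unpack directly in the for statement.
for key, value in pairs:
    print(key, value)

# Rather than the index-based version:
for item in pairs:
    key = item[0]
    value = item[1]
    print(key, value)
```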
Or something even worse, like explicit `item[0]` and `item[1]`. So the lack of a conditional-with-destructuring, the way we already have foreach-with-destructuring, did seem like a real gap to me, making you have to write the moral equivalent of code that looks more like the 2nd case than the 1st. That hole is now filled by pattern matching. But I agree there are pitfalls around how all these features interact.

looks like pattern matching to me
Go aims for simplicity of maintenance and deployment. It doesn’t “still not have those features”. The Go authors avoided them on purpose. If you want endless abstractions in Go, embedding Lisp is a possibility: https://github.com/glycerine/zygomys
Disjoint sums are a basic programming feature (they model data whose shape is “either this or that or that other thing”, which is ubiquitous in the wild just like pairs/records/structs). It is not an “endless abstraction”, and it is perfectly compatible with maintenance and deployment. Go is a nice language in some respects, the runtime is excellent, the tooling is impressive, etc. But this is no rational excuse for the lack of some basic language features.
We are in the 2020s, there is no excuse for lacking support for sum types and/or pattern matching. Those features have been available for 30 years, their implementation is well-understood, they require no specific runtime support, and they are useful in basically all problem domains.
I’m not trying to bash a language and attract defensive reactions, but rather to discuss (with concrete examples) the fact that language designers’ mindsets can be influenced by some design cultures more than others, and as a result sometimes the design is held back by a lack of interest in things they are unfamiliar with. Not everyone is fortunate enough to be working with a deeply knowledgeable and curious language designer, such as Graydon Hoare; we need more such people in our language design teams. The default is for people to keep working on what they know; this sort of closed-ecosystem evolution can lead to beautiful ideas (some bits of Perl 6 for example are very nice!), but it can also hold a language back.
Yes there is. Everyone has a favorite feature, and if all of those are implemented, there would easily be feature bloat, long build times and projects with too many dependencies that depend on too many dependencies, like in C++.
In my opinion, the question is not if a language lacks a feature that someone wants or not, but if it’s usable for goals that people wish to achieve, and Go is clearly suitable for many goals.
Ah yes, Python is famously closed-minded and hateful toward useful features. For example, they’d never adopt something like, say, list comprehensions. The language’s leaders are far too closed-minded, and dogmatically unwilling to ever consider superior ideas, to pick up something like that. Same for any sort of ability to work with lazy iterables, or do useful combinatoric work with them. That’s something that definitely will never be adopted into Python due to the closed-mindedness of its leaders. And don’t get me started on basic FP building blocks like `map` and folds. It’s well known that Guido hates them so much that they’re permanently forbidden from ever being in the language!

(the fact that Python is not Lisp was always unforgivable to many people; the fact that it is not Haskell has now apparently overtaken that on the list of irredeemable sins; yet somehow we Python programmers continue to get useful work done and shrug off the sneers and insults of our self-proclaimed betters much as we always have)
It is well-documented that Guido van Rossum planned to remove `lambda` from Python 3. (For the record, I agree that `map` and `filter` on lists are much less useful in the presence of list comprehensions.) It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.

Sure, Python adopted (in 2000 I believe?) list comprehensions from ABC (due to Guido working with the language in the 1980s), and a couple of library-definable iterators. I don’t think this contradicts my claim. New ideas have come to the language since (generators, decorators), but it remains notable that the language seems to have resisted incorporating strong ideas from other languages. (More so than, say, Ruby, C#, Kotlin, etc.)
Meta: One aspect of your post that I find unpleasant is the tone. You speak of “sneers and insults”, but it is your post that is highly sarcastic and full of stray exaggerations aimed at this or that language community. I’m not interested in escalating in this direction.
I’m certainly biased, but I find Python’s list comprehensions an abomination for readability in comparison to higher-order pipelines or recursion. I’ve not personally coded Python in 8-9 years, but when I see examples, I feel like I need to put my head on upside down to understand them.
For a subjective definition of “elegant”. But this basically is just “Python is not Lisp” (or more specifically, “Python is not Scheme”). And that’s OK. Not every language has to have Scheme’s approach to programming, and Scheme’s history has shown that maybe it’s a good thing for other languages not to be Scheme, since Scheme has been badly held back by its community’s insistence that tail-recursive implementations of algorithms should be the only implementations of those algorithms.
Your original comment started from a place of assuming – and there really is no other way to read it! – that the programming patterns you care about are objectively superior to other patterns, that languages which do not adopt those patterns are inherently inferior, and that the only reason why a language would not adopt them is due to “closed-mindedness”. Nowhere in your comment is there room for the (ironically) open-minded possibility that someone else might look at patterns you personally subjectively love, evaluate them rationally, and come to a different conclusion than you did – rather, you assume that people who disagree with your stance must be doing so because of personal faults on their part.
And, well, like I said we’ve got decades of experience of people looking down their noses at Python and/or its core team + community for not becoming a copy of their preferred languages. Your comment really is just another instance of that.
I’m not specifically pointing out the lack of tail-call optimization (TCO) in Python (which I think is unfortunate indeed; the main argument is that call stack matters, but it’s technically fully possible to preserve call stacks on the side with TC-optimizing implementations). Ignoring TCO for a minute, the main problem would be the fact that the CPython interpreter severely limits the call space (iirc it’s 1K calls by default; compare that to the 8Mb default on most Unix systems), making recursion mostly unusable in practice, except for logarithmic-space algorithms (balanced trees, etc.).
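The “1K calls by default” figure is easy to check in CPython (the usable depth is a bit below the configured limit, because the interpreter itself consumes some frames):

```python
import sys

# CPython's default call-depth limit, independent of the OS stack size.
print(sys.getrecursionlimit())  # 1000 by default in CPython

def depth(n=0):
    # Recurse until the interpreter refuses, and report how deep we got.
    try:
        return depth(n + 1)
    except RecursionError:
        return n

print(depth())  # somewhat under the configured limit
```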
I’m not sure what you mean – that does not make any sense to me.
Well, I claimed
and I stand by this rather modest claim, which I believe is an objective statement. In fact it is supported quite well by the blog post that this comment thread is about. (Pattern-matching combines very well with static typing, and it will be interesting to see what Python typers make of it; but its benefits are already evident in a dynamically-typed context.)
Nit: I don’t think you can have an objective statement of value.
Again: your original comment admits of no other interpretation than that you do not believe anyone could rationally look at the feature you like and come to a different conclusion about it. Thus you had to resort to trying to find personal fault in anyone who did.
This does not indicate “closed-mindedness” on the part of others. They may prioritize things differently than you do. They may take different views of complexity and tradeoffs (which are the core of any new language-feature proposal) than you do. Or perhaps they simply do not like the feature as much as you do. But you were unwilling to allow for this — if someone didn’t agree with your stance it must be due to personal fault. You allowed for no other explanation.
That is a problem. And from someone who’s used to seeing that sort of attitude it will get you a dismissive “here we go again”. Which is exactly what you got.
This is perhaps more of a feeling, but saying that Rust isn’t adopting features as quickly as Ruby seems a bit off. Static type adoption in the Python community has been quicker. async/await has been painful, but is being attempted. Stuff like generalized unpacking (and this!) is also shipping out!
Maybe it can be faster, but honestly Python probably has one of the lowest “funding amount relative to impact” of the modern languages which makes the whole project not be able to just get things done as quickly IMO.
Python is truly in a funny place, where many people loudly complain about it not adopting enough features, and many other loudly complain about it loudly adopting too many! It’s of course “different people have different opinions” but still funny to see all on the same page.
Thank you for sharing that document. I think Guido was right: it’s not pythonic to map, nor to use lambdas in most cases.
Every feature is useful, but some ecosystems work better without certain features. I’m not sure where Go’s generics fall on this spectrum, but I’m sure most proposed features for Python move it away from its core competency, rather than augmenting a strong core.
We have previously discussed their tone problem. It comes from their political position within the Python ecosystem and they’re relatively blind to it. Just try to stay cool, I suppose?
I really do recommend clicking through to that link, and seeing just what an unbelievably awful thing I said that the user above called out as “emblematic” of the “contempt” I display to Python users. Or the horrific ulterior motive I was found to have further down.
Please, though, before clicking through, shield the eyes of children and anyone else who might be affected by seeing such content.
To pick one of my favorite examples, I talked to the author of PEP 498 after a presentation that they gave on f-strings, and asked why they did not add destructuring for f-strings, as well as whether they knew about customizable template literals in ECMAScript, which trace their lineage through quasiliterals in E all the way back to quasiquotation in formal logic. The author knew of all of this history too, but told me that they were unable to convince CPython’s core developers to adopt any of the more advanced language features because they were not seen as useful.
I think that this perspective is the one which might help you understand. Where you see one new feature in PEP 498, I see three missing subfeatures. Where you see itertools as a successful borrowing of many different ideas from many different languages, I see a failure to embrace the arrays and tacit programming of APL and K, and a lack of pattern-matching and custom operators compared to Haskell and SML.
I think the issue is more about pattern matching being a late addition to Python, which means there will be lots of code floating around that isn’t using match expressions. Since it’s not realistic to expect this code to be ported, the old-style `if … elif` will continue to live on. All of this adds up to a larger language surface area, which makes tool support, learning and consistency more difficult.

I’m not really a big fan of this “pile of features” style of language design – if you add something, I’d prefer that something got taken away as well. Otherwise you’ll end up with something like Perl 5.
I was expecting a technical rant, and I was curious about python specific issues with pattern matching, but it ended up being a rant about ergonomics and intuition. And as someone who codes in Scala, pattern matching is obviously obvious, and it is also obvious that you won’t want to replace every if/else with it. This person seems to be just annoyed about a new syntax construct. Perhaps they are lacking the intuition to think in terms of pattern matching.
OMG, thank you, this is exactly how I feel.
One thing I find unfortunately missing from new-language-feature criticisms such as this is the idea of what could be built on top of this feature, once the language has it.
It reminds me a bit of our inability to estimate exponential growth very well. I.e. it’s easy to suggest we don’t need something new right now; after all, we didn’t have it until this moment! How could it be worth it to add it? The point is, you don’t know the true worth right now, because it enables new things. What things? Well, it’s hard to say until we have it! If we knew the answer totally, we’d already have those new things!
I think there are some good quirky things pointed out by this article (`match_args` being one of them), but I think it’s also important to think about how enabling new features can be. For example, one could imagine some extensions around type-safety which would wildly improve several of these examples.

At first I was in love with the idea of pattern matching in Python, but this article has put me off a bit, in seeing that it’s not as useful as it is in other languages that I write in (Rust, namely).
In thinking about it and discussing this article with an ex-coworker, it began to remind me of the smart match operator and given/when in Perl, where it was introduced in 5.10 (see https://metacpan.org/release/DAPM/perl-5.10.1/view/pod/perlsyn.pod#Switch-statements and the following section “Smart matching in detail” for how it worked), and then in four short years after it was introduced it was marked as experimental again in 5.18.
Obviously, Perl’s smart match is a lot more surprising than Python’s pattern matching, but this article does highlight that it’s a complex feature with some surprising syntax that breaks expectations. This makes me wonder if it’ll go the way of Perl’s smart match/given/when as a result.
I’m wondering why nobody has mentioned the runtime cost of such a construct in a fully interpreted, dynamically typed language such as Python.
Probably because it is very minuscule. After all, you work through the cases in the same order as you would go through `if-elif-else`, with about the same amount of checks, so the runtime cost is similar to that of the unpacking that already exists in the language anyway and is pretty popular.

If anything, maybe it would unlock some optimizations for certain match patterns, eg literals?