Is this a problem with kids these days? Do they not know how to fill in the config.php?
No, we really don’t, and we have no intention to learn.
I think there is a bit of irony about the author complaining that docker is too easy to use and as a result, there is this never ending proliferation of suboptimal configs and containers pushed by hobbyists and companies alike.
Guess what? If it wasn’t this easy, a lot of people who are able to do this stuff now would not be able to do it at all. They would not be able to figure out Linux packaging, 3 different init systems, how to manage and debug processes manually, etc.
They would have given up already.
So I always get a twinge of elitism when I read these kind of posts. For all of its problems and shortcomings, I think docker became so popular because it allows this “doing without understanding” to grow and thrive. I think that’s a good thing, because it makes server software and server ownership much more accessible to more people, even without app purveyors doing an admirable job w/ their container builds and docs.
Counter example: I knew how to do all of this stuff before OCI containers came along, and I absolutely think 90% of the time, OCI containers are the right solution.
I’ve never met anyone who already knew how to do all the *nix stuff before docker and also loved docker after it became popular.
Pleased to meet you! I suppose I may only do fairly basic things with docker, but as a *nix lover of some eons, I love it as a tool. I can apply all my usual knowledge to set up a runtime for an otherwise picky app and it can merrily run as if it is on my system without mucking up things for everyone else. To me this has been nothing short of life changing. No more install scripts messing with things I’d rather they not. A tiny bit of data separation. A degree of portability that has proved more handy than I first anticipated. I even use it in production and all the gripes about configuration management seem overblown to me. Sure, environment management is nontrivial, but for me docker has simplified it.
If the OCI container distributed by the project is not great, it’s likely that if the project tried to build a package (Debian? RPM?) or had to have clear instructions for building from sources, these would not be great either!
The existence of only an OCI container as an artifact is Bayesian evidence for an immature project. This is not because containers are a poor solution. It is because if a project is immature, it is more likely to reach for the “obvious” solution. This solution, at this point, is distributing an OCI container. Any project that has clear instructions for building from sources will be mature because these decisions tend to be made by mature projects.
This doesn’t mean that distributing as a container is not the right solution 90% of the time. In general, there are better signals for a project’s maturity, so using the (weak) Bayesian signal of the distribution mechanism is rarely worth it.
The premises are true, but do not imply any of the conclusions the author thinks they do.
I’d say what most working mathematicians mean is “any system which is equipotent and equiconsistent with reasoning by way of a Hilbert-style predicate logic system together with the ZFC axioms”. Different mathematicians will have different preferences for the system they actually use, but I don’t know of any which are not equipotent and equiconsistent with ZFC + predicate logic.
This rules out finitary/intuitionist stuff, which is not super popular, and does rule out non-explosive things. Even those can be formulated in the language of ZFC, though, so in practice it’s not a problem.
The technical meaning of “explosive” is that “everything follows from a contradiction”, so if you ever hit a contradiction, the whole thing explodes. In general, it doesn’t matter, but in some AI applications, non-explosive logics are needed to prevent a failure cascade.
ETCS plus Replacement is equiconsistent with ZF. Note that Choice can be optionally added to both systems.
Those of us who work with computers are forced to limit ourselves to that which is realizable, and Choice is not realizable. This means that working with ZFC necessarily entails not working with what computers actually compute. This is all beside the point that intuitionistic approaches have finer detail than classical approaches, as stressed by Bauer 2017.
Exactly what I said: I had to work with several à la carte frameworks that people threw together in Flask and I also disagree with its object/request model.
I’d say: Just use Django because that’s what they’re trying to do anyway and learning Django will have much more payoffs both for the person making the things as for the people maintaining it. Django does scale down reasonably well too so I wouldn’t dismiss it for a small website.
Otherwise for API based things it seems FastAPI is kinda becoming the new default.
I, myself, am hesitant on using anything that the creator of FastAPI is behind. That’s not because the projects themselves are bad, but because he seems to be the BDFL for all of them and is seemingly spread very thin between maintaining all of them, which is to be expected, but doesn’t seem all that doable in the long run while keeping all of the projects up to snuff.
Same “just write code, no need to ‘create a project’ and other things that feel heavy weight” that Flask has, without any of the weird stuff they pull (like global requests, or having external people maintain things which are critical to the framework).
Every time I see this in Flask code my entire mind and body just refuse to deal with it. Just cannot fathom how this became a thing. It’s such a PHP-vibe which… maybe is exactly how it became a thing.
For a library I was working in, I wanted to show case how to use it from Flask and wrote a minimal demo app. Even that was almost more Flask than I can bear.
This was a very well written example of quite a common genre of article: find a strawman claim that AI will replace humans. Then dive deep into some experiments with LLMs that show that claim not to be true… while incidentally demonstrating an astonishing array of capabilities that would have been seen as completely stunning just a couple of years ago.
To this author’s credit, unlike many other instances of this genre they didn’t conclude with “…. and therefore LLMs are a waste of time”.
I’m not sure I understand what you mean by “strawman” here. Matt Welsh seems like a real person who really did claim that “In situations where one needs a “simple” program […] those programs will, themselves, be generated by an AI rather than coded by hand.” I don’t think the author “intentionally misrepresented [the] proposition” (as per the definition of strawman) that Welsh made.
At best, if there’s any misrepresentation, it is that the word ladder finding program is not “simple”? I think even if you disagree that it is “simple” according to Welsh, a charitable reading would say that this is an unintentional misrepresentation, since I could not easily find Welsh’s definition of “simple” which disagrees with word ladder being “simple”. If anything, Welsh’s abstract is that “[t]he end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit” – unless “most of us” are people who cannot write programs more than 200 lines, word ladder would fall squarely inside of the definition.
Welsh, according to my quick review, holds a PhD in CS from University of California, Berkeley, a respected university and far from a diploma mill. While not being a specialist in AI, this doesn’t even point to any cherry picking on behalf of the writer of the article: he argued with a claim made in earnest with a professional in the field and published in a journal with over 50 years of history and an impact factor of over 20.
I think disputing claims made by CS PhDs in respected journals should not qualify as “arguing with a strawman”.
That’s fair: calling this a “strawman” doesn’t hold up to the definition of strawman.
The rest of my comment still stands. There are plenty of articles that start from what I think is a bad proposition - that AI is an effective replacement for human intellectual labour - and then argue against that in a way that also highlights quite how incredible the capabilities of modern LLMs are.
Welsh is an accomplished person, but “the End of Programming” is obviously using his academic credibility to promote a viewpoint beneficial to his AI coding startup.
It’s a little disappointing to see the obvious “troll” title and conflict of interest.
From quick inspection, it’s obvious that both sides are four degree polynomials, so checking that they agree on five elements is enough. (Sum(n degree polynomial) is n+1 degree polynomial, nice and fully general fact – this is what Babbage’s difference engine was based on.)
It’s mostly a Big-O argument – if you take the second half of the sum, you can bound it from below by constant times x**4, and if you take the whole sum, you can bound it from above by noting every element is <= x**3, so it can’t be more than x**4.
Also, you can do a plain differences-based argument, and use the fact that (x+1)**n - x**n is a degree n-1 polynomial by expanding the binomial.
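To make the five-point check concrete, here is a quick sketch in Python (assuming the identity under discussion is the classic sum-of-cubes formula, which matches the x**3 / x**4 bounds above):

```python
# Both sides are degree-4 polynomials in n, so agreeing on 5 points proves equality.
def sum_of_cubes(n):
    return sum(i**3 for i in range(1, n + 1))

def squared_triangle(n):
    return (n * (n + 1) // 2) ** 2

assert all(sum_of_cubes(n) == squared_triangle(n) for n in range(5))

# Babbage-style check: 4th finite differences of a degree-4 polynomial are constant.
vals = [squared_triangle(n) for n in range(10)]
for _ in range(4):
    vals = [b - a for a, b in zip(vals, vals[1:])]
assert len(set(vals)) == 1
```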
I would have added another addendum. Usually, the probability is small (if it’s too big, it doesn’t matter how big it is – collisions are too likely). In that case, the log-odds of the probability is approximated by log(p), so:
log2(k^2 / 2N) = 2*log2(k) - log2(N) - 1
This is the “strength of belief” (or, since it is negative, “strength of disbelief” or “confidence that it won’t happen”, measured in bits).
For example, if you use SHA-256 (log(N)=256) and you have about a trillion docs (log(k) = 40), then the confidence that there won’t be a collision is 257 - 80 = 177 bits. On the other hand, if you’re worried about colliding UUID v4 (that’s not a hash, but the math doesn’t care), the space has 122 random bits, so it’s 123 - 80 = 43. Still pretty good!
(Credit for thinking about confidence as log odds goes to ET Jaynes “Probability Theory: The Logic of Science” where he suggested decibels – but since most people here are software engineers rather than sound engineers, bits are more intuitive units than decibels)
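A small sketch of that calculation (the function name is mine, not from any library):

```python
import math

def no_collision_confidence_bits(space_bits, n_items):
    """Bits of confidence that n_items random draws from a 2**space_bits
    space have no collision, via the birthday approximation p ~= k**2 / (2*N)."""
    log2_p = 2 * math.log2(n_items) - space_bits - 1
    return -log2_p

print(no_collision_confidence_bits(256, 2**40))  # SHA-256, ~a trillion docs: 177.0
print(no_collision_confidence_bits(122, 2**40))  # UUID v4's random bits: 43.0
```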
The “mean” example falls flat – if numpy didn’t support “mean” itself, you could use the fact that it supports “sum” and then do the division operation. It would be slightly slower, of course, but you would still get the benefits from vectorization.
This is one reason why “it’s rare”.
It might make sense to add this to “ways to work around”: see if your operation can be broken into multiple supported vectorized operations. For example, stddev can be computed from sums, elementwise arithmetic, and a square root.
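For instance, a sketch of that decomposition (assuming sum, elementwise arithmetic, and sqrt are among the supported vectorized operations):

```python
import numpy as np

def stddev(x):
    n = x.size
    mean = x.sum() / n                  # mean from sum + division
    var = ((x - mean) ** 2).sum() / n   # variance from elementwise ops + sum
    return np.sqrt(var)

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.isclose(stddev(x), np.std(x))
```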
Edit: Thanks for the feedback! I flipped the order of the first and second problems and updated the Numba example, and I think now the narrative makes a lot more sense.
I used it heavily years ago and I hate it :)
The deferred/callback/errback mechanism is very powerful, but the code readability and code maintenance is terrible.
I find asyncio easier to maintain and I really love Trio.
Odds are, if it’s a networking or network-related problem, and someone else has encountered it in the past, Twisted will already have the solution either in Twisted itself or in its ecosystem.
Very simple syntax, but I’m not sure I like these constructs. Unless I remember the specification, my knee-jerk reaction is to be surprised and suspicious upon seeing that kind of code. Is it going to be executed or not? Is it a merge conflict resolution gone bad? Especially if there are neighbouring if/else blocks with almost correct indentation.
Although, I must admit the for/else variant is great for simplifying searches in iterables :)
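A tiny sketch of that search pattern:

```python
def first_negative(xs):
    # for/else: the else clause runs only if the loop was not broken out of
    for x in xs:
        if x < 0:
            break
    else:
        return None  # search failed (including the empty case)
    return x

assert first_negative([3, -1, 2]) == -1
assert first_negative([1, 2, 3]) is None
```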
This is one of those cases where I think the Python solution is elegant but impractical. I find it much harder to reason about (and consequently I strongly suspect it is more error prone) than, for example, the Go solution to the same problem, which is a labeled continue (continue LABEL). That said, I recently screwed up some Go code by typing goto instead of continue, thereby creating a surprising infinite loop.
I predict that your knee-jerk surprise and suspicion will give way to mild feelings of approval as you get more comfortable with Python idioms. Other languages I’ve known have much stranger idioms, which also become ‘intuitive’ with familiarity. Language is a wonderful thing.
In my opinion, the most basic requirement of someone working on Python code is to know Python’s syntax. And in fact that goes for any programming language. One of the best things about Python is that it has really simple syntax for a language of its type (nothing approaching Lisp’s simplicity). It requires only a single token of lookahead to parse, which I think helps humans and computers. There are no “garden path” syntactic constructs where you only realise half way along a line that you need to go back to the beginning and start parsing that line again, like you can get in C++:
x * y = z;
So when you say ‘Unless I remember the specification’, it makes me wonder whether you actually know the language properly or whether you’re just cargo-culting code based on some sort of surface-level knowledge of the language that you’ve gained only by reading a lot of code. Python’s grammar is right there in the documentation, all these constructs are well-documented and I remember them being covered in the first Python book I ever read, so there’s really no excuse not to be familiar with them.
I know a lot of people aren’t really fond of Django’s template language, but its for tag uses an optional empty block instead of an else block to clarify that it’s only for things you want to do when the thing you tried to iterate was empty.
While it’s true that Python’s for/else is not quite identical to the Django template language’s for/empty, handling an empty iterable is the primary use case I’ve seen in the real world for people using Python’s for/else, and avoids some of the confusion of what Python’s for/else actually does (which is, I’d argue, a poor fit for else).
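A minimal illustration of the confusion being described: Python’s else runs whenever the loop finishes without break, so it is not an “empty” clause:

```python
def else_ran(items):
    ran = False
    for _ in items:
        pass
    else:
        ran = True  # reached for empty AND non-empty iterables alike
    return ran

assert else_ran([]) is True
assert else_ran([1, 2, 3]) is True  # unlike Django's {% empty %}, which only fires when empty
```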
At some point, however, awk’s limitations start to show. It has no real concept of breaking files into modules, it lacks quality error reporting, and it’s missing other things that are now considered fundamentals of how a language works.
GAWK is not portable. You could possibly say “neither is Python”, but I would bet that Python is more available than GAWK. And even if it isn’t, if you’re going to have to install a package anyway, wouldn’t a butcher knife be better than a plastic kid’s knife?
I like AWK, I have used it for many years and would consider myself an AWK expert. But when the question is “what is the right tool for the job?”, the answer is rarely AWK or GAWK.
Came here to say that very thing. The syntax maps more precisely and the idioms fit more completely thanks to Perl’s history for this almost exact purpose. The right tool for the right job.
What do you mean “gawk is not portable”? Name one platform that has awk and python that does not have gawk?
The point is you can either spend your time rewriting or you can just keep using the same code with extensions.
And if you really really want to rewrite, Perl is a lot closer. This whole article just seems like someone who has arbitrarily decided that python is a “real” language so it’s inherently better to use it.
I suppose you mean that gawk features are not portable among the default awk on different OSes, so you shouldn’t use them and pretend that the script will work on any awk. That is totally true.
But the OP likely means that you can use gawk explicitly, treating it as a separate language. Gawk is available on almost all Unix OSes, so it is portable.
My point is that if you’re going to have to install a package, you might as well install $proper_programming_language instead of AWK. Unless what you need can be easily done with GAWK alone, it’s not really worth using.
Keep in mind that even with GAWK, proper support for indexed arrays is not available, nor first-class functions; private variables in AWK are footguns; there’s no HTTP client, no JSON, etc.
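For illustration, two of those gaps covered by Python’s standard library alone (a toy sketch, not a dig at awk):

```python
import json

# JSON support out of the box
record = json.loads('{"tool": "gawk", "has_json": false}')
assert record["has_json"] is False

# first-class functions: pass and return them like any other value
def apply_twice(f, x):
    return f(f(x))

assert apply_twice(lambda n: n + 1, 0) == 2
```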
Explicit interfaces are an application of the “Explicit is better than implicit,” “There should be only one obvious way to do it,” “Practicality beats purity” and “Now is better than never” principles.
What are the advantages versus something like typing/mypy and Protocol? Mypy has the benefit of not needing the verify.verifyObject thing or explicitly defining who implements what (@implementer).
If you’re importing from shapes to register implementations of get_area, and then the maintainer of shapes decides to implement a “get_area friendly” new shape class, won’t that cause an infinite import loop? Because area_calculation imports shapes and then shapes imports area_calculation?
One nice thing about doing things this way is that if someone else writes a new shape that is intended to play well with our code, they can implement get_area themselves.
I get why you might use singledispatch to add methods to classes someone else “owns”, but if you were implementing a new class why wouldn’t you just add the methods normally to the class, rather than going through the singledispatch route?
So, why not simply extend the class directly?
For example, given a Circle class:

class Circle(Circle):
    def area(self):
        ...
Older Circle instances are not affected (and hence will not do random things if you are redefining a method) and newer instances will answer to shape.area().
So… you think that it’s “simpler” to have some Circle objects raise AttributeError when you call area, and some not? Also, where do you plan to put this code? In the library that’s not yours? How are you going to make sure that all parts of your code generate only newer circles?
I’m not sure the word “simply” means what you think it means.
I was not being dismissive of your post. However, you haven’t explained why one should use the specified library, or what the pitfalls are of the approach you dismissed with “While it is possible to reach into a class and add a method, this is a bad idea: nobody expects their class to grow new methods, and things might break in weird ways.”
The approach I outlined is simple, with the caveat I outlined, and I do not think that is a wrong meaning of simple.
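For reference, a minimal sketch of the singledispatch route being debated (class and function names are illustrative, not taken from the article):

```python
from functools import singledispatch
from dataclasses import dataclass
import math

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

@singledispatch
def get_area(shape):
    raise NotImplementedError(f"no area rule for {type(shape).__name__}")

@get_area.register
def _(shape: Circle):
    return math.pi * shape.radius ** 2

@get_area.register
def _(shape: Square):
    return shape.side ** 2

# existing instances are covered too: dispatch happens per call, per type
assert get_area(Square(3.0)) == 9.0
```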
It’s an excellent point and it has nothing to do with Climate Change.
Let’s do a mental experiment. Switch the topic. Let’s say you’re concerned about world hunger.
What can a developer do about world hunger? Well, if you wrote an app that allowed people to sell hot dogs at 20% less than they used to, that’d help. How? Because the incremental drop in hot dog prices would affect the entire food chain to some degree, thereby making food cheaper for folks who can’t afford it.
But that doesn’t feel right, does it? Even if overall, your hot dog app actually did more than anything else you could do for world hunger, it doesn’t feel like you’re directly doing something about it.
So, what do you want? What part of this question is about doing stuff, what part is about how you feel about doing stuff, and what part is about making a difference? These are three different issues. Whatever your answer on these issues, you should have some sort of measurable test to see how well you’re doing. Write the test, then make the test pass.
That’s a nice case where market forces actually align with ethics. It happens more often than people realize, since you don’t notice things when they work right. But it’s ultimately a coincidence.
But what about those cases where it doesn’t align? A lot of environmental impact doesn’t show up on the balance sheets, because the cost has been externalized. The obvious example is dumping toxic waste in the river, which might not even affect the employees if they happen to be upstream, but even things like creating a toxic social norm and inducing substance addiction have significant costs that don’t immediately affect the business.
Also, food distribution is pretty well optimized already, except for harmful cultural norms that induce lots of food waste. Those can’t be fixed with app development.
Yes, I deliberately tried to keep this very simple as the point was that the complexities expand rather quickly. Folks are welcome to add ethics and other concepts as desired.
This is a very good question because it’s something a lot of people want to know about: I feel strongly about X. What can I do? Far too often the answer given is some sort of self-serving version of “join my cause!” as if everything has already been decided and a few tweets and some donations every month is all that’s needed. Do this, then go on about your life. No further thought required.
It’s an especially powerful question because it needs some decomposition. If you don’t know why you feel strongly about X, in concrete terms, you’re rather unlikely to feel as if you’ve made any progress with it, no matter what you do. How it should be decomposed is up to each person, of course. But it needs work.
Small suggestion: maybe having a “secret santa” as the main example is not the ideal option? For people who haven’t done secret santa, they don’t have a mental model that helps them read through the example and relate it to their lived experience.
Almost nothing interesting is purely culturally neutral and not ableist for someone.
Some people might actually find it easier to deal with a model that teaches them something about a culture they don’t know - certainly it’s a test of the clarity of exposition if those unfamiliar can learn from it.
Thanks for the suggestion :) We hoped this would be a relatively understandable example for the holiday season, but it’s true that if a reader hasn’t played that game before they may be a bit lost. I think we meant to include a link to a short explanation of the game, but forgot that!
We are still trying to think of an ideal, short, but illuminating example for after the holidays, but we haven’t struck upon the ideal one yet.
Do you have anything in mind that you think might be a great fit?
Maybe not tie it to “holidays”? Mine is already over, for example.
You could do something ambiguously holiday-like with a potluck dinner for friends? That is a relatively common experience, and lots of people do variations on potlucks around this time. It doesn’t have the recursion of secret santa, but it can have its own complexities: for example, making sure that there is at least one entree, that no more than one person brings disposable dishes, etc.
Linking to a description of secret santa is not super-helpful? I don’t particularly care to read about the intricacies of a social custom I’ll never participate in just to learn about a new formal methods syntax.
It’s socially acceptable here. See for example last week, on Lobsters.
That’s a good idea! We’ll keep this in mind. Thanks again :)
I’ve never met anyone who already knew how to do all the *nix stuff before docker and also loved docker after it became popular.
I think it’s a generational shift: those who learned Linux server administration through docker and those for whom docker will always be an “extra” thing on top of an already solid sys eng foundation.
No, we really don’t, and we have no intention to learn.
I think there is a bit of irony in the author complaining that docker is too easy to use and that, as a result, there is this never-ending proliferation of suboptimal configs and containers pushed by hobbyists and companies alike.
Guess what? If it wasn’t this easy, a lot of people who are able to do this stuff now would not be able to do it at all. They would not be able to figure out Linux packaging, 3 different init systems, how to manage and debug processes manually, etc.
They would have given up already.
So I always get a twinge of elitism when I read these kinds of posts. For all of its problems and shortcomings, I think docker became so popular because it allows this “doing without understanding” to grow and thrive. I think that’s a good thing, because it makes server software and server ownership much more accessible to more people, even when app purveyors don’t do an admirable job w/ their container builds and docs.
Counterexample: I knew how to do all of this stuff before OCI containers came along, and I absolutely think 90% of the time, OCI containers are the right solution.
Pleased to meet you! I suppose I may only do fairly basic things with docker, but as a *nix lover of some eons, I love it as a tool. I can apply all my usual knowledge to set up a runtime for an otherwise picky app and it can merrily run as if it is on my system without mucking up things for everyone else. To me this has been nothing short of life changing. No more install scripts messing with things I’d rather they not. A tiny bit of data separation. A degree of portability that has proved more handy than I first anticipated. I even use it in production and all the gripes about configuration management seem overblown to me. Sure, environment management is nontrivial, but for me docker has simplified it.
If the OCI container distributed by the project is not great, it’s likely that if the project tried to build a package (Debian? RPM?) or had to have clear instructions for building from sources, these would not be great either!
The existence of only an OCI container as an artifact is Bayesian evidence for an immature project. This is not because containers are a poor solution. It is because an immature project is more likely to reach for the “obvious” solution, which, at this point, is distributing an OCI container. A project that has clear instructions for building from sources is likely to be mature, because those decisions tend to be made by mature projects.
This doesn’t mean that distributing as a container is not the right solution 90% of the time. In general, there are better signals for a project’s maturity, so using the (weak) Bayesian signal of the distribution mechanism is rarely worth it.
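With made-up numbers (every prior and likelihood here is purely illustrative), the update being described looks like:

```python
# P(immature), and P(container-only | maturity) -- invented for illustration
p_immature = 0.5
p_evidence_given_immature = 0.6   # immature projects often ship only a container
p_evidence_given_mature = 0.3     # mature projects sometimes do too

# Bayes' rule: P(immature | container-only)
p_evidence = (p_evidence_given_immature * p_immature
              + p_evidence_given_mature * (1 - p_immature))
posterior = p_evidence_given_immature * p_immature / p_evidence
# posterior == 2/3: a nudge from 0.5 toward "immature", i.e. weak evidence
```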
The premises are true, but do not imply any of the conclusions the author thinks they do.
I’d say what most working mathematicians mean is “any system which is equipotent and equiconsistent with reasoning by way of a Hilbert-style predicate logic system together with the ZFC axioms”. Different mathematicians will have different preferences for the system they actually use, but I don’t know of any which are not equipotent and equiconsistent with ZFC + predicate logic.
This rules out finitary/intuitionist stuff, which is not super popular, and does rule out non-explosive things. Even those can be formulated in the language of ZFC, though, so in practice it’s not a problem.
I’d say intuitionism is quite popular among people looking into math foundations.
That’s an exciting sounding term. What do you mean by this, please? :)
I believe GP referred to what is officially known as “paraconsistent logics”.
Thanks, this was the term that I needed to look up more details.
The technical meaning of “explosive” is that “everything follows from a contradiction”, so if you ever hit a contradiction, the whole thing explodes. In general, it doesn’t matter, but in some AI applications, non-explosive logics are needed to prevent a failure cascade.
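The standard derivation of explosion (ex falso), for an arbitrary $Q$:

```latex
\begin{align*}
&1.\; P \land \lnot P && \text{(the contradiction)} \\
&2.\; P               && \text{(conjunction elimination, 1)} \\
&3.\; P \lor Q        && \text{(disjunction introduction, 2)} \\
&4.\; \lnot P         && \text{(conjunction elimination, 1)} \\
&5.\; Q               && \text{(disjunctive syllogism, 3 and 4)}
\end{align*}
```

Paraconsistent (“non-explosive”) logics typically block step 5 by rejecting disjunctive syllogism in the presence of contradictions.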
ETCS plus Replacement is equiconsistent with ZF. Note that Choice can be optionally added to both systems.
Those of us who work with computers are forced to limit ourselves to that which is realizable, and Choice is not realizable. This means that working with ZFC necessarily entails not working with what computers actually compute. This is all beside the point that intuitionistic approaches have finer detail than classical approaches, as stressed by Bauer 2017.
I thought this was finally the intervention where we were going to tell people to please finally stop using Flask!
I’ve seen one too many project where people oozed together their own framework with whatever they could find on pypi.
I agree with this sentiment, but I’m curious why you feel that way about it? Flask solves a use-case for many a simple website.
Exactly what I said: I had to work with several à la carte frameworks that people threw together in Flask and I also disagree with its object/request model.
What would you recommend in place of Flask?
I’d say: Just use Django, because that’s what they’re trying to do anyway, and learning Django will have much more payoff both for the person making the thing and for the people maintaining it. Django scales down reasonably well too, so I wouldn’t dismiss it for a small website.
Otherwise for API based things it seems FastAPI is kinda becoming the new default.
I, myself, am hesitant to use anything that the creator of FastAPI is behind. That’s not because the projects themselves are bad, but because he seems to be the BDFL for all of them and is seemingly spread very thin maintaining all of them, which is to be expected but doesn’t seem all that doable in the long run while keeping all of the projects up to snuff.
Pyramid.
Same “just write code, no need to ‘create a project’ and other things that feel heavy weight” that Flask has, without any of the weird stuff they pull (like global requests, or having external people maintain things which are critical to the framework).
Every time I see this in Flask code my entire mind and body just refuse to deal with it. Just cannot fathom how this became a thing. It’s such a PHP-vibe which… maybe is exactly how it became a thing.
Pretty much same.
For a library I was working on, I wanted to showcase how to use it from Flask, so I wrote a minimal demo app. Even that was almost more Flask than I can bear.
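For readers who haven’t seen it, this is the pattern being discussed: Flask’s request is a context-local that simply appears in scope, rather than being passed to the view (minimal sketch):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/hello")
def hello():
    # `request` is imported module-level state, not a parameter of this function
    name = request.args.get("name", "world")
    return f"hello {name}"
```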
This was a very well written example of quite a common genre of article: find a strawman claim that AI will replace humans. Then dive deep into some experiments with LLMs that show that claim not to be true… while incidentally demonstrating an astonishing array of capabilities that would have been seen as completely stunning just a couple of years ago.
To this author’s credit, unlike many other instances of this genre they didn’t conclude with “…. and therefore LLMs are a waste of time”.
I’m not sure I understand what you mean by “strawman” here. Matt Welsh seems like a real person who really did claim that “In situations where one needs a “simple” program […] those programs will, themselves, be generated by an AI rather than coded by hand.” I don’t think the author “intentionally misrepresented [the] proposition” (as per the definition of strawman) that Welsh made.
At best, if there’s any misrepresentation, it is that the word ladder finding program is not “simple”? I think even if you disagree that it is “simple” according to Welsh, a charitable reading would say that this is an unintentional misrepresentation, since I could not easily find Welsh’s definition of “simple” which disagrees with word ladder being “simple”. If anything, Welsh’s abstract is that “[t]he end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit” – unless “most of us” are people who cannot write programs more than 200 lines, word ladder would fall squarely inside of the definition.
Welsh, according to my quick review, holds a PhD in CS from the University of California, Berkeley, a respected university and far from a diploma mill. While he is not a specialist in AI, this doesn’t even point to any cherry-picking on behalf of the writer of the article: he argued with a claim made in earnest by a professional in the field and published in a journal with over 50 years of history and an impact factor of over 20.
I think disputing claims made by CS PhDs in respected journals should not qualify as “arguing with a strawman”.
That’s fair: calling this a “strawman” doesn’t hold up to the definition of strawman.
The rest of my comment still stands. There are plenty of articles that start from what I think is a bad proposition - that AI is an effective replacement for human intellectual labour - and then argue against that in a way that also highlights quite how incredible the capabilities of modern LLMs are.
Yes good point. FWIW the article by Welsh was discussed here
https://lobste.rs/s/xpgorl/end_programming
Welsh is an accomplished person, but “the End of Programming” is obviously using his academic credibility to promote a viewpoint beneficial to his AI coding startup.
It’s a little disappointing to see the obvious “troll” title and conflict of interest.
From quick inspection, it’s obvious that both sides are degree-four polynomials, so checking that they agree on five elements is enough. (The sum of a degree-n polynomial is a degree-(n+1) polynomial – a nice and fully general fact; this is what Babbage’s difference engine was based on.)
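A quick sanity check of that argument in Python, assuming the identity under discussion is the classic sum-of-cubes formula:

```python
# Both sides of the (assumed) identity sum(i**3 for i=1..n) == (n*(n+1)/2)**2
# are degree-4 polynomials in n, so agreeing on five points proves equality.
def lhs(n):
    return sum(i**3 for i in range(1, n + 1))

def rhs(n):
    return (n * (n + 1) // 2) ** 2

# n = 0..4 gives five distinct points, which is enough for degree 4:
assert all(lhs(n) == rhs(n) for n in range(5))
```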
Thank you, I love this.
Do you know that the sum of cubes is a four degree polynomial because you know Faulhaber’s formula or did you work it out some other way?
the sum of a deg n poly is always a deg n+1 poly
It’s mostly a Big-O argument – if you take the second half of the sum, you can bound it from below by a constant times `x**4`, and if you take the whole sum, you can bound it from above by noting every element is `<= x**3`, so it can’t be more than `x**4`.

Also, you can do a plain differences-based argument, and use the fact that `(x+1)**n - x**n` is a degree n-1 polynomial, by expanding the binomial.

I would have added another addendum. Usually, the probability is small (if it’s too big, it doesn’t matter how big it is – collisions are too likely). In that case, the log-odds of the probability is approximated by log(p), so:
log(k^2 / 2N) = 2log(k) - log(N) - 1
This is the “strength of belief” (or, since it is negative, “strength of disbelief” or “confidence that it won’t happen”, measured in bits).
For example, if you use SHA-256 (log(N) = 256) and you have a few billion docs (log(k) = 40), then the confidence that there won’t be a collision is 257 - 80 = 177 bits. On the other hand, if you’re worried about colliding UUID v4 (that’s not a hash, but the math doesn’t care), you have log(N) = 122 bits, so 123 - 80 = 43. Still pretty good!
(Credit for thinking about confidence as log odds goes to ET Jaynes “Probability Theory: The Logic of Science” where he suggested decibels – but since most people here are software engineers rather than sound engineers, bits are more intuitive units than decibels)
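As a sketch of that arithmetic (the function name here is mine, not from any library):

```python
# Bits of "confidence there won't be a collision" among k random values
# drawn from a space of size N, using the k**2 / (2*N) birthday
# approximation above: -log2(k**2 / (2*N)) = log2(N) + 1 - 2*log2(k)
def disbelief_bits(log2_k, log2_n):
    return log2_n + 1 - 2 * log2_k

print(disbelief_bits(40, 256))  # SHA-256 example above -> 177
print(disbelief_bits(40, 122))  # UUID v4 example above -> 43
```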
Oooh. 2log(k)-log(N)-1 perfectly explains the birthday problem bound too. Thanks
The “mean” example falls flat – if numpy didn’t support “mean” itself, you could use the fact that it supports “sum” and then do the division operation. It would be slightly slower, of course, but you would still get the benefits from vectorization.
This is one reason why “it’s rare”.
It might make sense to add this to “ways to work around”: see if your operation can be broken into multiple supported vectorized operation. For example, if you needed to calculate stddev, you could break it down by:
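For instance, a sketch of that decomposition using plain Python builtins standing in for the vectorized calls (with a NumPy array, `xs.sum()` and elementwise `xs * xs` would be the vectorized equivalents):

```python
# Population standard deviation built only from "sum"-style primitives:
# Var(X) = E[X**2] - E[X]**2, then take the square root.
def stddev(xs):
    n = len(xs)
    mean = sum(xs) / n                    # vectorized sum, scalar divide
    mean_sq = sum(x * x for x in xs) / n  # elementwise square, then sum
    return (mean_sq - mean * mean) ** 0.5

print(stddev([1, 2, 3, 4]))  # about 1.118
```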
Edit: Thanks for the feedback! I flipped the order of the first and second problems and updated the Numba example, and I think now the narrative makes a lot more sense.
Thanks – this does make more sense (especially the “not even one copy needed” advantage of numba).
Follow-up question: at the top, you link to an article covering SIMD vectorization in recent NumPy versions. Does Numba JIT take advantage of SIMD?
My understanding is that it relies on LLVM to do the heavy lifting, so it will get SIMD in cases where LLVM is smart enough to auto-vectorize. So, sometimes, yes. See https://numba.readthedocs.io/en/stable/user/faq.html?highlight=simd#does-numba-vectorize-array-computations-simd and the following question.
Why still using twisted in 2020 when asyncio is part of the standard library or you can use more advanced async libraries like Trio or Curio?
What’s wrong with twisted? It’s battle tested and works well.
I used it heavily years ago and I hate it :) The deferred/callback/errback mechanism is very powerful, but the code readability and maintainability are terrible.
I find asyncio easier to maintain and I really love Trio.
For a while now you can use async/await with twisted. Makes the code a lot more readable.
And the article shows that… :)
Odds are, if it’s a networking or network-related problem, and someone else has encountered it in the past, Twisted will already have the solution either in Twisted itself or in its ecosystem.
That’s pretty hard to beat.
Very simple syntax, but I’m not sure I like these constructs. Unless I remember the specification, my knee-jerk reaction is to be surprised and suspicious upon seeing that kind of code. Is it going to be executed or not? Is it a merge conflict resolution gone bad? Especially if there are neighbouring if/else blocks with almost correct indentation.
Although, I must admit the for/else variant is great for simplifying searches in iterables :)
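For example, a search that needs a “not found” branch (a minimal sketch with made-up names):

```python
def first_negative(xs):
    # The else clause runs only if the loop finished without hitting break.
    for x in xs:
        if x < 0:
            break
    else:
        return None  # nothing found (also covers the empty iterable)
    return x

print(first_negative([3, 1, -4, 1]))  # -> -4
print(first_negative([3, 1, 4, 1]))   # -> None
```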
This is one of those cases where I think the Python solution is elegant but impractical. I find it much harder to reason about (and consequently I strongly suspect it is more error prone) than (for example) the Go solution to the same problem, which is to `continue LABEL` (example). That said, I recently screwed up some Go code by typing `goto` instead of `continue`, thereby creating a surprising infinite loop.

I predict that your knee-jerk surprise and suspicion will give way to mild feelings of approval as you get more comfortable with Python idioms. Other languages I’ve known have much stranger idioms, which also become ‘intuitive’ with familiarity. Language is a wonderful thing.
Is it an idiom (as in, commonly used pattern)? I’ve known about this for years, but never used or seen it in any code.
In my opinion, the most basic requirement of someone working on Python code is to know Python’s syntax. And in fact that goes for any programming language. One of the best things about Python is that it has really simple syntax for a language of its type (nothing approaching Lisp’s simplicity). It requires only a single token of lookahead to parse, which I think helps humans and computers. There are no “garden path” syntactic constructs where you only realise half way along a line that you need to go back to the beginning and start parsing that line again, like you can get in C++:
So when you say ‘Unless I remember the specification’, it makes me wonder whether you actually know the language properly or whether you’re just cargo-culting code based on some sort of surface-level knowledge of the language that you’ve gained only by reading a lot of code. Python’s grammar is right there in the documentation, all these constructs are well-documented and I remember them being covered in the first Python book I ever read, so there’s really no excuse not to be familiar with them.
I know a lot of people aren’t really fond of Django’s template language, but its `for` tag uses an optional `empty` block instead of an `else` block to clarify that it’s only for things you want to do when the thing you tried to iterate was empty.

…which is not the same as what for/else does.
While it’s true that Python’s `for`/`else` is not quite identical to the Django template language’s `for`/`empty`, handling an empty iterable is the primary use case I’ve seen in the real world for people using Python’s `for`/`else`, and it avoids some of the confusion about what Python’s `for`/`else` actually does (which is, I’d argue, a poor fit for `else`).

A lot of template engines support this. Actually, I think pretty much all of the ones I’ve worked with over the years, across a variety of languages.
I once wrote a much smaller version as a tutorial for writing Prometheus exporters :)
https://opensource.com/article/19/4/weather-python-prometheus
Pretty cool! I also instrumented the service itself: https://weather.gsc.io/metrics
Gawk has all of these. Don’t port anything.
GAWK is not portable. You could possibly say “neither is Python”, but I would bet that Python is more available than GAWK. And even if it isn’t, if you’re going to have to install a package anyway, wouldn’t a butcher knife be better than a plastic kids’ knife?
I like AWK, I have used it for many years and would consider myself an AWK expert. But when the question is “what is the right tool for the job?”, the answer is rarely AWK or GAWK.
The right tool is obviously Perl in this case.
and the tool is a2p since forever.
Came here to say that very thing. The syntax maps more precisely and the idioms fit more completely thanks to Perl’s history for this almost exact purpose. The right tool for the right job.
What do you mean “gawk is not portable”? Name one platform that has awk and python that does not have gawk?
The point is you can either spend your time rewriting or you can just keep using the same code with extensions.
And if you really really want to rewrite, Perl is a lot closer. This whole article just seems like someone who has arbitrarily decided that python is a “real” language so it’s inherently better to use it.
The author blurb has:
Looks like a case of hammer, nail to me.
(and the examples with the `yield`s only convince me more that python is not the better choice)

To be fair, I know far more carpenters with Python hammers than with Awk hammers.
I myself have but a ball peen Awk hammer, compared to my sledge Python hammer. So for really stubborn nails, Python is the better choice for me.
I’ve been using Awk for even longer though.
The story in https://opensource.com/article/19/2/drinking-coffee-awk was in 1996.
Um, Debian, BSD? should I go on?
I suppose you mean that gawk features are not portable among the default awk on different OSes, so you shouldn’t use them and pretend that the script will work on any awk. That is totally true.
But the OP likely means that you can use gawk explicitly, treating it as a separate language. Gawk is available on almost all Unix OSes, so it is portable.
My point is, if you’re going to have to install a package, you might as well install $proper_programming_language instead of AWK. Unless what you need can be easily done with GAWK alone, it’s not really worth using.
Keep in mind that even with GAWK, proper support for indexed arrays is not available, nor first class functions, private variables in AWK are footguns, no HTTP client, no JSON, etc.
This doesn’t seem pythonic? idk why you’d appeal to the zen of python if you’re already skipping it.
Explicit interfaces are a form of the “Explicit is better than implicit,” “There should be only one obvious way to do it,” “Practicality beats purity” and “Now is better than never” principles.
… please google “look before you leap” it’s explicitly rejected by our community.
“Explicitly”
“Rejected”
“Our”
“Community”
Cite?
What are the advantages versus something like `typing`/`mypy` and `Protocol`? Mypy has the benefit that you don’t need to use the `verify.verifyObject` thing, and you explicitly define who implements what (`@implementer`).

You don’t need to use `verifyObject`. You have the ability to.
You can use interfaces with mypy – there’s a mypy plugin which lets you use interfaces as types: https://github.com/Shoobx/mypy-zope
If you’re importing from `shapes` to register implementations of `get_area`, and then the maintainer of shapes decides to implement a “`get_area`-friendly” new shape class, won’t that cause an infinite import loop? Because `area_calculation` imports `shapes` and then `shapes` imports `area_calculation`?

No.
That’s not a very helpful reply.
You’re right.
I get why you might use `singledispatch` to add methods to classes someone else “owns”, but if you were implementing a new class why wouldn’t you just add the methods normally to the class, rather than going through the `singledispatch` route:

This seems a lot more conventional than implementing `get_area` for the new `Ellipse` class in the manner the article suggests?

This way someone can call get_area(shape) without worrying whether the shape is a circle or an ellipse.
Oh 🤦♂️ - I totally misunderstood what `singledispatch` was doing here. Thanks :)

So, why not simply extend the class directly? For example, given a `Circle` class,

Older `Circle` instances are not affected (and hence will not do random things if you are redefining a method) and newer instances will answer to `shape.area()`.

So… you think that it’s “simpler” to have some Circle objects raise an “AttributeError” when you call area, and some not? Also, where do you plan to put this code? In the library that’s not yours? How are you going to make sure that all parts of your code generate only newer circles?
I’m not sure the word “simply” means what you think it means.
I was not being dismissive of your post; however, you haven’t explained why one should use the specified library, or what the pitfalls are of the approach you dismissed as “While it is possible to reach into a class and add a method, this is a bad idea: nobody expects their class to grow new methods, and things might break in weird ways.”
The approach I outlined is simple, with the caveat I outlined, and I do not think that is a wrong meaning of simple.
Missing the essential prequel, “why should a software developer do anything about climate change.”
“I don’t see what’s so important about preserving the environment that sustains my life that I should have to do anything about it.”
It’s an excellent point and it has nothing to do with Climate Change.
Let’s do a mental experiment. Switch the topic. Let’s say you’re concerned about world hunger.
What can a developer do about world hunger? Well, if you wrote an app that allowed people to sell hot dogs at 20% less than they used to, that’d help. How? Because the incremental drop in hot dog prices would affect the entire food chain to some degree, thereby making food cheaper for folks who can’t afford it.
But that doesn’t feel right, does it? Even if overall, your hot dog app actually did more than anything else you could do for world hunger, it doesn’t feel like you’re directly doing something about it.
So, what do you want? What part of this question is about doing stuff, what part is about how you feel about doing stuff, and what part is about making a difference? These are three different issues. Whatever your answer on these issues, you should have some sort of measurable test to see how well you’re doing. Write the test, then make the test pass.
That’s a nice case where market forces actually align with ethics. It happens more often than people think about it, since you don’t notice things when they work right. But it’s ultimately a coincidence.
But what about those cases where it doesn’t align? A lot of environmental impact doesn’t show up on the balance sheets, because the cost has been externalized. The obvious example is dumping toxic waste in the river, which might not even affect the employees if they happen to be upstream, but even things like creating a toxic social norm and inducing substance addiction have significant costs that don’t immediately affect the business.
Also, food distribution is pretty well optimized already, except for harmful cultural norms that induce lots of food waste. Those can’t be fixed with app development.
Yes, I deliberately tried to keep this very simple as the point was that the complexities expand rather quickly. Folks are welcome to add ethics and other concepts as desired.
This is a very good question because it’s something a lot of people want to know about: I feel strongly about X. What can I do? Far too often the answer given is some sort of self-serving version of “join my cause!” as if everything has already been decided and a few tweets and some donations every month is all that’s needed. Do this, then go on about your life. No further thought required.
It’s an especially powerful question because it needs some decomposition. If you don’t know why you feel strongly about X, in concrete terms, you’re rather unlikely to feel as if you’ve made any progress with it, no matter what you do. How it should be decomposed is up to each person, of course. But it needs work.
PyBay! If you’re there, come and say hi!