I think it is commonly excepted(sic) that statically typed languages are less productive
Really! And when did we commonly accept that?
conciseness data from Rosetta Code
Oh come on. You can’t extrapolate the length of real programs from ten-line Rosetta Code examples. If you could, why would anyone care about module systems? Also in the RC paper, and conveniently unmentioned in the blog post: the statically typed snippets were less buggy. Type errors only get harder to spot in large codebases, so the fact that they showed up even in ten-line snippets should worry you.
That “hours to solve a problem” graph is also about string processing, which is pretty specific and pretty tailored towards making scripting languages look good.
if a method or algorithm has the same asymptotic growth (or Big-O) as another, then they are equivalent, even if one is 2x as slow in practice.
That is absolutely not what big-O means. There are plenty of interesting properties of an algorithm besides worst-case growth rate.
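A toy sketch of the constant-factor point (my own example, not from the article): two functions with identical O(n) growth where one does twice the elementary work per element. Big-O classifies growth rate only; the 2x is invisible to it.

```python
def sum_squares_one_pass(xs):
    """O(n): one loop, one multiply-add per element."""
    total, ops = 0, 0
    for x in xs:
        total += x * x
        ops += 1
    return total, ops

def sum_squares_two_pass(xs):
    """Also O(n), but two full traversals: same big-O, ~2x the work."""
    squares, ops = [], 0
    for x in xs:
        squares.append(x * x)
        ops += 1
    total = 0
    for s in squares:
        total += s
        ops += 1
    return total, ops

xs = list(range(10_000))
t1, ops1 = sum_squares_one_pass(xs)
t2, ops2 = sum_squares_two_pass(xs)
assert t1 == t2          # same answer
assert ops2 == 2 * ops1  # same asymptotic class, double the constant factor
```

Both are "equivalent" in big-O terms, and one is still twice as slow at every input size.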
You can almost always get the performance you need by just re-writing a couple methods in Cython.
Excuse you. Let’s talk about compilers for a sec: most compilers have a pretty flat profile, meaning there aren’t really particular hotspots to focus on. People also care a lot about compile times: if MSVC decided to cut their throughput in half for the sake of productivity in the compiler team, you know what every single user of MSVC would say? “Enjoy your productivity, we’ll be enjoying LLVM.”
I would contend that the intersection of “flat profiles” and “users care if their program is twice as slow” is pretty large.
The post links to this Tcl article, which I think makes a much better argument. I’m still not a fan, though, because it reads like admitting defeat: the claim is that a language can’t be both fast enough to write components in and flexible enough to glue them together. Lisp is a good counterexample.
I agree with his main point (often “it’s slow” isn’t a serious problem), but I also strongly disagree with almost all of his arguments. Here we go…
It used to be the case that programs took a really long time to run. CPU’s were expensive, memory was expensive. Running time of a program used to be an important metric. … this is no longer true.
I’ve noticed this more and more in web dev. Things stop being important until we go so far the other way that suddenly it’s a Big Deal again. “Python is slow, but that doesn’t matter because developer speed is the most important thing” turns into “Python is too slow, we gotta switch to Go” eventually turns into “Crap no generics, let’s go back to python.”
When we focus entirely on dev speed, performance degrades until everything is terrible. When we only care about performance, dev speed degrades until everything is terrible. We like to swing between extremes.
It’s more important to get stuff done than to make it go fast.
Context dependent. Sometimes “go fast” is part of what “getting stuff done” means. And often “getting stuff done” is completely unrelated to “building a sustainable product.” Would you rather spend a week building a product that craps out after 100 users, or a month building one that scales to 10,000? Depends on what you want.
At the end of the day, the one thing that will make your company survive or die is time-to-market. I’m not just talking about the startup idea of how long it takes till you make money, but more so the time frame of “from idea, to customers hands.” The only way to survive in business is to innovate faster than your competitors.
A lot of my friends in business say the first mover advantage is way overrated. The iPhone wasn’t first mover, nor was the Google search engine, nor was StarCraft (rip Total Annihilation), nor was… innovation is nice, but it doesn’t replace quality, industry, performance, or marketing.
They [Amazon and Google] have created a business system where they can move fast and innovate quickly. Microservices are the solution to their problem.
Google has over 3 billion searches a day and 50,000 employees. You do not have Google’s problems.
This means you are taking what was a function call (a couple cpu cycles) and turning it into a network call.
I really hope you’re not turning every function in your code into a separate microservice :P
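Back-of-envelope arithmetic on what that conversion costs. The two latency figures below are assumptions for illustration (a local call on the order of nanoseconds, an intra-datacenter RPC on the order of 0.1 ms), not measurements:

```python
FUNCTION_CALL_S = 5e-9      # ~a few ns for a local function call (assumed)
DATACENTER_RPC_S = 100e-6   # ~0.1 ms intra-datacenter round trip (assumed)

slowdown = DATACENTER_RPC_S / FUNCTION_CALL_S
print(f"function call -> RPC: ~{slowdown:,.0f}x slower")  # ~20,000x
```

Four-plus orders of magnitude, which is exactly why nobody should split every function into its own microservice.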
Microservices’ biggest con is performance, but greatest pro is time-to-market. By building teams around smaller projects and code bases, a company is able to iterate and innovate at a much faster pace.
You do not have Google’s problems.
Microservices are a bastard to organize. You’ve turned a simple monolith into a massive distributed system, and now you have to worry about each service’s devops, communication, apis, monitoring, etc etc etc etc etc. Microservices slow you down, not speed you up.
Why are microservices good? Here’s two of the reasons:
At Google, coordinating changes to a monolith would mean coordinating across thousands of engineers worldwide. At Generic Startup #6655321 it means yelling at the other coder from across the room (because you’re too trendy to have offices.)
When you’re trying to argue that performance isn’t a big deal, don’t say “Well Google has tens of thousands of engineers and really cares about performance, so”
Now imagine your program is very CPU intensive, it takes 100,000 cycles to respond to a single call. That would be the equivalent of just over 1 day. […] Well, compare that to our 3 month network call, and the 4 day difference doesn’t really matter much at all.
Is 100,000 cycles per call considered CPU intensive? I have no idea; without anything to compare it to, that’s just a number. Not to mention it doesn’t tell me anything about how memory access and everything else a program does factors in. If you want to argue that CPU lag << network lag, show me a benchmark.
(Not to mention his estimate of a network call inside a datacenter is an order of magnitude too large: he says it’d be about 3 ms, but from my understanding it’d be <0.1 ms on modern equipment. So instead of comparing 5 days to 3 months, we’re more likely comparing 5 days to 3 days.)
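Redoing the article’s “one cycle lasts one second” scaling with both latency figures. The 3 GHz clock and the 0.1 ms datacenter round trip are my assumptions for the sketch:

```python
CLOCK_HZ = 3e9              # assumed 3 GHz CPU
SECONDS_PER_DAY = 86_400

def human_scale_days(wall_seconds):
    """Scale real time up so that one CPU cycle lasts one second."""
    cycles = wall_seconds * CLOCK_HZ
    return cycles / SECONDS_PER_DAY

cpu_work = 100_000 / CLOCK_HZ   # the article's "CPU intensive" call
claimed_rpc = 3e-3              # the article's 3 ms network estimate
realistic_rpc = 0.1e-3          # modern intra-datacenter estimate

print(f"CPU work:   ~{human_scale_days(cpu_work):.1f} days")      # ~1.2 days
print(f"3 ms RPC:   ~{human_scale_days(claimed_rpc):.0f} days")   # ~104 days (~3 months)
print(f"0.1 ms RPC: ~{human_scale_days(realistic_rpc):.1f} days") # ~3.5 days
```

With the corrected latency, the network call and the “CPU intensive” work land on the same order of magnitude, which rather deflates the comparison.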
What this ultimately means is that, even if python is slow, it doesn’t matter. The speed of the language (or CPU time) is almost never the issue. Google actually did a study on this very concept, and they wrote a paper on it. The paper talks about designing a high throughput system.
The linked paper is on scripting MapReduce jobs across thousands of machines. Not doing the actual mapping or reducing, just gluing them together. Still an incredibly impressive accomplishment (afaict the paper is from 2003), but also a completely different domain from 99.9% of customer-facing products.
You might be saying, “That’s great and all, but we have had issues where CPU was our bottleneck and caused much slowdown for our web app”, or “Language x requires much less hardware to run than language y on the server.” This all might be true. The wonderful thing about web servers is that you can load balance them almost infinitely. In other words, throw more hardware at it.
I really hate this meme.
Sometimes “throw more hardware at it” is a simple short- or medium-term solution. Sometimes it means serious performance issues never get fixed: the work is parallelizable enough that you can keep adding machines, even though that route eats 30% of your server resources when a few hours of profiling would get it down to 0.3%. And sometimes the work isn’t parallelizable and there’s no more hardware to throw at it, and then you have to accept that no, you can’t always wave away performance problems with a c4.8xlarge.
I think it is commonly excepted that statically typed languages are less productive, but here is a good paper that explains why. In terms of Python specifically, here is a good summary from a study that looked at how long it took to write code for strings processing in various languages.
First paper is from 1998 and doesn’t actually explain that. As for the second… okay, where to begin.
tl;dr the “how long does this take” graph is completely unsupported.
Lines of code might sound like a terrible metric, but multiple studies, including the two already mentioned show that time spent per line of code is about the same in every language.
APL 4 lyfe
Okay, I’m not writing about the sections on optimization and using Cython because it’s 2 AM and I really should sleep, but I have complaints about them too. Python is often more than fast enough for most applications, but I don’t think this article makes that case very well.
Microservices are a bastard to organize.
This bears repeating. Microservices are a distributed system, and distributed systems are hard. Don’t build a distributed system unless you absolutely need a distributed system. Addendum: you probably don’t need a distributed system.
Speed isn’t why I prefer Go to Python; static typing is. I too used to think I was more productive in dynamically typed languages. Wow, was I wrong.
Yeah, projects like TypeScript and Flow (adding types to JS, with no perf boost) show that a decent number of folks value this. In my experience it’s initially faster to write small bits of code without spelling out static types. But eventually much of your work becomes tweaking, debugging, and making changes (sometimes to code used from many places) with high confidence you haven’t broken anything, and that’s where the navigation and other kinds of checking that tend to come with static typing can really help.
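A minimal sketch of the refactoring hazard being described, staying in Python with type hints (all names here are hypothetical). Suppose a helper’s signature changed from taking a user id to taking a user object, and one call site wasn’t updated:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def display_name(user: User) -> str:
    # After the refactor, this expects a User object, not an id.
    return user.name.title()

print(display_name(User(1, "ada lovelace")))  # updated call site: "Ada Lovelace"

# An un-updated call site still passes the old int argument. Dynamically,
# nothing complains until this line actually executes:
try:
    display_name(42)
except AttributeError:
    print("blew up at runtime")
```

A static checker such as mypy flags the bad call before the program ever runs, which is the “change code with confidence” benefit in a nutshell.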
This really reminds me of the “optimising for developer happiness” spiel thrown around in the early Rails days - “hardware is cheap, throw more at it”. Sure, and you may as well leave those massive memory leaks in place too, because RAM is cheap, and finding leaks is, like, totally boring(/hard), dude. It all just seems a bit too childish.
It is childish, and short-sighted. Rails applications at a certain point of “maturity” can take an hour to run their tests, an hour to “precompile assets,” and 10 seconds to serve a basic page. “Developer happiness” is a weird metric if that sort of thing is optimized to enhance it.
From my own experience dealing with a >100K LOC Python codebase, CPU to serve requests isn’t really an issue (at our size for our app, I should say!), but some other things are. The test suite runtime (tens of minutes, even running tests in parallel) and process startup time (seconds) are a pain, and you don’t have useful threads and the memory costs of the multiprocess model are real. If we could go back in time and write everything with gevent or such that’d help, but currently the costs of making that work throughout the codebase would be higher than eating the RAM use.
The other thing, as tptacek notes, is that performance isn’t the only place other languages can give you something Python doesn’t. I get how it’s quicker to write out code initially when you don’t have to specify the types, but eventually a lot of effort shifts to fixes, and even a lot of new-feature work touches old code. For that, the checks, navigation, linting, etc. you get from, say, VS Code with the Go extension by default would be invaluable at work. (There are static analyzers for Python but they haven’t generally been super easy to use on this size codebase.)
Hard to be too dissatisfied, though; things work out. The time to the first useful product was great with Python+Django, and it’s still hard to get that elsewhere, harder if you want to avoid the downsides of a framework like Django whose pieces are closely tied to each other. I’m sort of curious what’s going on with stuff like gobuffalo.io; I don’t know as much about all that as I should.
Two caveats I’d add to that graph, “Median Hours to Solve Problem” is…..
So Perl wins “Median Hours to Solve Problem”
Having programmed in Perl, Ruby and D… I suspect Perl loses with my caveats… and D, as the library ecosystem matures, is starting to win.
And interestingly enough quite often wins in terms of speed too.
i.e., programmer productivity and speed are a trade-off you have to make.
I’d argue a solution isn’t a solution if it’s incorrect.
Oh I agree with you…..
….the rest of the software industry doesn’t.
But personally, I agree.
Wait, what? That goes to Common Lisp. Almost all the benefits I learned about Perl over two decades ago when I tried it were already in Lisp. The ecosystem benefit still isn’t there outside Clojure. The good Lisps have FFIs, though. :)
[Comment removed by author]
Proposal: Medium-hosted articles start at -2.
I keep seeing people say this. Has anyone done even a remotely concrete post on the quality of Medium articles, or on why this happens? I’m not sure whether they all suck, whether the medium encourages them to suck, or whether people just share the click-bait posts instead of the good ones.
It’s honestly a kind of techno-elitism I’m uncomfortable seeing here. Medium is free and zero effort: I don’t have to fiddle with a static site generator, or get Wordpress working, or worry about design and format. I can just sign up for Medium and start writing. It’s not a great solution (“why is it loading 2gb of css?!”), but it lets me focus on the part I care about, like yelling at people about concurrency.
The “problem” is that it’s so easy to write that Sturgeon’s Law kicks in, and you get a lot of low-quality material. But I don’t think that’s a strike against the website as much as acknowledging the fact that if you make things easier for people, you lose some degree of gatekeeping.
I agree with you. I’m normally of the “let a thousand flowers bloom” mindset, even. But Medium’s ease of publishing coupled with how easy it is to spread poor articles leads me to want a tiny amount of gatekeeping to be present. I agree that this paints Medium unfairly as a whole. They’re a victim of their own success.
I’ll admit that I have beef with Medium, partially because it purports to be a site for thoughtful articles, yet seems to host a ton of self-congratulatory articles disguised as tech articles. A mild form of this is the “how BIGCORP handles millions of reqs a day” article; the extreme is the occasional “how I made it,” wherein the author expounds on how they did a few things and found success. In effect, Medium doesn’t push back on people turning themselves into brands, which we already have in every other corner of the Internet.
Smaller gripes: proliferation of memes/super-animated images in technical articles, and the proliferation of life advice from 22-year-olds living in SF.
Appreciate the insight. That makes sense.
I have no idea whether medium articles are worse than average. Given the size of the platform I’m not sure if I could even find out.
However, when I do see a crappy article linked on lobsters, it’s almost always hosted on medium. This makes ‘avoid medium’ an easy (if unfair) filter to apply which improves the quality of my reading.
My theory is that Medium attracts people who will willingly pay a subscription fee in order to post text on the Internet. There are some articles that are okay, but… I don’t think you’re going to get much truly interesting content from such a medium (heh).
Medium is free to use.
Fair enough. Perhaps I got it confused with another, similar site – I remember checking one out and finding that they charged some amount to write on the site.
It was svbtle. $6 a month…
“My programming language is better than yours, and what my language is best at is what truly matters”
I get the message.
Python might not be the fastest, I recognize. But at the time, I learned this language for very specific reasons:
So for me, the “speed” factor was not important at all in my decision.
And I tend to agree with the point of the author; most projects are not performance critical, so arguing that Python is slow is completely irrelevant in those situations.