Every situation is different, and for me I am usually operating in a startup context where the value of any code has yet to be proven. In this context speed of iteration is the critical variable. So I generally use tests to 1) Avoid disaster, and 2) maintain speed of iteration.
In more detail: 1) There are usually a couple of places in every codebase where it would be really bad for things to go wrong (handling money, user data, things that are hard to undo, etc.) - test those pretty fully. 2) You can't move fast if you're worried about breaking lots of small other things, but that's covered well enough by regression testing, usually with integration testing or user interaction scripts.
As you gain users, or value, or product market fit or whatever you want to call it, the value of the code goes up and so it’s worth adding in more detailed testing to lock in the functionality, but not at the start.
I find that I can get around 80% of the value of tests with around 20% of the number of tests (vs. what would be considered full coverage). Every project is different of course, but generally for me I do fairly detailed testing on sections of code that are complicated, or would have really unfortunate side effects if they had a bug. Everything else I cover with broad integration-style testing that catches regressions and stuff like that. My goal is to have "most" of the code executed in some way when the tests run, but it doesn't have to be exhaustive to be worthwhile. Does that catch every bug? No, but it's probably an order of magnitude less work than when I've attempted full coverage.
Disclaimer: My code rarely works with money and stuff like that, and is usually part of a startup or early product that still needs to be proven worthwhile. In those cases it's usually the right call to trade some reliability for product iteration speed.

Keep in mind 30k/min is 500 per second, not nothing but certainly not something requiring exotic solutions.
It is very cool to see this work, and I think it is interesting that she has (a) proved her point versus interpreted languages (this is a bigger number) and (b) is simultaneously attracting comments of “but of course it can be done better in X way” (this is a low number).
The described system (doing actual transport and message serialisation, doing some real work per request) seems a good approximation to real load to me. My only comment would be that it sounds like the client maintains persistent connections, so it isn’t measuring the connection setup/teardown costs.
The C10K article was 1999 - https://en.wikipedia.org/wiki/C10k_problem. By the time of the 2011 hardware, it was the C10M problem, so it does seem that other approaches might give higher numbers.
It would be a fun kind of golf to have the protocol and server as a spec to see how different approaches/languages could compare. (The random number generator could perhaps be based on a seed read from the request so the response becomes deterministic and a reference client could check for accuracy).
Additionally, having a running system like this and asking candidates to identify and optimise bottlenecks would be a fantastic devops interview flow :-)
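Sketching what that golf spec might look like (the wire format, port, and response range here are all invented purely for illustration): the client sends a seed, the server's reply is fully determined by it, so a reference client could check any implementation for correctness.

```ruby
require "socket"

# Toy version of the golf idea: each request is a line "<seed>\n" over a
# persistent TCP connection, and the response is the first value drawn
# from a PRNG seeded with it - deterministic, so a reference client can
# verify any server implementation byte for byte. Port, framing, and
# response range are assumptions, not anything from the article.
server = TCPServer.new(4040)
loop do
  client = server.accept
  while (line = client.gets)
    seed = Integer(line.strip, 10)
    client.puts Random.new(seed).rand(1_000_000)
  end
  client.close
end
```

One wrinkle: a real spec would also have to pin the PRNG algorithm itself (Ruby's Random is a Mersenne Twister), since "seeded random" produces different sequences in different languages.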
Related, I've always found the whatsapp numbers per server to be super impressive. Here's 2M connections per machine back in 2012: https://blog.whatsapp.com/1-million-is-so-2011

I've done a fair bit of smart contract dev on Ethereum and at the root of the problem is an impossible to resolve conflict between immutability (which leads to provable decentralization) and the limits of software development as a discipline. People will claim they have a solution to this conflict, but they are really just trading one for the other.
Now, in addition to this core problem, Ethereum has gone and made things worse with their language being fairly unsuitable for the job. They also change the language rapidly, meaning battle tested code (the most valuable thing you have) is rendered unusable regularly.
The end result is a very limited system, with a tendency towards worst-case-scenario bugs. That’s not to say it’s worthless, there are a couple very cool things that you couldn’t do any other way, but I’m very skeptical of grand claims beyond those core use cases.
One fun option for super long term storage is a high density encoding printed to paper: http://ollydbg.de/Paperbak/ Since the data format can be printed in human-readable form as well, any future person who needed to read the data would just need to be able to scan the paper and implement the decoder. With compression you can achieve ~3MB of text per double-sided page, apparently.

This is pretty crazy, has anyone actually used the synchronized SQLite thing they're talking about before?
I give someone total credit for talking about something that isn't working that well for them anymore. Too often these trends just quietly fade away and it's like no one was ever writing microservices, were we?
The possibly more important thing here than whether they’re moving away from them or not, is that they are a massive engineering org with problems that likely you don’t have. So you probably shouldn’t have been considering microservices anyway, whether or not Uber thought they were a good idea. Unless of course, you’re running a massive engineering org too, then do whatever you think is best.
I find the name "static site generator" kind of subconsciously promotes the idea of this just being all about some static files that move from here to there. What they usually come with though is a super complicated, fragile, and regularly updating toolchain that puts at risk your ability to generate the static part that was supposed to be simple. We have a couple "static" sites that are almost impossible to update now because the tooling that generates them is no longer being maintained, so it's harder and harder to run that tooling successfully. They don't feel like "static" sites very much anymore.

I agree with you on this, but surely these issues can happen to any CMS.
If the generation code is exercised on every web page visit, it's likely to degrade much more slowly than if it's only exercised when there's new content.
You're not the first person I've heard say that. I know a few people who spend an inordinate amount of time administering issues on their static sites.

This gets easier if your tooling isn't on a platform that gets old. My static site generator is written in Clojure. Last commit, 2015. Going strong, no changes necessary to run it today.
That was far more interesting than I'd have hoped. Especially because it was more about operating this at scale. For my non-petabyte-scale stuff I've always felt like mysql is easier to use as a developer. The permissions system, for example, is confusing. But I was also bitten by things like needing utf8mb4 instead of utf8 in mysql. (and I always recommend mariadb)
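For anyone who hasn't hit that particular trap: MySQL's utf8 is the legacy 3-byte-max encoding (utf8mb3), so 4-byte characters such as emoji get rejected or mangled unless the column is explicitly utf8mb4. Roughly:

```sql
-- MySQL's "utf8" is an alias for utf8mb3 (at most 3 bytes per character),
-- so 4-byte code points like emoji fail on it. Declare utf8mb4 explicitly:
CREATE TABLE notes (
  id   INT PRIMARY KEY AUTO_INCREMENT,
  body TEXT
) CHARACTER SET utf8mb4
  COLLATE utf8mb4_unicode_ci;
```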
I’m a little stunned to hear anyone say they prefer MySQL over PostgreSQL as a developer. A couple things I’ve really disliked about MySQL over the years:
silent coercion of values (where PG, in contrast, would noisily raise a fatal error to help you as a developer) – it makes it a lot harder to debug things when what you thought you stored in a column is not what you get out of it
the MySQL CLI had significantly poorer UX and feature set than psql. My favourite pet peeve (of MySQL): Pressing Ctrl-C completely exited the CLI (by default), whereas, in psql, it just cancels the current command or query.
After spending three years trying to make MySQL 5.6.32 on Galera Cluster work well, being bitten by A5A SQL anomalies, data type coercion silliness, et al., I've found Postgres to be a breath of fresh air and I never want to go back to the insane world of MySQL.
Postgres has its warts, incredible warts, but when they're fixed, they're usually fixed comprehensively. I'm interested in logical replication for zero downtime database upgrades, but being the only girl on the team who manages the backend and the database mostly by herself, I'm less than inclined to hunt that white whale.
the MySQL CLI had significantly poorer UX and feature set than psql
Hmm, I've always felt the opposite way. The psql client has lots of single-letter backslash commands to remember to inspect your database. What's the difference between the various combinations of \d, \dS, \dS+, \da, \daS, \dC+, and \ds? It's all very confusing, and for the same reason we don't use single-letter variables. I find MySQL's show tables, show databases, and describe X to be a lot easier to use.
Yeah, this is also bugging me. Sure, "show databases" is long and something like "db" would be nice, but I know it and (for once at least) it's consistent with "show tables" etc.
I grant you that, but \? and \h are 3 keystrokes away, and the ones I use most frequently I’ve memorized by now. But I just couldn’t stand the ^C behaviour, because I use that in pretty much every other shell interface of any kind without it blowing up on me. MySQL was the one, glaring exception.
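For reference, a rough mapping between the two CLIs' inspection commands being debated here (psql's full list is under \?):

```sql
-- MySQL                      -- psql equivalent
SHOW DATABASES;               -- \l
SHOW TABLES;                  -- \dt
DESCRIBE mytable;             -- \d mytable
SHOW INDEX FROM mytable;      -- \d mytable (indexes are listed too)
```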
Totally agree, this is almost exactly my situation too. I had always used mysql and knew it pretty well, but got burned a few times trying to deal with utf8, then got hit with a few huge table structure changes (something I think has improved since). Ended up moving to Postgres for most new stuff and have been pretty happy, but I do miss mysql once in a while.

Don't forget to read the 'All That Said…' at the end. It's likely the most important advice in this whole list.
Yeah, I wish this article had a better title since it was a good read and clearly written by someone with experience. The article (+ the medium.com domain) made me think it was going to be clickbait though.
I thought it was going to be like the titular list in "10 Things I Hate About You" and the final point would be something like "I hate that I love you so much" or something like that.

I guess it's pretty close, though.
This. I really wish that had been at the top, because some of these are pretty deep dives/issues at scale, and many people may not get to the end (especially if there’s an Oracle salesperson calling frequently).
I’m using a 2014 Macbook Pro since I’m trying to take a pass on the touchbar models for as long as possible. I’m still doing lots of Android and iOS dev and it works fine. All that being said, the most common choice now for devs is probably the 16” Pro, but the new Air looks pretty promising too. The next gen chips from Intel look like a legit step forward, so that might be a good option if you’re looking for something more portable. Whatever you go with, probably worth getting as much RAM as you can get/afford.
This seems to have some similarities to generational garbage collection (an ok description here: https://stackify.com/what-is-java-garbage-collection/). There’s at least some analogy between newly allocated memory and newly loaded files in the cache I think?
My advice from painful experience: Do not do this.
Keep code in your git repo in, hopefully, a single language. Keep your data in a database. Try not to mix the two. Then you can definitively answer questions like “What version of the app is running right now?” Yes it’s possible to version the code in the database via migrations, but why? The only true upside I’ve ever seen is performance, which is a valid one, but reserve this for critical sections that are proven to be too slow.
There are (rare) cases where it’s not only faster but also clearer - when you are correlating data at different levels of rollup at the same time.
For instance I have an app that tracks where in a store stock is kept.
When taking an order, you want to know how much stock the whole store has (minus outstanding orders). That's a horrendous thing to compute application-side (7-way join, N+1 queries are very hard to avoid). The equivalent SQL is quite tidy and readable.
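As a sketch of what that looks like (schema invented here for illustration, and far simpler than the poster's 7-way join): per-product stock across every bin, minus whatever open orders have reserved, comes out as one grouped query instead of a pile of application-side loops.

```sql
-- Hypothetical schema: stock sits in many bins per store, and open orders
-- reserve stock per product. Store-wide availability in one query:
SELECT p.id,
       p.name,
       COALESCE(SUM(s.quantity), 0)
         - COALESCE((SELECT SUM(ol.quantity)
                       FROM order_lines ol
                       JOIN orders o ON o.id = ol.order_id
                      WHERE ol.product_id = p.id
                        AND o.status = 'open'), 0) AS available
  FROM products p
  LEFT JOIN stock_bins s ON s.product_id = p.id
 GROUP BY p.id, p.name;
```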
The other upside is having a single image of code/data in production. Migrations and deployment of new servers are as easy as copying the database to the new server.
In some industries, like payroll, this facilitates migration of client data between multiple providers.
My advice, from someone who thought this was a bad idea over a decade ago but has been doing it for everything in the decade (or so) since: learn what you were doing wrong, because you're probably still doing it.
I don't agree with any of your suggestions. This approach is faster, more secure, easier to audit and review, and easier to develop and scale. In every case where it isn't, you're doing it Wrong™, so stop doing it Wrong™ instead of figuring out how to make the Wrong™ thing work.
I agree (I think; this post took me four tries to read and I’m still only 90% sure I understood), with the proviso that there are very real advantages to being able to use existing tools (ex: rails), even if they don’t support the Right way to do some bits.
If you're a staff engineer at a bigco you can fix the framework (and probably get it merged upstream), but in agency/startup land you definitely do not have time for that.

Related: Does anyone here use KeyDB in production? Any thoughts, experiences?
It makes me sad that Java is not really in the conversation as an option right now. In the performance chart that this person references, the Java server is 97.6% as fast as the Rust one, so basically identical (https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=query). I actually like programming in Java, and to me it looks a lot easier and is a more mature platform than Rust. However, I acknowledge the frameworks are kind of crusty right now, and it’s not a shiny new thing like Rust, so people are generally not considering it.
No real conclusion, just wish more people realized how much Java has going for it despite not being the hot choice currently.

The thing is, can Java meet Rust's performance on all fronts without herculean effort?

Why does that matter? It's within 3% (I say, as a ruby developer by day). The app I work on costs about 30k a year to host. In java or rust that would be more like 2k (if rust was $2000, java would be $2060). Doesn't seem like an amount that would factor into the decision.
What matters is whether it has the performance on the fronts your app uses. It is not very hard to write reasonably quick Java code. Without any effort, 30-50% of highly optimized native code's performance can be achieved, according to past studies. Optimized Java code can approach optimized native performance (in the above example, 97%).
The real problem with Java (or C#, or Go) is latency, as GC languages have trouble providing predictable upper bounds for GC pause lengths, but Java is at the forefront of research on this topic. There are also tricks in the high-performance Java toolbox for these problems.
(As a C# developer I have always envied the polished Java features)
I have never developed production Rust code, but I have developed Java and similar C# code, and those languages are pretty productive for most business use cases, despite their verbosity, and there is a substantial talent pool for them.
This seems like a very good direction to explore. The tooling complexity and rate of change is one of my main problems with the Javascript ecosystem. People who get used to it don’t realize just how poor the experience is compared to other toolchains.
Interesting to me: ARM is still a footnote in this writeup, but it appears the pain train is coming for both Intel and AMD in the datacenter. Amazon has the only chips where this is obvious so far (https://perspectives.mvdirona.com/2020/01/aws-graviton2/) but you gotta imagine that comparable chips will be widely available in the next couple years and it’ll be tough for x86 to keep up.
Ampere’s Quicksilver is coming this year, that’s probably gonna be the real “Graviton2 if you’re not called Jeff Bezos” :)
For now there’s only the first gen Ampere eMAG which is not powerful enough (though has an absolute ton of I/O at a relatively low cost), and the HPC-oriented Marvell ThunderX2 which is too expensive for general server use.
Interesting, thanks for the info. Do you see any road for x86 to be competitive even in the mid term with some of these new ARM chips? It seems between the architecture and production volume advantages it’s going to be really tough.
Side note: I wonder if a viable branch prediction attack mitigation would be to just give everyone a dedicated ARM machine. If they were cheap enough you might not even need to virtualize.
So far it’s been AMD who have all the advantages. Many people became really skeptical of the ARM servers when they saw EPYC Rome. But if Amazon is going all in, producing custom ones… there’s something there. I do hope that Ampere delivers something great this year.
I wonder if a viable branch prediction attack mitigation would be to just give everyone a dedicated ARM machine
Scaleway did that with some 32-bit Marvell thing a while ago. It is kinda interesting, but eliminates the flexibility of VMs, where any VM can have any number of cores from 1 to however many the CPU has.

FWIW I've tried switching to Firefox before and it didn't stick, but the v70 Beta has held up for a couple weeks now. Worth a try!
I submitted this not really because I agree with it but because I was curious whether any lobsters out there have strong feelings. I have been using other languages for a while and not stayed up to date with recent developments. I generally still like ruby and use it fairly regularly, and actually kind of like the constancy of it. I do, however, find myself reaching for other tools more often lately.
However, the claims here that ruby core is working against the general sentiment of the community at large, while examples were provided, seemed somewhat overstated. So I was curious whether others here felt this is the case or not.

Ruby's dead. How's that for a strong feeling?
I wish I could be more articulate, but it seems an inexorable conclusion of two forces. One, javascript is the language of the browser. Two, people have been making javascript as easy as possible for decades now.
Notice there are a lot of kids on http://repl.it/talk … I wonder how many of them are genuinely trying to learn ruby rather than JS?
Look, I’m no JS zealot. I’m more of a “skate where the puck is going to be” kinda guy. And the ruby players on the rink have been skating toward other places.
Rails was incredible. I remember how magical that first demo was. You could do in ten minutes what took days in PHP.
Nowadays, doing that is sometimes as easy as running "now".
Oh boy, there I go again expressing super strong confrontational opinions… For what it’s worth, if I’m wrong, you’ll get the satisfaction of rolling your eyes and watching Ruby eat the world over the next decade. But it just doesn’t feel like that’s the world we’ll end up in. Pull up any visualization of “let’s measure the popularity of programming ecosystems” and it’ll look a lot like JS and Python won. https://www.youtube.com/watch?v=wS00HiToIuc
As a ~10 year experienced engineer with the bulk of that time in Rails currently looking elsewhere, I will say: the job market would seem to indicate that Ruby is very much alive and well. I’m curious about the JS comment, as I feel like Ruby has never been the language of the browser.
That said, the mid-to-upper tier web companies these days seem to be doing server side development in Go more often than Rails. I’m looking to change only because I want to learn something new. Ruby/Rails if anything has slowed down its formerly frenetic rate of change, performance has increased to a respectable degree and the ecosystem is filled with robust libraries, a good testing culture… Rails is by no means a terrible stack to work in. YMMV of course. Edit: I do agree with the examples in the original article… the pipe operator in particular strikes me as an ugly solution looking in vain for a problem.
But I find the “Javascript is taking Ruby’s place” remarks very confusing, as Ruby is a server-side language, and Go seems also to have stolen the server-side market share from Nodejs.
as I feel like Ruby has never been the language of the browser.
Whoops. My point was, JS is the language of the browser. If you want to do much of anything with “webpage + browser,” you need to know JS.
That means if you know JS, you can largely get by without knowing anything else. Or rather, you can learn whatever else you need to learn as you go.
job market
We’re so lucky that the job market gives us so many options. I totally agree; I didn’t mean to imply that if you’re a ruby dev, you should worry about your career prospects. So much of the world was built on Rails that you’ll probably be able to find work for a long time.
All I meant was, younger people don't seem to be interested in learning Ruby. When those younger people become older people, and the older people become fewer and fewer, the world changes.
If that sounds grim, just be glad you’re not a Lisp guy like me. It’s almost painful to watch everything not use the power of compile-time macros. But at least I get to use it myself.
Go seems also to have stolen the server-side market share from Nodejs.
You’re right that Go has had some surprising momentum here. Much of repl.it is apparently built on Go. But the advantage of JS is the ten hundred million libraries that exist to solve all problems ever thought of. (More than a little bit of hyperbole here, but it’s almost not far from the truth.) If you need to do X and you happen to be using JS, you don’t have to read any docs. You can just type “do X in Javascript” into google, and google turns up an npm package for X, for all basic X. Other languages will always be second-place regarding convenience, for this reason.
Super Serious Projects will tend to be written by people who want absolute type safety and clearly-defined module boundaries and never to see an error. Hell, golang doesn’t even have a REPL. But anyone who’s missed a REPL will tell you that it’s a serious disadvantage not to have a REPL.
the advantage of JS is the ten hundred million libraries that exist to solve all problems ever thought of
As someone who has done quite a bit of both Go and JS, this is emphatically not a point where JS wins.
There's a bigger number of packages, but bitter experience has not been kind to my trust in 'this JS package has lots of downloads and github stars and a nice website so it probably works OK'.

"worse is better, lol." https://www.jwz.org/doc/worse-is-better.html
Most people see a package and expect a solution. But each package solves N% of whatever problem you’re facing. (N is usually somewhere between -10 and 97.)
As much as I love to write code, I love getting things done quickly without introducing major problems. npm i foo tends to work pretty well for that.
Hey, cool trick. npm install sweetiekit-dom and then do require('sweetiekit-dom')(). Now window, document, etc all exist, just like node is a chrome repl. You can even do window.location.href = 'https://google.com' and it’ll load it! console.log(document);. Unrelated to the convo, but I can’t get over how neat it is.
So I decided to test “do soundex [1] in Javascript” just to see if it was true. Yup, second entry on the results page. I checked it out, and found an error—“Ashcroft” encodes to A261, not A226. And given the page it’s on is a gist, there’s no way to report an error.
[1] Why Soundex? Well, I used it years ago in an online Bible to correct Bible book names (such that http://bible.conman.org/kj/estor will redirect properly to http://bible.conman.org/kj/Esther).
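For the curious, Soundex itself is only a few lines. Here's a sketch (in Ruby rather than JS) of the NARA variant the commenter is applying; its rule that h and w don't separate letters with equal codes is exactly why "Ashcroft" comes out as A261 rather than A226.

```ruby
# NARA Soundex sketch: keep the first letter, code the rest, and collapse
# adjacent equal codes. h/w are skipped WITHOUT resetting adjacency;
# vowels (and y) are skipped but DO reset it - that's the "Ashcroft" rule.
CODES = {
  "b" => "1", "f" => "1", "p" => "1", "v" => "1",
  "c" => "2", "g" => "2", "j" => "2", "k" => "2",
  "q" => "2", "s" => "2", "x" => "2", "z" => "2",
  "d" => "3", "t" => "3", "l" => "4",
  "m" => "5", "n" => "5", "r" => "6"
}.freeze

def soundex(name)
  letters = name.downcase.gsub(/[^a-z]/, "").chars
  return "" if letters.empty?
  prev = CODES[letters.first] # the first letter participates in adjacency
  code = ""
  letters.drop(1).each do |ch|
    if CODES[ch]
      code << CODES[ch] unless CODES[ch] == prev
      prev = CODES[ch]
    elsif ch != "h" && ch != "w"
      prev = nil # a vowel breaks the run; h/w do not
    end
  end
  (letters.first.upcase + code + "000")[0, 4]
end

soundex("Ashcroft") # => "A261"
soundex("Robert")   # => "R163"
```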
Example URL, or satire?

Neither. When I originally registered for a domain back in the late 90s, I wanted conner.com but that one was taken. So were conner.net and conner.org. My backup choices, spc.com, spc.net and spc.org, were also taken. I had a few friends that called me Conman (a play on my last name of Conner) so that's what I went with.
In the 21 years I’ve had the domain, you are the first one to question it. No one else has (Weird! I know! [1]). The link is real, try it.
[1] It’s also weird how few people connect my name, Sean Conner, to the actor Sean Connery (one letter from stardom!) At least my name isn’t Michael Bolton.
That’s fine. I just reacted to the domain, and in these contentious times it’s not too hard to imagine a person setting up a Bible site with pointers to the “bad stuff” (depending on your view of what’s bad).
FWIW I've used https://www.biblegateway.com/ a few times (mostly because I'd be interested in how the text is presented in different Swedish editions) but that's an altogether bigger operation.
Agreed. I would hypothesize that the Ruby community is largely being cannibalized by: Go, node.js, Elixir/Phoenix, Rust, Python (for special purpose work like tensorflow) – probably in that order (or maybe swap Rust and Elixir? unsure).
It’s not only due to new tech stacks emerging. Cultural and commercial factors play a massive role.
For instance: Ruby got very popular in the consulting space (it’s a good way to look good by quickly delivering some value, and it tends to generate a need for lots more consulting hours a year or so down the track).
Now that the ruby community has more-or-less settled on a few standard approaches, it’s no longer as profitable for consulting companies.
I don't agree fully with that reading. Rails was always also very popular in the old-school agency space, as Rails is extremely quick to get set up. Its insistence on having a standard stack might lead to problems in the long run, but still makes it the best framework for quickly getting out of the door in a clean fashion.
It still remains very popular there.
Also, Rails is often used for internal management applications. I have tons of clients that "don't do Ruby" until, slowly, you figure out there are tons of small applications running on their servers, essentially providing some buttons and graphs.
The number of companies that "don't do Ruby" officially, but actually do internally, is huge, especially in enterprise.

That's a great perspective, thanks for bringing it up!
Speaking from the perspective of someone who is both in the Rust project and on the board of one larger Ruby non-profit, I do not agree with the point that Rust cannibalises Ruby. Indeed, we’re still growing, even if the curve flattens.
I only have a limited set of data points for folks I know of that have moved (or are moving) from ruby to rust for a couple of projects (blockchain space). Sounds like you have more empirical evidence here for sure.
Rust is pretty popular for implementing blockchains, and Ruby isn’t, because you can’t write a competitive PoW function on top of the mainline Ruby implementation. Most Ruby projects don’t need that kind of performance, so your story probably isn’t very typical.
Experienced developers usually extend their toolchain at some point, as their interests shift. There's an effect where you have more insight into experienced people picking up new stuff, but tend to overlook newcomers coming up.
I am of a certain generation in the Ruby community, which leads to the good effect that a) I meet more and more people that don’t know me, despite having a high profile, b) I tend to only see my circles and have a hard time keeping track of newcomers.
I agree, and I think they complement each other more than compete right now. Ruby is great at building architecturally slick webapps, which Rust is lousy at. Rust is great for building high-performance low-level stuff, which Ruby is lousy at. It seems like a good pattern, supported by several gems/crates, to build a webapp in Ruby/Rails, and refactor any parts that need top performance out into a gem written in Rust.

I very much doubt Ruby devs are moving to a language as low-level as Rust. Elixir I could very much believe.

Roughly 1/3rd of the Rust programming language community comes from dynamic languages, mostly Ruby and Python.

How do they deal with lifetimes? Whenever I use Rust, I tap out at lifetimes because it just gets too confusing for me.
The zen of Rust is using ownership in most places. Lifetime problems usually arise when you are building convoluted structures that are better handled through cloning and copying anyway. Use clone() liberally until you are very sure of what you want to do, then refactor to lifetimes.
I know about mrusty, but it seems to not be active; I'm just hoping that people are still working on this (I might even join in if I get some free time).
I have never used Ruby in anger, but gosh that Immutable Strings bug getting closed out as “not going to do it, don’t care you all want it, just use a magic comment” would make me think that the Ruby you’ve got is the Ruby you’ll ever get.
I don’t think that languages have to keep being developed (how many Lisp dialects are there that don’t change?), but if you think Ruby has deficiencies now, I wouldn’t expect them to change and that would make me worried too.
I am maintaining a ruby codebase that’s >10 years old.
I don’t want ruby to make backwards-incompatible changes! The language is established now; it’s far too late for that.
It sucks that you need a linter to check that your files start with a magic comment in order to get sensible behavior, but not nearly as much as not being able to upgrade and benefit from runtime improvements/security patches just because they've changed the semantics of strings for the first time in 25 years.
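For anyone following along who hasn't seen it, the magic comment in question is a per-file opt-in, which is exactly why it takes lint enforcement to apply it consistently:

```ruby
# frozen_string_literal: true

s = "hello"
s.frozen?     # => true (false without the comment above)
s << " world" # raises FrozenError instead of mutating in place
```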
This is an awful sentiment. How would you like being told that for a project you maintain, you can no longer make any big changes, ever? Because some user from 20 years ago doesn’t want to update their scripts, but wants bleeding edge Ruby.
The world doesn’t always work that way, and hopefully Ruby doesn’t listen to people like that.
I actually think it's a pretty reasonable statement. One of my favorite things about Java is that it's almost 100% backwards compatible. We just dusted off a 20 year old (!) game and it pretty much worked on the latest JDK. That's awesome.
If you want to maintain a project where you can break things to make other things better, find one where the things you break don't affect people. There's no shortage of them, and it's even easy to start your own!
If you want to be the trusted steward of a large community, you have to consider how your choices affect the people who have placed their trust in you. There’s nothing wrong with not wanting that! It’s a lot of work. I don’t want it either. Thankfully, some people do, and that’s why communities have some people at the center and others at the periphery. The ones at the center are the ones doing the hard work of making it possible.
Hopefully they do. It’s great to have new language features and to advance the state of the art, but it’s also great to be able to run my code from a few years ago without having to rewrite it.
There are ways to have both, of course, which involve making compromises. For example, in the area of scientific computing I’m currently working in, there are a lot of Python 2 hold-outs who don’t want to migrate to Python 3, even though the changes are few* and Python 2 support is due to end. But many Python programmers are happy with Python 3 and have ditched 2 altogether already.
*few, but important in context: changing how division works is a big deal for numerical simulations.
This kind of thinking is how you get things like Python 3 being out for over a decade while some people still do everything in 2. If you intend for your language to be widely used, you have to come to terms with the fact that even minor changes that are highly beneficial will be massively painful, and might even destroy the language entirely, if they break old code.
Python 3 actually introduced breaking changes, which in hindsight were all really good. I had to convert dozens of projects over a couple of years, it was not that bad once I understood how things worked in Python 3. The biggest change was the fact that strings are now Unicode strings instead of ascii, and it was very confusing at first.
IMO python 3 is a great example of why I’m glad I don’t maintain any python codebases, despite loving the language.
In a maintainer-friendly world, developers would still have to write a bunch of from __future__ import X at the top of every file today, which sucks differently but IMO not nearly as much. If you were somewhat forward about it, files that don’t have those lines could emit deprecation warnings when loaded warning that those defaults will be enabled in a few more years time.
I'm sure that a lot of decisions in Ruby in the past were questionable; I just didn't know about them before I started learning Ruby. However, now that I keep an eye out for programming languages in general, I feel like it's made me a bit of a snob. I tend to agree with the author of the blog post that it leaves a bad taste in my mouth for the language to be changing like it is (both the language itself and the process by which those changes are happening), but I'm not sure these things would have bothered me if I were coming to it as a new programmer, like I did with Ruby 1.9.
I made a comment somewhere that lamented that Ruby was adding Enumerable#filter because it was ambiguous whether it was equivalent to #select or #reject. The response I got was that it was a good change because that's the way every other language does it. Ruby's just kind of weird sometimes, and I think I've accepted a lot of the legacy weirdness. So in that respect, what's one more feature I won't use?
In the end, I don't have much stake in the game - if Ruby's new path really starts to bother me, there are plenty of other languages to pick up. But until then, it will be the first language I turn to for quickly translating thought into code, weird language design cruft aside.
Ruby 1.9, in hindsight, was extremely well managed. It was an opt-in to breakage in order to get fundamental problems out. They handled that switch very well, making Ruby 1.9 the clearly better version while releasing 1.8.7, which closed the gap between the two versions and made it feasible to write codebases that run on both with relative ease. Sure, there were issues and not every aspect was perfect, but comparing it to, e.g., the Python 2.7/3.0 story, I'm sad that the Python community hasn't been watching and learning from it.
Agreed, and I find Python’s rise in popularity comes in spite of the poor developer experience - compatibility and dependency management - so I wish Ruby had made more headway in non-Rails contexts.
I made a comment somewhere that lamented that Ruby was adding Enumerable#filter because it was ambiguous whether it was equivalent to #select or #reject.
Agreed. select and reject is a naming choice I have decided to steal; I wish filter just stopped existing (or returned (selectedElements, rejectedElements)).
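For what it's worth, the "return both halves" version already exists in Ruby as Enumerable#partition, and filter (added in 2.6) is a straight alias of select:

```ruby
nums = (1..10).to_a

nums.select(&:even?) # => [2, 4, 6, 8, 10]
nums.reject(&:even?) # => [1, 3, 5, 7, 9]
nums.filter(&:even?) # => [2, 4, 6, 8, 10] (alias of select since 2.6)

# The "(selectedElements, rejectedElements)" pair in a single pass:
evens, odds = nums.partition(&:even?)
# evens => [2, 4, 6, 8, 10], odds => [1, 3, 5, 7, 9]
```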
Every situation is different, and for me I am usually operating in a startup context where the value of any code has yet to be proven. In this context speed of iteration is the critical variable. So I generally use tests to 1) Avoid disaster, and 2) maintain speed of iteration.
In more detail: 1) There is usually a couple places in every codebase where it would be really bad for things to go wrong (handling money, user data, things that are hard to undo, etc) - test those pretty fully. 2) You can’t move fast if you’re worried about breaking lots of small other things, but that’s covered well enough by regression testing, usually with integration testing or user interaction scripts.
As you gain users, or value, or product market fit or whatever you want to call it, the value of the code goes up and so it’s worth adding in more detailed testing to lock in the functionality, but not at the start.
I find that I can get around 80% of the value of tests with around 20% of the number of tests (vs. what would be considered full coverage). Every project is different of course, but generally for me I do fairly detailed testing on sections of code that are complicated, or would have really unfortunate side effects if they had a bug. Everything else I cover with broad integration-style testing that catches regressions and stuff like that. My goal is to have “most” of the code executed in some way when the tests run, but it doesn’t have to be exhaustive to be worthwhile. Does that catch every bug? No, but it’s probably an order of magnitude less work that when I’ve attempted full coverage.
Disclaimer: My code rarely works with money and stuff like that, and is usually part of a startup or early product that still needs to be proven worthwhile, in those cases it’s usually the right call to trade some reliability for product iteration speed.
Keep in mind 30k/min is 500 per second, not nothing but certainly not something requiring exotic solutions.
It is very cool to see this work, and I think it is interesting that she has (a) proved her point versus interpreted languages (this is a bigger number) and (b) is simultaneously attracting comments of “but of course it can be done better in X way” (this is a low number).
The described system (doing actual transport and message serialisation, doing some real work per request) seems a good approximation to real load to me. My only comment would be that it sounds like the client maintains persistent connections, so it isn’t measuring the connection setup/teardown costs.
The C10K article was 1999 - https://en.wikipedia.org/wiki/C10k_problem. By the time of the 2011 hardware, it was the C10M problem, so it does seem that other approaches might give higher numbers.
It would be a fun kind of golf to have the protocol and server as a spec to see how different approaches/languages could compare. (The random number generator could perhaps be based on a seed read from the request so the response becomes deterministic and a reference client could check for accuracy).
Additionally, having a running system like this and asking candidates to identify and optimise bottlenecks would be a fantastic devops interview flow :-)
Related, I’ve always found the whatsapp numbers per server to be super impressive. Here’s 2M connections per machine back in 2012: https://blog.whatsapp.com/1-million-is-so-2011
I’ve done a fair bit of smart contract dev on Ethereum and at the root of the problem is an impossible to resolve conflict between immutability (which leads to provable decentralization) and the limits of software development as a discipline. People will claim they have a solution to this conflict, but they are really just trading one for the other.
Now, in addition to this core problem, Ethereum has gone and made things worse with their language being fairly unsuitable for the job. They also change the language rapidly, meaning battle tested code (the most valuable thing you have) is rendered unusable regularly.
The end result is a very limited system, with a tendency towards worst-case-scenario bugs. That’s not to say it’s worthless, there are a couple very cool things that you couldn’t do any other way, but I’m very skeptical of grand claims beyond those core use cases.
One fun option for super long term storage is a high density encoding printed to paper: http://ollydbg.de/Paperbak/ Since the data format can be printed in human readable format as well, any future person who needed to read the data would just need to be able to scan the paper and implement the decoder. With compression you can achieve ~3MB of text per double sided paper apparently.
This is pretty crazy, has anyone actually used the synchronized SQLite thing they’re talking about before?
I give total credit for someone talking about something that isn’t working that well for them anymore. Too often these trends just quietly fade away and it’s like no one was ever writing microservices, were we?
The possibly more important thing here than whether they’re moving away from them or not, is that they are a massive engineering org with problems that likely you don’t have. So you probably shouldn’t have been considering microservices anyway, whether or not Uber thought they were a good idea. Unless of course, you’re running a massive engineering org too, then do whatever you think is best.
I find the name “static site generator” kind of subconsciously promotes the idea of this just being all about some static files that move from here to there. What they usually come with though is a super complicated, fragile, and regularly updating toolchain that puts at risk your ability to generate the static part that was supposed to be simple. We have a couple “static” sites that are almost impossible to update now because the tooling that generates them is no longer being maintained, so it’s harder and harder to run that tooling successfully. They don’t feel like “static” sites very much anymore.
I agree with you on this, but surely these issues can happen to any CMS.
If the generation code is exercised on very web page visit it’s likely to degrade much more slowly than if it’s only exercised when there’s new content.
You’re not the first person I’ve heard say that. I know a few people who spend an inordinate amount of time administering issues on their static sites.
This gets easier if your tooling isn’t on a platform that gets old.
My static site generator is written in Clojure. Last commit, 2015. Going strong, no changes necessary to run it today.
That was far more interesting than I’d have hoped. Especially because it was more about operating this at scale. For my non-petabyte-scale stuff I’ve always felt like mysql is easier to use as developer. The permissions system for example is confusing. But I was also bitten by things like using utf8mb4 over utf8 in mysql. (and I always recommend mariadb)
I’m a little stunned to hear anyone say they prefer MySQL over PostgreSQL as a developer. A couple things I’ve really disliked about MySQL over the years:
psql
. My favourite pet peeve (of MySQL): Pressing Ctrl-C completely exited the CLI (by default), whereas, inpsql
, it just cancels the current command or query.After spending three years trying to make MySQL 5.6.32 on Galera Cluster work well, being bitten by A5A SQL anomalies, coercion of data type sillyness et al, I’ve found Postgres to be a breath of fresh air and I never want to go back to the insane world of MySQL.
Postgres has it’s warts, incredible warts, but when they’re fixed, they’re usually fixed comprehensively. I’m interested in logical replication for zero downtime database upgrades, but being the only girl on the team who manages the backend and the database mostly by herself, I’m less than inclined to hunt that white whale.
Hmm I’ve always felt the opposite way. The
psql
client has lots of single-letter backslash commands to remember to inspect your database. What’s the difference the various combinations of\d
,\dS
,\dS+
,\da
,\daS
,\dC+
, and\ds
? It’s all very confusing, and for the same reason we don’t use single-letter variables. I find MySQL’s usage ofShow tables
,show databases
,describe X
to be a lot easier to use.Yeah this is also bugging me. Sure “show databases” is long and something like “db” would be nice, but I know it and (for once at least) it’s concistent to “show tables” etc.
I grant you that, but
\?
and\h
are 3 keystrokes away, and the ones I use most frequently I’ve memorized by now. But I just couldn’t stand the^C
behaviour, because I use that in pretty much every other shell interface of any kind without it blowing up on me. MySQL was the one, glaring exception.Totally agree, this is almost exactly my situation too. I had always used mysql and knew it pretty well, but got burned a few times trying to deal with utf8, then got hit with a few huge table structure changes (something I think has improved since). Ended up moving to Postgres for most new stuff and have been pretty happy, but I do miss Mysql once and a while.
Don’t forget to read the ‘All That Said…’ at the end. It’s likely the most important advice in this whole list.
Yeah, I wish this article had a better title since it was a good read and clearly written by someone with experience. The article (+ the medium.com domain) made me think it was going to be clickbait though.
I thought it was going to be like the titular list in “10 Things I Hate About You” and the final point would be something like “I hate that I love you so much” or something like that.
I guess it’s pretty close, though.
This. I really wish that had been at the top, because some of these are pretty deep dives/issues at scale, and many people may not get to the end (especially if there’s an Oracle salesperson calling frequently).
I’m using a 2014 Macbook Pro since I’m trying to take a pass on the touchbar models for as long as possible. I’m still doing lots of Android and iOS dev and it works fine. All that being said, the most common choice now for devs is probably the 16” Pro, but the new Air looks pretty promising too. The next gen chips from Intel look like a legit step forward, so that might be a good option if you’re looking for something more portable. Whatever you go with, probably worth getting as much RAM as you can get/afford.
This seems to have some similarities to generational garbage collection (an ok description here: https://stackify.com/what-is-java-garbage-collection/). There’s at least some analogy between newly allocated memory and newly loaded files in the cache I think?
My advice from painful experience: Do not do this.
Keep code in your git repo in, hopefully, a single language. Keep your data in a database. Try not to mix the two. Then you can definitively answer questions like “What version of the app is running right now?” Yes it’s possible to version the code in the database via migrations, but why? The only true upside I’ve ever seen is performance, which is a valid one, but reserve this for critical sections that are proven to be too slow.
There are (rare) cases where it’s not only faster but also clearer - when you are correlating data at different levels of rollup at the same time.
For instance I have an app that tracks where in a store stock is kept.
When taking an order, you want to know how much stock the whole store has (minus outstanding orders). That’s a horrendous thing to compute application side (7-way join, N+1 queries are very hard to avoid). The equivalent sql is quite tidy and readable.
The other upside is to have a single image of code/data in production. Migrations and deployment of new servers is a easy as copying the database to the new server.
In some industries, like payroll, this facilitates migration of client data between multiple providers.
My advice from someone who used to think this was a bad idea over a decade ago, but now has been doing it for everything for the last decade (or so), learn what you were doing wrong, because you’re probably still doing it.
I don’t agree with any of your suggestions.This approach is is faster, more secure, easier to audit and review, easier to develop and scale. In every case that you’re not doing it Wrong™, so stop doing it Wrong™ instead of figuring out how to make the Wrong™ thing work.
I agree (I think; this post took me four tries to read and I’m still only 90% sure I understood), with the proviso that there are very real advantages to being able to use existing tools (ex: rails), even if they don’t support the Right way to do some bits.
If you’re a staff engineer at a bigco you can fix the framework (and probably get it merged upstream), but in agency/startup land you definitely do not have time for that.
Related: Does anyone here use KeyDB in production? Any thoughts, experiences?
It makes me sad that Java is not really in the conversation as an option right now. In the performance chart that this person references, the Java server is 97.6% as fast as the Rust one, so basically identical (https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=query). I actually like programming in Java, and to me it looks a lot easier and is a more mature platform than Rust. However, I acknowledge the frameworks are kind of crusty right now, and it’s not a shiny new thing like Rust, so people are generally not considering it.
No real conclusion, just wish more people realized how much Java has going for it despite not being the hot choice currently.
The thing is, can Java meet Rust’s performance on all fronts without herculean effort?
Why does that matter? It’s within 3% (I say, as a ruby developer by day).
The app I work on costs about 30k a year to host. In java or rust that would be more like 2k (if rust was $2000, java would be $2060).
Doesn’t seem like an amount that would factor into the decision.
What matters is if has the performance on the fronts your app uses. It is not very hard to write reasonably quick Java code. Without any effort 30-50% of highly optimized native code can be achieved according to former studies. For optimized Java code performance can approach optimized native parformance (in the above example 97%).
The real problem with Java (or C#, or Go) is latency, as GC languages have problems providing predictable upper bounds for GC pause lengts, but Java is on the forefront of research in this topic. Also there are tricks in the high-performance Java toolbox for these problems.
(As a C# developer I have always envied the polished Java features)
I have never developed production Rust code, but have done for Java and similar C# code, and those languages are pretty productive for most business usecases, despite their verbosity, and there is substantial talent pool for them.
This seems like a very good direction to explore. The tooling complexity and rate of change is one of my main problems with the Javascript ecosystem. People who get used to it don’t realize just how poor the experience is compared to other toolchains.
Interesting to me: ARM is still a footnote in this writeup, but it appears the pain train is coming for both Intel and AMD in the datacenter. Amazon has the only chips where this is obvious so far (https://perspectives.mvdirona.com/2020/01/aws-graviton2/) but you gotta imagine that comparable chips will be widely available in the next couple years and it’ll be tough for x86 to keep up.
Ampere’s Quicksilver is coming this year, that’s probably gonna be the real “Graviton2 if you’re not called Jeff Bezos” :)
For now there’s only the first gen Ampere eMAG which is not powerful enough (though has an absolute ton of I/O at a relatively low cost), and the HPC-oriented Marvell ThunderX2 which is too expensive for general server use.
Interesting, thanks for the info. Do you see any road for x86 to be competitive even in the mid term with some of these new ARM chips? It seems between the architecture and production volume advantages it’s going to be really tough.
Side note: I wonder if a viable branch prediction attack mitigation would be to just give everyone a dedicated ARM machine. If they were cheap enough you might not even need to virtualize.
So far it’s been AMD who have all the advantages. Many people became really skeptical of the ARM servers when they saw EPYC Rome. But if Amazon is going all in, producing custom ones… there’s something there. I do hope that Ampere delivers something great this year.
Scaleway did that with some 32-bit Marvell thing a while ago. It is kinda interesting, but eliminates the flexibility of VMs where any VM can have any number of cores from 1 to however many the CPU has.
FWIW I’ve tried switching to Firefox before and it didn’t stick, but the v70 Beta has held up for a couple weeks now. Worth a try!
I submitted this not really because I agree with it but was curious if any lobsters out there have strong feelings. I have been using other languages for a while and not stayed up with recent developments. I generally still like ruby and use it fairly regularly and actually kind of like the constancy of it. I do, however, find myself reaching for other tools more often lately.
However, the claims here that ruby core is working against the general sentiments of the community at large, while examples were provided, seemed slightly exorbitant. So I was curious if others here felt this is the case or not.
Ruby’s dead. How’s that for a strong feeling?
I wish I could be more articulate, but it seems an inexorable conclusion of two forces. One, javascript is the language of the browser. Two, people have been making javascript as easy as possible for decades now.
Notice there are a lot of kids on http://repl.it/talk … I wonder how many of them are genuinely trying to learn ruby rather than JS?
Look, I’m no JS zealot. I’m more of a “skate where the puck is going to be” kinda guy. And the ruby players on the rink have been skating toward other places.
Rails was incredible. I remember how magical that first demo was. You could do in ten minutes what took days in PHP.
Nowadays, doing that is sometimes as easy as running
now
.Oh boy, there I go again expressing super strong confrontational opinions… For what it’s worth, if I’m wrong, you’ll get the satisfaction of rolling your eyes and watching Ruby eat the world over the next decade. But it just doesn’t feel like that’s the world we’ll end up in. Pull up any visualization of “let’s measure the popularity of programming ecosystems” and it’ll look a lot like JS and Python won. https://www.youtube.com/watch?v=wS00HiToIuc
As a ~10 year experienced engineer with the bulk of that time in Rails currently looking elsewhere, I will say: the job market would seem to indicate that Ruby is very much alive and well. I’m curious about the JS comment, as I feel like Ruby has never been the language of the browser.
That said, the mid-to-upper tier web companies these days seem to be doing server side development in Go more often than Rails. I’m looking to change only because I want to learn something new. Ruby/Rails if anything has slowed down its formerly frenetic rate of change, performance has increased to a respectable degree and the ecosystem is filled with robust libraries, a good testing culture… Rails is by no means a terrible stack to work in. YMMV of course. Edit: I do agree with the examples in the original article… the pipe operator in particular strikes me as an ugly solution looking in vain for a problem.
But I find the “Javascript is taking Ruby’s place” remarks very confusing, as Ruby is a server-side language, and Go seems also to have stolen the server-side market share from Nodejs.
Whoops. My point was, JS is the language of the browser. If you want to do much of anything with “webpage + browser,” you need to know JS.
That means if you know JS, you can largely get by without knowing anything else. Or rather, you can learn whatever else you need to learn as you go.
We’re so lucky that the job market gives us so many options. I totally agree; I didn’t mean to imply that if you’re a ruby dev, you should worry about your career prospects. So much of the world was built on Rails that you’ll probably be able to find work for a long time.
All I meant was, younger people don’t seem to be interested in learning Ruby. When those younger people become older people, and the older people become less and less people, the world changes.
If that sounds grim, just be glad you’re not a Lisp guy like me. It’s almost painful to watch everything not use the power of compile-time macros. But at least I get to use it myself.
You’re right that Go has had some surprising momentum here. Much of repl.it is apparently built on Go. But the advantage of JS is the ten hundred million libraries that exist to solve all problems ever thought of. (More than a little bit of hyperbole here, but it’s almost not far from the truth.) If you need to do X and you happen to be using JS, you don’t have to read any docs. You can just type “do X in Javascript” into google, and google turns up an npm package for X, for all basic X. Other languages will always be second-place regarding convenience, for this reason.
Super Serious Projects will tend to be written by people who want absolute type safety and clearly-defined module boundaries and never to see an error. Hell, golang doesn’t even have a REPL. But anyone who’s missed a REPL will tell you that it’s a serious disadvantage not to have a REPL.
As someone who has done quite a bit of both Go and JS, this is emphatically not a point where JS wins.
There’s a bigger number of packages, but bitter experience has not been kind to my trust in ‘this JS package has lots of downloads and github stars and a nice website so it probably works OK’.
“worse is better, lol.” https://www.jwz.org/doc/worse-is-better.html
Most people see a package and expect a solution. But each package solves N% of whatever problem you’re facing. (
N
is usually somewhere between -10 and 97.)As much as I love to write code, I love getting things done quickly without introducing major problems.
npm i foo
tends to work pretty well for that.Hey, cool trick.
npm install sweetiekit-dom
and then dorequire('sweetiekit-dom')()
. Nowwindow
,document
, etc all exist, just like node is a chrome repl. You can even dowindow.location.href = 'https://google.com'
and it’ll load it!console.log(document);
. Unrelated to the convo, but I can’t get over how neat it is.So I decided to test “do soundex [1] in Javascript” just to see if it was true. Yup, second entry on the results page. I checked it out, and found an error—“Ashcroft” encodes to A261, not A226. And given the page it’s on is a gist, there’s no way to report an error.
[1] Why Soundex? Well, I used it years ago in an online Bible to correct Bible book names (such that
http://bible.conman.org/kj/estor
will redirect properly tohttp://bible.conman.org/kj/Esther
).Example URL, or satire?
Neither. When I originally registered for a domain back in the late 90s, I wanted
conner.com
but that one was taken. So wereconner.net
andconner.org
. My backup choices,spc.com
,spc.net
andspc.org
were also taken. I had a few friends that called me Conman (a play on my last name of Conner) so that’s what I went with.In the 21 years I’ve had the domain, you are the first one to question it. No one else has (Weird! I know! [1]). The link is real, try it.
[1] It’s also weird how few people connect my name, Sean Conner, to the actor Sean Connery (one letter from stardom!) At least my name isn’t Michael Bolton.
That’s fine. I just reacted to the domain, and in these contentious times it’s not too hard to imagine a person setting up a Bible site with pointers to the “bad stuff” (depending on your view of what’s bad).
FWIW I”ve used https://www.biblegateway.com/ a few times (mostly because I’d be interested in how the text is presented in different Swedish editions) but that’s an altogether bigger operation.
Agreed. I would hypothesize that the Ruby community is largely being cannibalized by: Go, node.js, Elixir/Phoenix, Rust, Python (for special purpose work like tensorflow) – probably in that order (or maybe swap Rust and Elixir? unsure).
It’s not only due to new tech stacks emerging. Cultural and commercial factors play a massive role.
For instance: Ruby got very popular in the consulting space (it’s a good way to look good by quickly delivering some value, and it tends to generate a need for lots more consulting hours a year or so down the track).
Now that the Ruby community has more-or-less settled on a few standard approaches, it’s no longer as profitable for consulting companies.
I don’t fully agree with that reading. Rails was always also very popular in the old-school agency space, as Rails is extremely quick to get set up. Its insistence on a standard stack might lead to problems in the long run, but it still makes Rails the best framework for getting out of the door quickly and cleanly.
It still remains very popular there.
Also, Rails is often used for internal management applications. I have tons of clients that “don’t do Ruby” until, slowly, you figure out there are tons of small applications running on their servers, essentially providing some buttons and graphs.
The number of companies that “don’t do Ruby” officially, but actually do internally is huge, especially in enterprise.
That’s a great perspective, thanks for bringing it up!
Speaking from the perspective of someone who is both in the Rust project and on the board of a larger Ruby non-profit, I do not agree with the point that Rust cannibalises Ruby. Indeed, we’re still growing, even if the curve is flattening.
I only have a limited set of data points: folks I know of that have moved (or are moving) from Ruby to Rust for a couple of projects (blockchain space). Sounds like you have more empirical evidence here, for sure.
Rust is pretty popular for implementing blockchains, and Ruby isn’t, because you can’t write a competitive PoW function on top of the mainline Ruby implementation. Most Ruby projects don’t need that kind of performance, so your story probably isn’t very typical.
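To make “competitive PoW” concrete: mining is essentially the loop below run billions of times, which is why it lives in languages like Rust or C rather than on a bytecode interpreter. A rough sketch using the sha2 crate, with a toy two-zero-byte difficulty rule, not any particular chain’s real algorithm:

    // Toy proof-of-work loop: find a nonce whose SHA-256 starts with
    // `zero_bytes` zero bytes. Requires the `sha2` crate.
    use sha2::{Digest, Sha256};

    fn mine(header: &[u8], zero_bytes: usize) -> u64 {
        let mut nonce: u64 = 0;
        loop {
            let mut buf = Vec::with_capacity(header.len() + 8);
            buf.extend_from_slice(header);
            buf.extend_from_slice(&nonce.to_le_bytes());
            let hash = Sha256::digest(&buf);
            if hash.iter().take(zero_bytes).all(|&b| b == 0) {
                return nonce;
            }
            nonce += 1;
        }
    }

    fn main() {
        let nonce = mine(b"example block header", 2);
        println!("found nonce: {}", nonce);
    }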
Experienced developers usually extend their toolchain at some point, which comes with a shift of interest. There’s an effect where you have more insight into experienced people picking up new stuff, but tend to overlook the newcomers coming up.
I am of a certain generation in the Ruby community, which has the effect that a) I meet more and more people that don’t know me, despite having a high profile, and b) I tend to only see my own circles and have a hard time keeping track of newcomers.
I agree, and I think they complement each other more than compete right now. Ruby is great at building architecturally slick webapps, which Rust is lousy at. Rust is great for building high-performance low-level stuff, which Ruby is lousy at. It seems like a good pattern, supported by several gems/crates, to build a webapp in Ruby/Rails, and refactor any parts that need top performance out into a Gem written in Rust.
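For illustration, the Rust half of that pattern can be as small as a single C-ABI function (everything here is invented for the example; real projects often reach for binding crates like rutie instead):

    // A hot numeric kernel exposed over a plain C ABI so Ruby can call it.
    // Build as a shared library: crate-type = ["cdylib"] in Cargo.toml.
    #[no_mangle]
    pub extern "C" fn dot_product(a: *const f64, b: *const f64, len: usize) -> f64 {
        // SAFETY: the caller must pass valid pointers to `len` f64s each.
        let (xs, ys) = unsafe {
            (
                std::slice::from_raw_parts(a, len),
                std::slice::from_raw_parts(b, len),
            )
        };
        xs.iter().zip(ys).map(|(x, y)| x * y).sum()
    }

On the Ruby side, the ffi gem can then bind the symbol (roughly attach_function :dot_product, [:pointer, :pointer, :size_t], :double), and the hot loop never touches the Ruby VM.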
I very much doubt Ruby devs are moving to a language as low-level as Rust.
Elixir I could very much believe.
Roughly a third of the Rust programming language community comes from dynamic languages, mostly Ruby and Python.
How do they deal with lifetimes? Whenever I use Rust, I tap out at lifetimes because it just gets too confusing for me.
The zen of Rust is using ownership in most places. Lifetime problems usually arise when you are building convoluted structures that are better handled through cloning and copying anyway. Use clone() liberally until you are very sure of what you want to do, then refactor to lifetimes.
I wrote a glimpse into this here last year: https://asquera.de/blog/2018-01-29/rust-lifetimes-for-the-uninitialised/
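A minimal sketch of that progression (types and names invented for the example):

    // Stage 1: clone your way out. No lifetime annotations anywhere.
    struct Request {
        body: String,
    }

    fn summary_owned(req: &Request) -> String {
        // One allocation per call, but the borrow checker never objects.
        req.body.clone()
    }

    // Stage 2: once the data flow is settled, refactor to a borrow.
    // The 'a only says: the returned &str lives no longer than the Request.
    fn summary_borrowed<'a>(req: &'a Request) -> &'a str {
        &req.body
    }

    fn main() {
        let req = Request { body: "hello".to_string() };
        println!("{}", summary_owned(&req));
        println!("{}", summary_borrowed(&req));
    }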
Also, Edition 2018 made lifetimes a lot easier.
Thanks for the link! :)
You’re welcome!
I’m pretty excited about the mruby/Rust integration, especially if I can eventually ship a single executable with embedded Ruby code.
Is that being talked about anywhere? I’d love to follow that conversation as well.
I know about mrusty, but it seems to be inactive; I’m just hoping that people are still working on this (I might even join in if I get some free time).
I have never used Ruby in anger, but gosh that Immutable Strings bug getting closed out as “not going to do it, don’t care you all want it, just use a magic comment” would make me think that the Ruby you’ve got is the Ruby you’ll ever get.
I don’t think that languages have to keep being developed (how many Lisp dialects are there that don’t change?), but if you think Ruby has deficiencies now, I wouldn’t expect them to change and that would make me worried too.
I am maintaining a ruby codebase that’s >10 years old.
I don’t want ruby to make backwards-incompatible changes! The language is established now; it’s far too late for that.
It sucks that you need a linter to check that your files start with a magic comment (# frozen_string_literal: true) in order to get sensible behavior, but not nearly as much as not being able to upgrade & benefit from runtime improvements/security patches just because they’ve changed the semantics of strings for the first time in 25 years.
This is an awful sentiment. How would you like being told that for a project you maintain, you can no longer make any big changes, ever? Because some user from 20 years ago doesn’t want to update their scripts, but wants bleeding edge Ruby.
The world doesn’t always work that way, and hopefully Ruby doesn’t listen to people like that.
I actually think it’s a pretty reasonable statement. One of my favorite things about Java is that it’s almost 100% backwards compatible. We just dusted off a 20-year-old (!) game and it pretty much worked on the latest JDK. That’s awesome.
So are C, C++ and other natively compiled languages. The advantages of a standardized lower layer!
If you want to maintain a project where you break things to make other things better, find one where the things you break don’t affect people. There’s no shortage of them, and it’s even easy to start your own!
If you want to be the trusted steward of a large community, you have to consider how your choices affect the people who have placed their trust in you. There’s nothing wrong with not wanting that! It’s a lot of work. I don’t want it either. Thankfully, some people do, and that’s why communities have some people at the center and others at the periphery. The ones at the center are the ones doing the hard work of making it possible.
Hopefully they do. It’s great to have new language features and to advance the state of the art, but it’s also great to be able to run my code from a few years ago without having to rewrite it.
There are ways to have both, of course, which involve making compromises. For example, in the area of scientific computing I’m currently working in, there are a lot of Python 2 hold-outs who don’t want to migrate to Python 3, even though the changes are few* and Python 2 support is due to end. But many Python programmers are happy with Python 3 and have ditched 2 altogether already.
*few, but important in context: changing how division works is a big deal for numerical simulations.
Maintainers who don’t want to be told that should not maintain languages.
This kind of thinking is how you get things like Python 3 being out for over a decade while some people still do everything in 2. If you intend for your language to be widely used, you have to come to terms with the fact that even minor changes that are highly beneficial will be massively painful, and might even destroy the language entirely, if they break old code.
Python 3 actually introduced breaking changes, which in hindsight were all really good. I had to convert dozens of projects over a couple of years, and it was not that bad once I understood how things worked in Python 3. The biggest change was that strings are now Unicode strings instead of ASCII byte strings, and it was very confusing at first.
IMO python 3 is a great example of why I’m glad I don’t maintain any python codebases, despite loving the language.
In a maintainer-friendly world, developers would still have to write a bunch of from __future__ import X lines at the top of every file today, which sucks differently, but IMO not nearly as much. If you were somewhat forward about it, files that don’t have those lines could emit deprecation warnings when loaded, warning that those defaults will be enabled in a few more years’ time.
I’m sure that a lot of decisions in Ruby in the past were questionable; I just didn’t know about them before I started learning Ruby. However, now that I keep an eye out for programming languages in general, I feel like it’s made me a bit of a snob. I tend to agree with the author of the blog post that it leaves a bad taste in my mouth for the language to be changing like it is (both the language itself as well as the process by which those changes are happening), but I’m not sure these things would have bothered me if I were coming to it as a new programmer, like I did with Ruby 1.9.
I made a comment somewhere lamenting that Ruby was adding Enumerable#filter, because it was ambiguous whether it was equivalent to #select or #reject. The response I got was that it was a good change because that’s the way every other language does it (for the record, it landed as an alias of #select). Ruby’s just kind of weird sometimes, and I think I have accepted a lot of the legacy weirdnesses. So in that respect, what’s one more feature I won’t use?
In the end, I don’t have much stake in the game: if Ruby’s new path really starts to bother me, there are plenty of other languages to pick up. But until then, it will be the first language I turn to for quickly translating thought into code, weird language design cruft aside.
Ruby 1.9, in hindsight, was extremely well managed. It was an opt-in to breakage for getting fundamental problems out. They handled that switch in a very good way, making Ruby 1.9 the clearly better version while releasing 1.8.7, which closed the gap in between both versions, making it feasible to write codebases that run on both with relative ease. Sure, there were issues and not every aspect was perfect, but comparing e.g. the Python 2.7/3.0 story, I’m sad that the Python community hasn’t been watching and learning from that.
Agreed, and I find Python’s rise in popularity comes in spite of the poor developer experience - compatibility and dependency management - so I wish Ruby had made more headway in non-Rails contexts.
Agreed. select and reject is a naming choice I have decided to steal; I wish filter just stopped existing (or returned (selectedElements, rejectedElements), which is essentially what Enumerable#partition already does).
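Funnily enough, that (selected, rejected) shape does exist elsewhere; Rust’s Iterator::partition, for instance, returns exactly that pair (Rust used only for illustration):

    fn main() {
        let nums = vec![1, 2, 3, 4, 5, 6];
        // filter plays the role of Ruby's select: keep where true.
        let evens: Vec<i32> = nums.iter().copied().filter(|n| n % 2 == 0).collect();
        // partition hands back both halves at once: (selected, rejected).
        let (evens2, odds): (Vec<i32>, Vec<i32>) =
            nums.iter().copied().partition(|n| n % 2 == 0);
        assert_eq!(evens, evens2);
        assert_eq!(odds, vec![1, 3, 5]);
    }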