One question I didn’t see explicitly addressed there: are they all using the same random numbers?
Ideally, for benchmarking you want every implementation to use the same deterministic RNG and seed value. Otherwise you could get confounding from, say, one implementation whose weak RNG generates the same few numbers very often, causing the same entries to be hit repeatedly and making the cache hit ratio look better than it should.
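A quick Ruby sketch of what “same deterministic RNG and seed” means in practice (illustrative only, not taken from the benchmarks in question):

```ruby
# Two generators built from the same seed yield the identical
# sequence, so every implementation under test sees the same inputs.
SEED = 42  # any fixed value works, it just has to be shared

rng_a = Random.new(SEED)
rng_b = Random.new(SEED)

draws_a = Array.new(5) { rng_a.rand(100) }
draws_b = Array.new(5) { rng_b.rand(100) }

draws_a == draws_b  # => true
```

Handing each benchmarked implementation its own `Random.new(SEED)` would remove the RNG as a confounder.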
I didn’t. The reasoning behind it is that the standard deviation is very small for almost all cases, and in TruffleRuby’s case the big stddev has to do with deoptimizations. So I didn’t deem it necessary, as the recorded values are very stable, which I guess is thanks to the law of large numbers (every single iteration basically does 1000 small iterations/playouts), so it evens out. Couple that with lots of iterations/time and it should be fine here, I think, although I agree it’s a best practice in general.
On the other hand, if one locked the RNG seed to a specific value, it also has the problem (imo) that it might never trigger some edge cases (which would potentially benefit the JITs).
So, overall I don’t think it’d make any discernible difference here.
First of all, awesome writeup and I’m glad PBT was useful to you!
Re your comment about reversing a list, @drmaciver has a great tweet on that:
Every time someone uses reversing a list twice to demonstrate property-based testing, I take a drink.
No, this isn’t a drinking game, I’m just being driven to drink by bad examples.
I agree with him. People have trouble seeing how to use PBT because all of the examples given are small trite toys! We need more stuff like what you wrote that shows it actually being useful in real production cases.
@owi also has some really good posts on this, if you want to read more. Check out his blog!
Yeah, the first time I encountered PBT was, I believe, in the appendix of an introductory Clojure book years ago. Their example was division and multiplication, and sure, that makes sense, but my examples are hardly ever so easy. I really like the round-trip property for serialization/deserialization; that’s how Jason does some of its testing.
@owi’s blog seems really good - I just took a sneak peek, but whenever people take the time to produce illustrative diagrams, you know they care.
Currently I’m reading Fred Hebert’s PBT book to level up my skills, in hopes of being able to apply PBT effectively a lot more.
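The serialization round-trip property mentioned above can be hand-rolled in a few lines of Ruby against the stdlib JSON module (a sketch without a PBT library, so no shrinking; the generator details are invented):

```ruby
require "json"

rng = Random.new(1234)

# Generate a random JSON-compatible value; floats are left out so
# that equality stays exact, and depth is capped to keep docs small.
gen_value = lambda do |depth|
  roll = depth >= 2 ? 0 : rng.rand(3)
  case roll
  when 1 then Array.new(rng.rand(1..3)) { gen_value.call(depth + 1) }
  when 2 then Array.new(rng.rand(1..3)) { [rng.bytes(4).unpack1("H*"), gen_value.call(depth + 1)] }.to_h
  else [rng.rand(-1000..1000), rng.bytes(6).unpack1("H*"), true, false, nil].sample(random: rng)
  end
end

# The property: parsing what we serialized gives the original back.
100.times do
  doc = { "data" => gen_value.call(0) }
  raise "round-trip failed for #{doc.inspect}" unless JSON.parse(JSON.generate(doc)) == doc
end
```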
PBT is for sure good at finding edge cases. I used it for some physics simulations and the number of times I shook my fist at the sky and angrily yelled “FLOATING POINT” is nonzero. Part of that was my own fault for not restricting the range of generated values though.
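A concrete instance of that fist-shaking, with nothing physics-specific involved: float addition isn’t associative, so any property that quietly assumes it is will fail on plain IEEE doubles.

```ruby
# The grouping changes the rounding, so an "order doesn't matter"
# property fails outright on ordinary doubles.
a, b, c = 0.1, 0.2, 0.3

left  = (a + b) + c  # => 0.6000000000000001
right = a + (b + c)  # => 0.6

left == right        # => false
```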
That’s what I think is so interesting: it makes you think about the properties of your function and what input data it actually accepts, because PBT is good at generating data that makes you go “wait, I’d never want my function to get this!”, which I think is already good value in itself :)
Yeah, maybe I should add record to the list. Someone over at Reddit also just told me that I forgot to mention/use :array. I think I might have a blind spot for the more Erlang-y data structures (although, as I understand it, arrays are rarely used there either). Sorry for that - will see about it after breakfast :) Not sure how 4 more entries will affect graph readability though.
edit: after reviewing it again, of course records are just tuples tagged with their type, so performance-wise they shouldn’t differ from tuples to an interesting degree. Would still be good to mention :)
We can define indexes on multiple columns, and it’s important that the most limiting column is the leftmost one. As we usually scope by couriers, we’ll make courier_id the leftmost.
Also worth mentioning that range columns like date/time should always be the last column in a compound index if you can afford it, so the range is densely stored.
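For the schema in the post, that could look something like this (a sketch; the index name is made up):

```sql
-- Equality column (courier_id) first, range column (time) last and
-- descending, so the latest rows per courier sit densely together.
CREATE INDEX courier_locations_courier_id_time
  ON courier_locations USING btree (courier_id, time DESC);
```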
Looks like you have a misunderstanding of EXPLAIN ANALYSE’s output. The first step in the query plan for the DB view is the bitmap index scan, then the bitmap heap scan, then the sort; not the other way around. The plan reads “inside out”.
Whenever I arrive at a point where the answer to “why did it take so long” is “the underlying query took this much time”, the next step for me is to see exactly what query was generated and what its plan is (EXPLAIN), with actual timings of the execution if possible (EXPLAIN ANALYZE).
I wonder if the newly created index will help when you want to fetch multiple courier_ids each with their most recent location. With a query like
SELECT cl.*
FROM couriers c
CROSS JOIN LATERAL (
  SELECT id, courier_id, location, time, accuracy, inserted_at, updated_at
  FROM courier_locations cl
  WHERE cl.courier_id = c.id
  ORDER BY time DESC
  LIMIT 1
) AS cl
WHERE c.id IN (...)
it might even be possible to use the old two indexes, perhaps on the condition that the index on the “time” column be created in descending order: CREATE INDEX courier_location_time_btree ON courier_locations USING btree (time DESC). The multicolumn index would likely benefit from descending order as well. (Thinking about it further, a BRIN index might be better still.)
There is a lot of guessing in this comment because I don’t have the data, and I lack the intuition to know how the query planner would behave. There are people in #postgresql on Freenode who could tell just from looking at your case, after getting a few answers from you.
Hey, thanks for your comment! I haven’t investigated this case yet, as we mostly display single couriers, and when we don’t, we currently make multiple requests either way. Making the index descending is pretty nice; I feel like I should try that out.
The different index types as well - true, I didn’t investigate them here at all. I usually only do that once my current solution stops helping 😅 I should read up on them again!
It looks like various places in the code expected the transaction to silently fail, and relied on other checks to detect a problem and throw different exceptions. But when ActiveRecord::Rollback gets propagated, those checks never run, and ActiveRecord::Rollback is swallowed at the top level, resulting in test failures like:
ActiveRecord::RecordNotDestroyed expected but nothing was raised.
This is complete lunacy. Using ActiveRecord::Rollback is NEVER safe, since Rails / ActiveRecord code clearly uses unsafe nested transactions that will just swallow your exception in some situations. If they didn’t, there would be no test failures for that PR.
wow, great research - I feel like I should have done that :D. I felt like opening a discussion about it again at Rails, but as other similar bugs have been dismissed with “it’s not perfect, but this would break too many apps”, I sort of gave up on it before I even started :| Maybe it’s worth another try though…
Yeah, that’s probably what would happen. I don’t think it’s a huge loss though. I believe all errors should be specific, detailed, with all useful context, so I’d never raise a ridiculously generic exception like ActiveRecord::Rollback. It’s a hokey code ergonomics trick that looks cute at first glance, but isn’t actually useful. Even if it worked properly, I can’t think of any situation where using ActiveRecord::Rollback would improve code quality. Raise an exception that actually means something.
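To make the swallowing concrete, here’s a toy Ruby model of the failure mode (emphatically not ActiveRecord’s real implementation; all names are invented): a transaction helper that treats one exception class as a silent rollback signal.

```ruby
# Toy model: the helper rescues the signal itself, so the caller
# can never observe that anything was rolled back.
class RollbackSignal < StandardError; end

def transaction
  result = yield
  { status: :committed, value: result }
rescue RollbackSignal
  { status: :rolled_back, value: nil }  # swallowed right here
end

outcome = transaction do
  transaction { raise RollbackSignal }  # inner helper eats the signal
  # any "detect the failure and raise something meaningful" check
  # placed here still runs as if nothing happened
  :kept_going
end

outcome  # => { status: :committed, value: :kept_going }
```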
I’m an on-again-off-again Ruby coder. Right now I’m “on” again (for personal stuff), but I really like messing with any and all languages I can find. It’s not like I get bored with Ruby; I still use it for things here and there. So where did I go? Nowhere!
I see that my title implies something wrong - I don’t think Ruby is dying or that people are frenetically leaving it, not by a long shot. Maybe “What are Rubyists looking at/interested in?” would have been a better title, but also a bit less catchy :)
I definitely also think that being polyglot is awesome and on the rise - you don’t have to “leave” or “go” somewhere - you have a toolbelt with many options to choose the most suitable one.
I’m a Rubyist who moved to Elixir. The BEAM seems to be fundamentally a better foundation for web development than Ruby can offer: concurrency, fault tolerance, and not having your service fall over because of one expensive request. There are fewer libraries (for now), but it’s easier to add libraries to Elixir than to build shared-nothing concurrency into Ruby.
Elixir is great, and I feel like most of the major building blocks are there. It’s not just Elixir itself though - especially these days I feel like Ecto is so much better. ActiveRecord triggering DB requests at any time, along with all those validations, can take a hard toll. Today I had to make preloading an association work while only selecting certain columns. Not nice. It would be nicer in Ecto, as Ecto is just a tool to work with the database.
Thanks for Saša’s talk - didn’t know that one yet. On the “to watch list” now :)
I think the survey as constructed overlooks the demographic of people like me who knew several languages, used ruby, and then went back to mostly using other languages that we already knew before learning ruby.
I was a haskeller who learned ruby mostly because of metasploit, and realized it was a fine language for quick scripts, and I still pick it up now and again, but I’ve gone back to mostly using Haskell because I liked it much better.
I tried to balance many things while aiming to keep it short & sweet. Before I “set the survey free” I was going to add a sentence about also checking the boxes if you had done something before and then gone back to it / renewed your interest in it. I decided it might clutter it too much, and lots of people don’t really read the text anyway.
So yeah, definitely - maybe/hopefully I find another/better way next time.
That’s exactly me. I know a variety of other languages, but I learned them all prior to Ruby. The only new ones I’ve done anything with are Elixir and Go.
This is me also, sort of. I never started a real project in Ruby, but have contributed to Ruby projects. The reason I never did much else with it is that it isn’t a viable option for the things I enjoy doing.
Clojure. It felt like the natural progression, especially since I was interested in diving deeper into FP. Now I can’t not love s-exps and structural editing, as well as even more powerful meta-programming.
(Also notable that I saw Russ Olsen, author of Eloquent Ruby, moved to Clojure, and now works for Cognitect.)
I’m really interested in Clojure, but compared to Ruby there seems to be an order of magnitude fewer jobs out there for it.
I can’t swing a dead cat without seeing 4 or 5 people a week looking for senior Rubyists. I’ve seen maybe 2 major Clojure opportunities in the last 6 months.
Clojure never really got me personally - I would have liked it to, but the weirdly short names, friends telling me that for libs tests are considered more “optional”, & other things were ultimately a bit off-putting to me. Still wouldn’t say no, just - switched my focus :)
Tests are definitely not considered optional by the Clojure community. However, you’re likely to see a lot fewer tests in Clojure code than in Ruby.
There are two major reasons for this in my experience. The first is that development is primarily done using an editor-integrated REPL, as seen here. Any time you write some code, you can run it directly from the editor to see exactly what it’s doing. This is a much tighter feedback loop than TDD. The second is that functions tend to be pure and can be tested individually without relying on the overall application state.
So, most testing in libraries tends to be done around the API layer. When the API works as expected, that necessarily exercises all the downstream code. It’s worth noting that Clojure Spec is also becoming a popular way to provide specifications and do generative testing.
Didn’t know that Elixir had doctests! I find them one of the most fascinating parts of Python: the first draft of an incredible feature that just never got a second draft. Does Elixir do anything differently with them than Python does? It seems so, based on your positivity.
Hey, I haven’t written Python in a long time, so I didn’t even know Python had doctests. What I find though (compared to how I’d imagine doctests if they existed in Ruby) is that they’re easier to write thanks to immutability. As the effects of functions aren’t side effects, the code lends itself better to doctesting: what you want to see is just the return value of the function.
Also, the increased use of simpler data structures makes the session setup easier than I imagine it would be with most objects.
It’s certainly a fad, just like Ruby and JS. Which is to say something that is going to deliver a ton of business value over the next decade and foster its own pop culture in a feedback loop we’re all accustomed to.
As someone who learned a good bit of Erlang 10+ years ago, I was initially worried about added complexity. Especially after being burned by the CoffeeScript nightmare.
I started writing Elixir daily at work about 10 months ago. A couple weeks of using Elixir disabused me of that. Elixir is a really seamless implementation and provides valuable support for everyday programming. The only reason I might end up reading Erlang code is if I have a problem with a dependency.
If you’re a glutton for punishment you can call Elixir code from Erlang.
Saša Jurić wrote up some excellent points about why elixir. Not saying we all should do it, but there are some advantages, helpful features and superb Erlang interoperability.
Another thing that I enjoy about Elixir is the community. Not just the people: the community is a “melting pot” of different communities - Erlang and Ruby mainly, but there’s also a good number of people from Haskell, JavaScript and others. Together, ideas meet and new concepts emerge.
But Erlang is not the same as OTP and the BEAM, and Elixir is “just” another language that uses OTP and the BEAM. Sure, it’s close to Erlang in some (many, even) respects, but it’s not simply a “prettier Erlang”. If anything, it’s a better engineered and much faster Ruby.
These articles are really making me impressed with Elixir’s design from a maintainability point of view. That MP3 parser looked close to the informal pseudocode and header definition. I also like how it lets you specify something while also saying to ignore it.
EDIT: @PragTob I just read the Bleacher Report article, since that one was new to me. It seems to be an exception to your claim next to the link that “if you re-read the articles, though, other benefits of Elixir take as much the stage as performance…”. In the Bleacher Report piece, performance and resource efficiency are about all they talk about. They’re the main reason they switched and justify further investment. They even went on to explain how they had to invest in new ways of benchmarking performance because of the difference. So maybe you want to change that wording so it doesn’t imply performance was a footnote there.
Hey, yeah thanks - I think I rewrote/rearranged that portion late some night :| You are definitely right, the good code is just a minor part in that post. Will adjust.
I’m of the firm opinion that Elixir is going to be, for me, the main language for backend production systems for the next decade of my career. Having tried PHP, Ruby, JS/Node, Java, Python, C/C++… it just feels right. But.
Buuuuut.
The thing that makes Elixir good beyond the points mentioned in this article is a pervasive conservatism and desire for quality, mostly because of its adjacency to Erlang/OTP and that community of responsible engineers solving unsexy problems. Elixir has adapted the (often clunky) tooling of Erlang and has done a lot to bring it up to standards developers expect in modern projects, but without going whole-hog new-shiny as we’ve seen with, say, ActiveRecord or Rails or Meteor or whatever else.
Except, that doesn’t last. As more and more developers (looking at you, Rails folks) come streaming in to get into the Next Big Thing, expect that conservatism to give way to poorly-written libraries, to new frameworks to give conference talks, and to code written in complete ignorance of the performance characteristics in the underlying system.
I’m currently neck-deep in a legacy Phoenix system (yes, such things do exist!), and I’ve seen (in our and others’ projects):
Well-meaning developers using Verk (a port of Sidekiq/Resque, basically) for work queuing instead of just normal supervision trees
Excessive use of tooling for hot code swapping/reloading just because Elixir/Erlang/OTP supports it, regardless of the cost when things don’t work correctly. Simpler deployment makes sense
Pipeline operators used in place of bog-simple nested parens for arithmetic
Pervasive use of maps where structs would be better typed and more reliable
Pervasive use of string values where atoms would be more efficient
Use of blind exception throwing instead of god-fearing Erlang {:ok, ... }, {:error, ... } tuples that can be handled correctly
Overly-clever metaprogramming (I’m looking at you, Phoenix router)
Ignorance of core Erlang documentation and features (docs, for some reason, not reliably included by the Elixir folks…probably to discourage their use)
And outside of that, I’ve seen a (subjectively) massive increase in the number of me-too and one-off projects on Hex that show that people are sharing buggy, poorly-tested libraries and others are piling on because Elixir is TEH NEW AWESOME.
I fully expect somebody (maybe @355e3b) to write something like “The Gentrification of Erlang/OTP” to explore this troubling trend further.
The way to fix this is not to complain and grumble but to do the blogging, talking, and teaching to make things better. To that end I’d much rather see @355e3b teach us what he knows and help us all get better at Erlang/OTP and maybe even Elixir as a byproduct.
This is the problem with technologies that get HN/blog hype. Add to this mediocre learning materials written by people with no production experience to make a quick buck (“buy my book/course on Elixir for $5!!!”).
The issue is simple: People rushing to get experience with Elixir and not learning it or OTP properly. It’s all about being able to put it on your resume or GitHub instead of actually learning it.
–
I fully expect in five years to see people say that you don’t need OTP to be an Elixir programmer.
Personally, I’ve also seen OTP use go the other way: “There is this great OTP stuff, so we gotta use it!” Where a simple function would suffice, people use supervisors etc. for no reason other than to use them. Or “I have to use OTP, so I’ll create a single GenServer that I delegate all requests to”, which basically takes a parallel system (all requests in Phoenix are their own process) and creates an artificial bottleneck by sending everything to a single process.
A serious question about your Verk remark (note I haven’t used it, so I don’t know what it does; this is more a general question about background job systems): I see that I’m less likely to need a background job system in BEAM land. However, when I keep jobs in the BEAM (Supervisors, GenServers, maybe ETS, etc.) and don’t do hot code upgrades, the jobs get lost when I restart or stop & start the application, don’t they? Am I missing something? Same with maximum retries and exponential backoff - should everyone reimplement those themselves (we, for instance, make a lot of API calls to notoriously unreliable partner APIs)? When I really need those, I’d happily use a library to get them. Am I missing something essential here?
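For reference, the retry-with-exponential-backoff part is small to hand-roll; a Ruby sketch of the logic being discussed (a library version would add jitter, error filtering and persistence):

```ruby
# Retry a block up to max_attempts times, doubling the pause after
# each failure: base_delay, 2*base_delay, 4*base_delay, ...
def with_retries(max_attempts: 5, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue StandardError
    raise if attempts >= max_attempts  # give up, re-raise last error
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

# Fails twice, then succeeds on the third attempt:
result = with_retries { |n| raise "flaky" if n < 3; "ok on attempt #{n}" }
```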
My initial reaction for that would be to look at dets and even an Elixir-wrapped Mnesia. For the retries and exponential backoff, again there are Erlang libraries that have solved them for quite a while, and yet people are still kinda introducing new ones. It’d be nice if we got more of that standardized into the standard lib. :)
And yeah, excessive use of OTP is also a problem–people get really enamored with tools and may misapply them.
The code examples are really wonderful to back up the points he’s making. I’ve only dabbled with Erlang/Elixir but am bookmarking this to see how I can apply these techniques to current problems I’m trying to solve with small one-off scripts.
Great question!
Thanks for the nice words!
Exactly! I think property based testing is a good teaching tool.
Great writeup!
Note that the Record type is also very useful for getting the speed of tuples with the readability of maps.
Thanks!
This is truly next level. Well done!
Interesting! Thanks - do you have a link with more explanation that I could read and link to?
To add to that, explain.depesz.com is really helpful.
Thanks, in fact I wondered about that because it seemed weird. 🤦♂
Too far in between EXPLAIN ANALYZEs for me… perhaps luckily? :D
This is dumb. And it’s not even for a reason: the original commit that implements nested transactions with savepoints is just buggy, and in the original discussion about the diff no one seemed to notice. Searching around, I’ve found numerous issues referencing this problem without the devs acknowledging it exists, until I found this issue, which recognizes it as a problem. Then the guy who recognized it opened a PR that tries to fix the bug, but the CI build for the PR had a bunch of failures and he just kinda gave up.
Lame.
Saša Jurić’s talk “Solid Ground” explains it well and has some nice demos. https://www.youtube.com/watch?v=pO4_Wlq8JeI
I also wrote a related post: http://nathanmlong.com/2017/06/concurrency-vs-paralellism/
As for Elixir - I also have a list of non-performance reasons I like it: https://pragtob.wordpress.com/2017/07/26/choosing-elixir-for-the-code-not-the-performance/
Thanks for the criticism!
OCaml for me, though I still turn to Ruby when I just need to code something up fast, or want to use code to explore something.
Sure glad I added the OCaml option :D Must admit, never really looked at it - probably I should :)
I came to OCaml via Clojure and before that Python. So not exactly from Ruby but close enough.
What’s been your success rate when bringing carrion to job fairs?
The way the local job market is, I doubt it’d damage my chances that much.
Clojure is absolutely great and so is Russ. He still loves Ruby (as well) though :)
I still maintain that one of the best books I ever read for my coding skills is Functional Programming Patterns in Scala and Clojure.
Clojure never really got me personally - I would have liked it, but the weirdly short names, friends telling me that for libs tests are considered more "optional", and other things were ultimately a bit off-putting to me. Still wouldn't say no, just switched my focus :)
Tests are definitely not considered optional by the Clojure community. However, you're likely to see a lot fewer tests in Clojure code than in Ruby.
There are two major reasons for this in my experience. The first reason is that development is primarily done using an editor-integrated REPL, as seen here. Any time you write some code, you can run it directly from the editor to see exactly what it's doing. This is a much tighter feedback loop than TDD. The second reason is that functions tend to be pure and can be tested individually without relying on the overall application state.
So, most testing in libraries tends to be done around the API layer. When the API works as expected, that necessarily tests that all the downstream code is working. It’s worth noting that Clojure Spec is also becoming a popular way to provide specification and do generative testing.
great article! so many times I caught myself saying ‘yes!’. thanks so much for writing this.
ha, thanks a lot for the nice words. Glad you enjoyed it! :)
Didn’t know that elixir had doctests! I find them one of the most fascinating parts of python, the first draft of an incredible feature that just never got a second draft. Does elixir do anything different with them than Python? Seems so based on your positivity.
Hey, I haven’t written Python in a long time so I didn’t even know Python had doctests. What I find though (compared to how I’d see doctests if they existed in Ruby) is that it is easier to do due to Elixir's immutable nature. As functions return values rather than producing side effects, it lends itself better to doctesting - what you wanna see is just the return value of the function.
Also, the increased usage of simple data structures makes the session setup easier than I’d imagine it would go with most objects.
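To illustrate for anyone who hasn't seen them: in Elixir, `iex>` examples in the `@doc` string get executed as tests by ExUnit's `doctest` macro. A minimal sketch (module and function names are made up):

```elixir
defmodule TextStats do
  @doc """
  Counts the words in a string.

      iex> TextStats.word_count("to be or not to be")
      6

      iex> TextStats.word_count("")
      0
  """
  def word_count(string) do
    string
    |> String.split()
    |> length()
  end
end
```

In the test suite, a single `doctest TextStats` line inside an ExUnit case is all it takes to run the examples and compare the actual results against the documented ones.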
I kind of feel like Elixir is a fad which adds complexity - if you want to use erlang, just write erlang.
It’s certainly a fad, just like Ruby and JS. Which is to say something that is going to deliver a ton of business value over the next decade and foster its own pop culture in a feedback loop we’re all accustomed to.
As someone who learned a good bit of Erlang 10+ years ago, I was initially worried about added complexity. Especially after being burned by the CoffeeScript nightmare.
I started writing Elixir daily at work about 10 months ago. A couple weeks of using Elixir disabused me of that. Elixir is a really seamless implementation and provides valuable support for everyday programming. The only reason I might end up reading Erlang code is if I have a problem with a dependency.
If you’re a glutton for punishment you can call Elixir code from Erlang.
Saša Jurić wrote up some excellent points about why Elixir. Not saying we all should do it, but there are some advantages, helpful features and superb Erlang interoperability.
Another thing that I enjoy about Elixir is the community. Not just the people, but the fact that the community is a “melting pot” of different communities - Erlang and Ruby mainly, but there’s also a good amount of people from Haskell, JavaScript and others. Together, ideas meet and new concepts emerge.
But Erlang is not the same as OTP and the BEAM, and Elixir is “just” another language that uses OTP and the BEAM. Sure, it’s close to Erlang in some (many, even) respects, but it’s not simply a “prettier Erlang”. If anything, it’s a better-engineered and much faster Ruby.
These articles are really making me impressed with Elixir’s design from a maintainability point of view. That MP3 parser looked close to the informal pseudocode and header definition. I also like how it lets you specify something while also saying to ignore it.
EDIT: @PragTob I just read the Bleacher Report article since that was new to me. It seems to be an exception to your claim next to the link that “if you re-read the articles, though, other benefits of Elixir take as much the stage as performance…” In the Bleacher Report, performance and resource efficiency is about all they talk about. It’s the main reason they switched and justify further investment. They even went on to explain how they had to invest in new ways of benchmarking performance due to the difference. So, maybe you might want to change it to not imply performance was a footnote in that one since it was about all they talked about.
Hey, yeah thanks - I think I rewrote/rearranged that portion late some night :| You are definitely right, the good code is just a minor part in that post. Will adjust.
I’m of the firm opinion that Elixir is going to be, for me, the main language for backend production systems for the next decade of my career. Having tried PHP, Ruby, JS/Node, Java, Python, C/C++… it just feels right. But.
Buuuuut.
The thing that makes Elixir good beyond the points mentioned in this article is a pervasive conservatism and desire for quality, mostly because of its adjacency to Erlang/OTP and that community of responsible engineers solving unsexy problems. Elixir has adapted the (often clunky) tooling of Erlang and has done a lot to bring it up to standards developers expect in modern projects, but without going whole-hog new-shiny as we’ve seen with, say, ActiveRecord or Rails or Meteor or whatever else.
Except, that doesn’t last. As more and more developers (looking at you, Rails folks) come streaming in to get into the Next Big Thing, expect that conservatism to give way to poorly-written libraries, to new frameworks to give conference talks, and to code written in complete ignorance of the performance characteristics in the underlying system.
I’m currently neck-deep in a legacy Phoenix system (yes, such things do exist!), and I’ve seen (in our and others’ projects) libraries that fail to return `{:ok, ... }` / `{:error, ... }` tuples that can be handled correctly. And outside of that, I’ve seen a (subjectively) massive increase in the number of me-too and one-off projects on Hex that show that people are sharing buggy, poorly-tested libraries and others are piling on because Elixir is TEH NEW AWESOME.
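For readers who haven't seen the convention: well-behaved libraries return tagged tuples so that callers can pattern match on success and failure. A hypothetical sketch (the function is made up):

```elixir
defmodule Config do
  # Returning tagged tuples lets every caller handle both outcomes explicitly.
  def parse_port(string) do
    case Integer.parse(string) do
      {port, ""} when port in 1..65_535 -> {:ok, port}
      _ -> {:error, :invalid_port}
    end
  end
end

case Config.parse_port("8080") do
  {:ok, port} -> IO.puts("listening on #{port}")
  {:error, reason} -> IO.puts("bad config: #{inspect(reason)}")
end
```

Libraries that instead raise, return bare values, or invent their own shapes force every consumer to special-case them.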
I fully expect somebody (maybe @355e3b) to write something like “The Gentrification of Erlang/OTP” to explore this troubling trend further.
Haskell’s policy of “avoiding success at all costs” is looking more sane by the year.
(I’m probably misusing that quote.)
misbracketing it at any rate (:
The way to fix this is not to complain and grumble but to do the blogging, talking, and teaching to make things better. To that end I’d much rather see @355e3b teach us what he knows and help us all get better at Erlang/OTP and maybe even Elixir as a byproduct.
This is the problem with technologies that get HN/blog hype. Add to this mediocre learning materials written by people with no production experience to make a quick buck (“buy my book/course on Elixir for $5!!!”).
The issue is simple: People rushing to get experience with Elixir and not learning it or OTP properly. It’s all about being able to put it on your resume or GitHub instead of actually learning it.
–
I fully expect in five years to see people say that you don’t need OTP to be an Elixir programmer.
Hey - thanks for your excellent remarks!
Personally I’ve also seen OTP use go the other way - “There is this great OTP stuff so we gotta use it!”, where a simple function would suffice, people try to use supervisors etc. for no reason other than to just do it. Or “I have to use OTP so I create a single GenServer which I’ll delegate all requests to”, which takes a parallel system (all requests in Phoenix are their own process) and creates an artificial bottleneck by sending everything to a single process.
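A sketch of that anti-pattern (names made up) - every caller funnels into one process, so `GenServer.call/2` serializes work that could have run in parallel in each request's own process:

```elixir
defmodule Bottleneck do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # Every Phoenix request process blocks here, waiting its turn.
  def do_work(args), do: GenServer.call(__MODULE__, {:work, args})

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call({:work, args}, _from, state) do
    # The expensive part runs inside the single server process,
    # one message at a time.
    {:reply, expensive_work(args), state}
  end

  defp expensive_work(args), do: args # placeholder
end
```

If there's no shared state to protect, a plain function call in the request process would do the same work with no queueing at all.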
A serious question about your Verk remark (note I haven’t used it so I don’t know what it does; this is more generally about background job systems): I see that I’m less likely to need a background job system in BEAM land. However, when I have it in the BEAM (Supervisors, GenServers, maybe ETS etc.) and don’t do hot code upgrades, the jobs get lost when I restart/stop&start the application, don’t they? Am I missing something? Same thing with maximum retries and exponential backoff - should everyone re-implement those themselves (we do a lot of API calls to notoriously unreliable APIs of partners)? When I really need those, I’d happily use a library to achieve them. Am I missing something essential here?
My initial reaction for that would be to look at dets and even an Elixir-wrapped Mnesia. For the retries and exponential backoff, again there are Erlang libraries that have solved them for quite a while, and yet people are still kinda introducing new ones. It’d be nice if we got more of that standardized into the standard lib. :)
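For what it's worth, a hand-rolled retry with exponential backoff is only a few lines in Elixir - a sketch, not production code, and exactly the kind of thing I'd rather pull from a library that also handles jitter, persistence, and max-delay caps:

```elixir
defmodule Retry do
  # Calls fun up to max_attempts times, doubling the delay after
  # each {:error, _} result. Returns the first {:ok, _} or the
  # final {:error, reason}.
  def with_backoff(fun, max_attempts \\ 5, delay_ms \\ 100) do
    case fun.() do
      {:ok, result} ->
        {:ok, result}

      {:error, _reason} when max_attempts > 1 ->
        Process.sleep(delay_ms)
        with_backoff(fun, max_attempts - 1, delay_ms * 2)

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```

For unreliable partner APIs you'd call it like `Retry.with_backoff(fn -> PartnerAPI.fetch(id) end)` (where `PartnerAPI.fetch/1` stands in for whatever HTTP call you're making).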
And yeah, excessive use of OTP is also a problem–people get really enamored with tools and may misapply them.
The code examples are really wonderful to back up the points he’s making. I’ve only dabbled with Erlang/Elixir but am bookmarking this to see how I can apply these techniques to current problems I’m trying to solve with small one-off scripts.
Ha, thanks for the nice words! I hope it helps you!