Question: If gen_event ran handlers concurrently by default, how would you go about running handlers sequentially?
I don’t think that’s a very common use case. However, I very frequently need something like a gen_event that handles events concurrently and that doesn’t crash all the handlers if one handler crashes. It is really easy to implement a sequential gen_event using a gen_server. Implementing something like what José Valim wrote in his post each time is more difficult and error-prone.
Unless a handler calls erlang:exit, it’s unlikely that the event manager will crash if a handler does. As I understand it, the default behavior is to remove (silently by default, or sending a message in the case of add_sup_handler) the failing handler and carry on.
That said, I think you have a point in that it’s reasonably simple to turn a gen_server into something almost identical to a gen_event, and avoid gen_event’s fiddly bits. My main point in the article is that I think that running handlers sequentially is a good default because of the affordances it offers developers.
I guess at this point I’m torn between whether or not it is better to have a design that is less broadly useful, but more flexible, versus one that is widely usable, but inflexible. Given this blog post, I’m currently leaning towards more flexible.
Thanks for reading!
Author here. Thanks for posting!
Short summary: When I feel stifled at the computer, I step away and write down my thoughts in sentence form on paper, without filtering or editing. I find that this practice helps me in two ways. First, it forces me away from my computer so I can give all my attention to a particular problem. Second, I become more resilient to distractions by having a concrete paper trail to re-read.
Erlang is a safe, functional language with high reliability. NIFs can boost performance, but they can break safety or other properties. This looks like a good use case for one of the safe systems languages we have now: the code stays safe even when breaking Erlang’s model a bit for performance. There might even be a way to use advanced features to encode whatever structure is necessary to get NIFs to play nice with the scheduler. That part is purely speculative, though.
You might be interested to know that Rustler supports annotating Rust NIFs as dirty NIFs :)
NIFs are rad. I know they’re often used for performance reasons, but I really like them for exposing specific libraries or OS SDK functions (e.g. the Core* stuff on macOS) to Erlang code. The NIF API provides a mechanism for keeping data around on the C side, along with a “resource” type that lets you wrap & return a C pointer and pass it around in your Erlang code. That can be super nice: you do all the allocation/release nitty-gritty neatly in your C code, and leave yourself free to just think about the application structure & flow in Erlang, which for me is one of the areas where it really shines.

Also, NIFs provide a lot of “interface”-style stuff that (AFAICT) ports just don’t. When asking about interfacing with C code, Erlang people often say “just write a port and communicate over stdin/stdout”, which is great, but then you basically have to define your own comms mechanism — whereas with a NIF it’s just a function call, and all the necessary functions for converting types are included.
Oh, maybe you (or some other lobster!) can help me with a thing. To the best of my knowledge, Erlang (and hence Elixir) use bignums for integers…unfortunately, the docs don’t say anything about how to get that across over in C land. Any ideas?
It doesn’t appear as though the NIF interface gives you an easy way to get Erlang bignums into your NIF. The icky way to pass in a bignum might be to encode it as a binary or string and decode it in the NIF. If you’re interested, you could take a peek at big.c and big.h. There are functions big_to_bytes, which should return an array of digits, and big_to_double, which should return a double (if the bignum fits).
Sounds right. I’ve not had to share numbers bigger than 64-bit ints so I’ve just used enif_get_int64 (or enif_get_long, enif_get_uint64 etc) to do the conversion (and relevant enif_make_* the other way), and haven’t had any issues with that.
I don’t think so. I think you want to use ei functions like ei_encode_* and ei_decode_* for that.
Aha, sure enough:
int ei_decode_bignum(const char *buf, int *index, mpz_t obj)

Decodes an integer in the binary format to a GMP mpz_t integer. To use this function, the ei library must be configured and compiled to use the GMP library.
Thanks for the direction.
NIFs are extremely useful, but they’re also a foot-gun with a hair trigger. In addition to being able to take down your Erlang node from a faulty NIF call, calling long-running NIFs (where long-running usually means >1ms) can have degenerate effects on VM performance.
Erlang scheduler threads are written to rapidly switch out Erlang processes and communicate with one another. When a scheduler runs a long-running NIF, that scheduler is no longer able to communicate with the other scheduler threads until the NIF finishes running. You can play around with this to get a better feel for how NIFs can misbehave and have large effects on the BEAM.
Please don’t take this as discouraging writing NIFs, though! If you are considering writing a NIF, please read this carefully. It will save you a lot of pain and misery down the road.
If memory serves, Erlang recently added better support for helping NIFs play nice with the scheduler. I haven’t had occasion to use that, though, since if I’m writing C I want it in a port for stability anyways. :)
I am currently reading The Design of Everyday Things and Whole Earth Discipline.
I’ve heard a lot of good things about TDoET, and so far it’s a little bit underwhelming. On the other hand, Whole Earth Discipline has been extremely compelling, and has completely changed my perspective on dense cities and nuclear energy.
I really enjoyed The Design of Everyday Things, but I’ll admit I think the book could have been about a third as long and been just as useful. It was first published 29 years ago. Perhaps attention spans were better back then.
So, the code is creating a list of N elements by picking them out of a given list. I have to point out, though, that I am not fluent in Erlang, so I probably missed the point. Can someone point me to the inherent beauty? While this surely is compact code, I keep looking for its cleverness.
I think part of the point is that you accurately described what the code does despite your not being fluent in Erlang. There are a couple of nice things (in my opinion) about the language and the implementation:
- ?CHARS is converted from a list to a tuple. Tuples in Erlang have O(1) lookup, with the trade-off that they’re expensive to update. In this case, using a tuple makes a lot of sense.
- Cs and CsLen are calculated at compile time, so, despite appearances, they don’t wind up being recalculated for every call to new/1. This means you don’t have to compromise between efficiency and keeping ?CHARS looking clean.
- At first glance, I thought that list_to_tuple(?CHARS) would be called with every invocation of slug:new/1. After looking at the optimized assembler code, I’m happy to see that I was wrong! In fact, literals for both Cs and CsLen are moved into the code for executing the list comprehension.
A quote attributed to Joe Armstrong is “Make it work, then make it beautiful, then if you really, really have to, make it fast.” Turns out this code, which I find to be pretty elegant, is compiled down to basically optimal BEAM bytecode.
For work, I mostly write Erlang and C, while at home I write Lisp/Scheme, and try as many different languages as I have time for. My editor of choice is Emacs, though I did spend several years using vi/Vim, which I often use over ssh. I use tmux, st and zsh for all my terminal needs, and recently switched from i3 to dwm for window management. I use Arch Linux at home, and Ubuntu at work.
The name of the library mentioned in that post unfortunately clashes with another NIF-related project.
This is a pretty cool use of parse transforms! Unfortunately, from what I’ve seen, NIFs aren’t usually as small as in the example in the post, so it usually makes sense to have them in their own files. I also view the complications involved with writing a NIF as a test to see if I really want to use a NIF over a safer alternative. If writing NIFs were really simple, I might be tempted to use them more often than I should.
(Aside: I was the presenter in this story)
This is something I struggle with almost daily. I’m always afraid that by speaking up and asking a question, I will come across as uninformed to my peers.
When I think about it, though, some of the strongest impressions I have of people were made when they asked a really good question. Julia is a great example of this; some of her questions have sparked really great discussions on the internet.
It’s often the case that before a really great question is asked, simpler questions need to be asked and answered first. Because of this, I think it’s really important that cultures centered around knowledge work encourage asking ‘dumb’ questions. This is one of the reasons Papers We Love Montréal isn’t filmed; there have been some really amazing discussions that started with people asking questions they might otherwise have not.
I hope I am understanding this properly, because this looks amazing!
It looks like with FreeBSD and bhyve, you can install another OS to a partition on your disk, then boot/mount it with bhyve. I have always dreamed of being able to boot a separate partition on my disk using a hypervisor, and lately I’ve been looking to switch my desktop over to FreeBSD. I hope to spend some time this summer playing with this!
I’ve been working on an alternate translation scheme for list comprehensions in Erlang. The goal is to build as few intermediate lists as possible. I hope that my work will eventually turn into a pull request to Erlang/OTP; that’ll be a really nice feather in my cap.
Aside from that, I’ve been reading up on various compiler optimizations, as well as taking stock of what kinds of optimization the Erlang compiler does. Perhaps that will turn into a blog post in the near future!
They haven’t made a stable release in a while, but it looks like Factor is still quite active. Their Github repo is buzzing with activity.
I really liked Factor – it was the first open-source project I got involved in. I believe Slava is working on the Swift compiler team now though.
Fabulous video of a point that’s often mentioned, but not followed. The diversity of Picbreeder and its focus on individual autonomy (rather than convergent consensus) reminded me of ARPA’s past model of hiring top talent and paying them top dollar to research/develop whatever they wanted. The result was a diverse range of immensely valuable products, which have served as crucial ‘stepping stones’ to many innovations today.
Now for a bit of a side-track…
Another interesting ‘stepping stone’ from this video led me to explore the idea of genetic algorithms for composing music. A really fascinating development in this field is GenJam, an interactive genetic algorithm that professor John Biles uses as an improvisation partner at gigs.
Fitness tests are often not useful for composing original music, since it’s very difficult to express the idea “this musical phrase sounds nice” in code. As a result, some genetic algorithms for composition respond to human feedback, like Picbreeder. The result seems to be a much more broad exploration of the musical search space, usually resulting in innovative musical phrases that arise from ‘stepping stones’.
A plausible next step to human feedback is the co-evolution of musical ideas and music critics, where musical ideas that are highest rated by music critics generate more offspring. A prevailing idea for the heuristic used by these music critics is exactly what you might expect after watching this video: Novelty Search.
Here’s a link to Biles’s paper on GenJam if anybody is interested. Also, a paper on the co-evolution of musical phrases and music critics.
I have a sort of multi-stage system for taking notes. I first write down notes in one of many notebooks (class notebooks, or a notebook I have for taking notes while I read for pleasure). Once I finish a book or course, I go through my notes, and transcribe the most salient information into emacs. I use the Deft extension for emacs, which monitors a folder of org-mode files in my Dropbox. Each project I work on, book I read, or course I take has its own .org file in this folder.
I like to write with fountain pens, and I found that the paper in Moleskine journals is not quite thick enough; sometimes ink bleeds through. When taking notes from books, I tend to use a Clairefontaine journal or a Rhodia dot pad. If you think you might move toward fine writing instruments, you may want to consider getting a notebook with thicker paper. I’ve tried Moleskine, Leuchtturm, and Clairefontaine notebooks, and Clairefontaine is my favorite.
The timing of this series of articles with regards to my hobbies is perfect. I’ve been playing around in various composition/live-coding environments (CommonMusic, SonicPi, PureData), and I’ve recently started learning Erlang via Joe’s Programming Erlang book. I loved reading about Erlang’s bit syntax for describing/implementing protocols, and Joe’s osc.erl is a great example.
I’m going to have a lot of fun with this :-)
It might have been the late hour, but I found this article hilarious. The author clearly comes from a place of knowledge of functional programming, and she uses just the right blend of technical concepts and dogma to be really funny!
There are a bunch of really silly jokes in this article but I think some points could merit further discussion:
Articles touting the merits of FP and explaining Monads are becoming very common and generic, and it may be argued that the general response to them hasn’t changed. Would a shift in focus from explanation (articles explaining FP concepts) to demonstration (case studies of successful projects where FP was the ‘secret ingredient’) help with this?
Different programming languages and paradigms offer different benefits and trade-offs to programmers, and choices about them should be made on a project-by-project basis. I think too many programmers are stuck thinking that “X is the best programming paradigm”. Instead, I think it would benefit people to learn (and be taught) several different paradigms, helping them escape the “if all you have is a hammer, everything looks like a nail” trap.
Every programming ingroup has some amount of dogma. I think that a lot of this dogma actually stems from certain groups trying to overcome the dogma from other groups, causing people to become further polarized. This is commonly referred to as the backfire effect.
There isn’t much right now, but I’m (slowly) working on changing that: http://rkallos.com/