1. 2

    I’m glad this was written. This is one of the mistakes I see constantly from people new to Erlang and Elixir. They end up trying to use OTP because they hear how awesome it is, but still don’t truly understand how it is meant to be used (I’m sure most, if not all, people go through this phase while learning Erlang and Elixir. I know I did). As this article says, it ends up with people creating artificial bottlenecks in their systems. I’m sure this has led some individuals and teams to abandon the BEAM because they never put the time in to truly understand OTP, while thinking they had.
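    To make the “artificial bottleneck” concrete, here is a contrived sketch (module and function names are mine, not from the article). A single named GenServer that every caller funnels CPU-bound work through serializes work that could have run in parallel:

    defmodule Bottleneck do
      use GenServer

      def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

      def init(:ok), do: {:ok, %{}}

      # Every caller blocks here, one request at a time, no matter how many
      # schedulers the machine has.
      def expensive(arg), do: GenServer.call(__MODULE__, {:expensive, arg})

      def handle_call({:expensive, arg}, _from, state) do
        {:reply, do_expensive_work(arg), state}
      end

      # Stand-in for real CPU-bound work.
      defp do_expensive_work(arg), do: arg
    end
    

    If there is no shared state to protect, a plain function call (or a Task per request) gives you the parallelism back.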

    The other mistake I most commonly see with Elixir is misuse of the pipe operator (|>). People try to force the pipe in everywhere, even where it shouldn’t be used. Something as simple as

    def foo(bar) do
      bar |> baz()
    end
    

    would be better as

    def foo(bar) do
      baz(bar)
    end
    

    But people also go the other way, coming up with complex scenarios just so they can use the pipe. I saw the following recently:

    val |> foo() |> bar() |> (fn(a) -> {a} end).() |> Tuple.insert_at(0, :ok)
    

    when

    result = val |> foo() |> bar()
    
    {:ok, result}  
    

    is infinitely clearer.

    If you’ve had a similar experience with OTP in the past, I would suggest taking another look at the BEAM language of your choice. In my experience over the last 3-4 years, the community is very friendly and willing to help with most issues you come across. I think the BEAM has great things to offer a number of projects.

    1. 5

      Is there a reason not to use something like kerl to install a more modern version of Erlang? v19 is two years old, and there have been some significant performance improvements in the interim.

      1. 2

        I’ve had good results with asdf, but I haven’t tried it out on OpenBSD yet.

        1. 6

          For what it’s worth, asdf just uses kerl for its Erlang builds.

      1. 4

        I just released the initial version of my new library, havoc. It provides chaos-monkey-style testing for the BEAM: it can randomly kill processes as well as TCP and UDP connections anywhere in your cluster, and you select which nodes the killing happens on (a rough sketch of the core idea follows the feature list). I am hoping to add some new features this week. In no particular order, some of them are:

        • Be able to target which application(s) you want to test
        • Add support for killing other types of ports
        • Give some sort of feedback about orphaned processes
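        For anyone curious, here is a minimal sketch of the core idea (this is not havoc’s actual API; the names are mine). It picks a random process on a node, kills it, and lets the supervision tree recover; a real tool would also filter out system and supervisor processes:

        defmodule ChaosSketch do
          # Kill one randomly chosen process on `node` every `interval` ms and
          # rely on supervisors to restart whatever falls over.
          def run(node \\ Node.self(), interval \\ 5_000) do
            victim =
              node
              |> :rpc.call(:erlang, :processes, [])
              |> Enum.random()

            Process.exit(victim, :kill)
            Process.sleep(interval)
            run(node, interval)
          end
        end
        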
        1. 1

          I am not entirely sure if they are actually doing this or if it is just a contrived example to show how you can add multiple endpoints to a single Phoenix application. I personally do not understand why someone would do something like this in reality. It is so easy to deploy multiple applications to one or more BEAM nodes that I don’t think I would ever do this.

          For example: our load balancer is connected to port 80, where we expose the public API, but in our app we also want to expose a private diagnostic API on port 8080. By doing this, public traffic can only hit our public API (80). We get pretty good protection of the diagnostic API without authentication and authorization - of course, we may still want to add those to restrict access, even for traffic from our private network.

          So the only thing they get from this is security through obscurity? The majority of people would not think (or maybe even know how) to change the port to 8080, and they even admit that you would still want to add authentication and authorization to prevent people from actually hitting it. So why not just skip the second endpoint and build it into the main application, or make a second Phoenix or Plug app that displays the diagnostic information they are looking for? If you deploy the two applications to the same BEAM node, you would still have access to all of the same diagnostic information.
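          To illustrate the second-Plug-app option, here is a hedged sketch (the module name and route are mine, and it assumes the plug_cowboy package is a dependency): a tiny diagnostics router started on its own port under your supervision tree.

          defmodule Diag.Router do
            use Plug.Router

            plug :match
            plug :dispatch

            # Expose whatever diagnostic info you want, behind real auth if needed.
            get "/health" do
              send_resp(conn, 200, "ok")
            end

            match _ do
              send_resp(conn, 404, "not found")
            end
          end

          # Started alongside the main app:
          #   children = [{Plug.Cowboy, scheme: :http, plug: Diag.Router, options: [port: 8080]}]
          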

          As an aside, I don’t really think this should have the distributed tag.

          1. 9

            It appears that the author’s main complaint about Actors, and I think more specifically Erlang, is that they do not have a type system similar to Haskell’s. They want something that is able to track side effects. The only argument they make against Actors is the claim that they are not composable.

            So what makes actors not composable? Well, an actor (and an Erlang process) is constructed from a function A => Unit. That this function returns Unit rather than a meaningful type means that it must hardcode some action to take once the result (whose type is not even shown) is available. It is this unfortunate property that destroys any chance we have at combining actors in any sort of general way like we would in a typical combinator library.

            This is basically saying that any language that does not track side effects in the type system (which is currently the vast majority of languages) does not compose because something could, somewhere, return the equivalent of Unit. And yet people are still writing large software systems in languages without any real type system.

            I think this is an unfortunate piece, written by someone who thinks the only way to write software is with an HM-style type system.

            I should also add that I don’t think Actors are the be-all and end-all of concurrent software systems. But as someone who has been using Erlang and Elixir almost exclusively at work for a few years, I think the model is pretty great at what it does.
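            As a concrete counterpoint (my example, not the author’s): the receive loop of a process “returns Unit”, but callers normally compose via request/reply, getting a meaningful value back just like a function call. A real implementation would tag requests with a unique reference to match replies:

            defmodule Doubler do
              def start, do: spawn(fn -> loop() end)

              # The loop itself returns nothing useful...
              defp loop do
                receive do
                  {:double, n, from} ->
                    send(from, {:result, n * 2})
                    loop()
                end
              end

              # ...but the synchronous wrapper returns a value, so it composes.
              def double(n, pid) do
                send(pid, {:double, n, self()})

                receive do
                  {:result, value} -> value
                end
              end
            end

            # pid = Doubler.start()
            # 10 |> Doubler.double(pid) |> Doubler.double(pid)  #=> 40
            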

            Quick Edit

            I think the author should have made the title “Actors are not the best concurrency model”. They even say themselves “I don’t know exactly what the replacement looks like yet”. I’m all for finding better abstractions, but to say something isn’t good because of a single issue you have with it is, in my opinion, going overboard.