Threads for maladat

  1. 7

    Erlangers and their decades of collective experience seem like a much-overlooked selling point for Elixir. I jumped on board the Elixir train a few years ago after reading Thomas’ Seven Concurrency Models book. There wasn’t as much written then, so I looked to Hébert’s Learn You Some Erlang, Cesarini and Vinoski’s Designing for Scalability with Erlang/OTP, and watched several of Armstrong’s and Virding’s videos on YouTube. That helped me tremendously in understanding how to think through the actor model and build OTP applications.

    1. 2

      Thomas’ Seven Concurrency Models book

      Do you mean this book?

      https://pragprog.com/book/pb7con/seven-concurrency-models-in-seven-weeks

      Or a different one?

      I’d be interested in reading a good overview of different concurrency models.

      Thanks!

      1. 1

        Yep, that’s the one… and now I realize I named the wrong author entirely; it should be Paul Butcher. In any case it was a good read.

        1. 1

          Thank you.

    1. 0

      So far I’ve only found one solution that is actually robust. Which is to manually check that the value is not nil before actually using it.

      This seems reasonable to me. If anything, I’d consider knowing how and when to use this kind of check part of basic language competency, since it is how Go was designed.
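
      A minimal sketch of that kind of guard in Go (findUser is an illustrative stand-in, not a function from the post):

      ```go
      package main

      import "fmt"

      // findUser stands in for any lookup that may return nil.
      func findUser(id int) *string {
          if id == 1 {
              name := "alice"
              return &name
          }
          return nil
      }

      func main() {
          user := findUser(2)
          // The manual nil check: nothing in Go's type system forces this,
          // and forgetting it means a nil-pointer dereference panic at runtime.
          if user == nil {
              fmt.Println("no such user")
              return
          }
          fmt.Println("hello,", *user)
      }
      ```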

      1. 9

        We expect people to be competent enough to not crash their cars, but we still put seatbelts in.

        That’s perhaps a bad analogy, because most people would say that there are scenarios where you being involved in a car crash wasn’t your fault. (My former driver’s ed teacher would disagree, but that’s another post.) However, the point remains that mistakes happen, and can remain undiscovered for a disturbingly long period of time. Putting it all down to competence is counter to what we’ve learned about what happens with software projects, whether we want it to happen or not.

        1. 9

          I wish more languages had patterns. Haskell example:

          {-# LANGUAGE OverloadedStrings #-}
          import Data.Text (Text)
          
          data Named = Named { name :: Text } deriving Show
          
          greeting :: Maybe Named -> Text
          greeting (Just thing) = "Hello " <> name thing
          greeting _ = ""
          

          You still have to implement each pattern, but it’s so much easier, especially since the compiler can warn you when you miss one (GHC’s -Wincomplete-patterns, included in -Wall).

          1. 3

            Swift does this well with Optionals

            1. 5

              You can even use an optional type in C++. It’s been a part of the Boost library for a while and was added to the language itself in C++17.
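
              A minimal sketch with C++17’s std::optional (find_name here is an illustrative stand-in):

              ```cpp
              #include <iostream>
              #include <optional>
              #include <string>

              // A lookup that may not find a value; the absence is part of the type.
              std::optional<std::string> find_name(int id) {
                  if (id == 1) return "alice";
                  return std::nullopt;
              }

              int main() {
                  auto name = find_name(2);
                  if (name.has_value()) {
                      std::cout << "hello, " << *name << '\n';
                  } else {
                      std::cout << "no such id\n";
                  }
                  // value_or supplies a fallback without branching.
                  std::cout << find_name(1).value_or("unknown") << '\n';
              }
              ```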

              1. 4

                You can do anything in C++ but most libraries and people don’t. The point is to make these features integral.

                1. 1

                  It’s in the standard library now so I think it’s integral.

                  1. 4

                    If it’s the exception rather than the rule in standard-library return types, it doesn’t matter much, though. Both the C++ stdlib and the wider ecosystem rely primarily on error handling outside of the type system, as do many languages with even more integrated Maybe types.

              2. 2

                Yep. Swift has nil, and by default no type can hold a nil. You have to annotate them with ? (or ! if you just don’t care, see below).

                var x: Int = nil // error
                var x: Int? = nil // ok
                

                It’s unwrapped with either if let or guard let

                if let unwrapped_x = x {
                    print("x is \(unwrapped_x)") 
                } else {
                    print("x was nil")
                }
                
                guard let unwrapped_x = x else {
                    print("x was nil")
                    return
                }
                

                Guard expects that you leave the surrounding block if the check fails.

                You can also force the unwraps with !.

                let x_str = "3"
                let x = Int(x_str)! // crashes at run time if the conversion fails
                

                Then there are implicitly unwrapped optionals, which are pretty much like Java references in the sense that if the value is nil when you try to use it, you get a run-time crash.

                let x: Int! = nil
                
            2. 7

              Hey, I’m the author of the post. And indeed that does work, which is why I’m doing it currently. However, as I try to explain further in the post, this approach has some significant downsides. The main one is that it can easily be forgotten, and if you do forget, you will likely find out only via a runtime panic, which, with some bad luck, will occur in production. The point I try to make is that it would be nice for this to be a compile-time failure.

              1. 1

                Sure, and that point came across. I think you’d agree that language shortcomings - and certainly this one - are generally excused (by the language itself) by what I mentioned?

            1. 6

              The title seems misleading. I just quickly skimmed it and it’s pretty dense. So, do correct me if I’m wrong.

              It looks like it’s an O(N x M) algorithm (pre-processing) followed by an O(N) algorithm (the actual sorting). Then they say the O(N) part takes O(N x L) in the worst case. So, a two-step sorting algorithm that delivers O(N x M) + (O(N) or O(N x L)) performance?

              1. 6

                The other thought that has been rolling around in the back of my head is that the test inputs they use are generated according to nicely behaved, friendly probability distributions.

                I suspect that you could generate pathological inputs that would cause the neural network first step to fail to get the input “almost sorted” enough for the second step to work. That would invalidate the theoretical complexity claim they make, and then the question becomes, in practice, how hard is it to generate pathological inputs and how likely is it that a real-world input would be pathological?

                1. 5

                  I figured it was an elaborate prank to disguise a lookup table…

                  1. 4

                    They claim M and L are both constants. This is the part of the claim I find dubious… I suspect that as problem sizes grow it might turn out these aren’t actually constants.

                    They also don’t seem to include model training in the complexity, apparently because they use the same size training set for every problem size. This also might not be a valid assumption if input is big enough.

                    They did problem sizes from 10^3 to 10^7. The log2 difference is only about 10. If they were a little conservative in selecting their “constants”, their algorithm would work even if the “constants” were logarithmic.

                    1. 3

                      They claim M and L are both constants. This is the part of the claim I find dubious… I suspect that as problem sizes grow it might turn out these aren’t actually constants.

                      The thing that made me wonder is that they said one of the constants could change the results of their analysis if they changed its size. That made me default to assuming it wasn’t constant so much as constant for this instance of the method. The next set of problems might require that number to change. Given the online nature of these algorithms, it might even have to change over time while doing the same job. I don’t think we can know yet.

                      It was interesting work, though.

                      1. 2

                        I agree with you that it’s interesting work. It just feels like they really wanted their paper to stand out, and felt like doing sorting with a neural network wasn’t an exciting enough title, so they made a really big claim (namely, an O(N) sorting algorithm). The problem is that the claim they made has a specific, rigorous meaning, and I don’t think they did the analysis to PROVE the claim is true (although it might still be true anyway).

                  1. 7

                    Programming Languages: Application and Interpretation (PLAI) is pretty good, and has the added benefit of being free online.

                    http://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-04-26/

                    Essentials of Programming Languages is another good intro PLT book.

                    Programming Language Pragmatics is a good book, and it’s useful. I have a copy. If I lost it, I’d replace it. I refer to it occasionally.

                    Whether it is a good choice as the primary text for a PLT class depends on the specific PLT class.

                    Programming Language Pragmatics is basically a large collection of small sections about specific programming language features. Each feature is introduced, described, and several code snippets in different languages are provided to illustrate the use of that feature (by the end of the book, dozens of languages have been mentioned). What is conspicuously absent is the theoretical basis for the feature and any real detail about how the feature is actually implemented. (TLDR: There’s a reason the book is called “Programming Language Pragmatics” rather than “Programming Language Theory.”)

                    If your PLT course is about “learn about using a bunch of different programming language features,” then Programming Language Pragmatics makes a lot of sense as a primary text.

                    Personally, I think that’s a perfectly reasonable subject for a course, but I wouldn’t call that course “Programming Language Theory.”

                    If your PLT course is about “learn the theoretical basis of programming languages and use that theory to implement a simple programming language and several variations of it,” or something similar, then I think Programming Language Pragmatics is a poor choice - that just isn’t what the book is about. It might be handy if you’re having trouble understanding what the pieces you’re building do, but it won’t really help you build them.

                    As an example, you mention type systems. Programming Language Pragmatics only has a few pages total on type systems, type checking, and type inference. There’s no mathematical description of types, no discussion of how to actually DO type checking, and no discussion of how to actually DO type inference. The entire section basically boils down to “some programming languages have types, and will make sure that the types match up - some languages will even figure out the types for you!”

                    1. 9

                      Please note that there’s also a second edition of PLAI, which is also available at the same link:

                      http://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-04-26/

                      I think the second edition is much better than the first. (Of course, I’m a bit biased!) It’s the result of teaching the first edition for about a decade, finding much better ways of explaining its concepts, and eventually transcribing those better ways back into the book.

                      The language of implementation is also slightly different. This has some advantages and disadvantages.

                      Incidentally, the second edition has, as of a week or two ago, just been translated into Chinese, though that may not be of much interest to people on an English-language thread. (-:

                      1. 3

                        This was what we used in our first-level PL class (at Cal Poly), and I just want to say thanks for writing such an approachable book!

                        While there wasn’t much about types, I found it was perfect for the initial dip to get the context of types while making a basic PL.

                        1. 2

                          My pleasure — thanks! There isn’t much on types because I didn’t see the value in producing a watered-down version of TAPL. Rather, I show people the notation and what they need to know so that they can read TAPL.

                        2. 2

                          I’ll have to take a look at the second edition, I enjoyed the first.

                          Thank you for your generosity in making such a valuable resource available at no cost.

                          1. 1

                            Thank you kindly! It’s a delight.

                        3. 1

                          So I “think” the course is a bit of both. But I’ve only had the intro so far, and I’m doing the first exercise tonight, so I don’t yet have a full picture of how the course will be.

                          For instance, most of the intro covered BNF, programming paradigms, and a short introduction to different languages. The teacher did mention hoping that everyone would, at the very least, understand closures perfectly by the end of the course.

                        1. 3

                          I hadn’t heard of this, but it looks like a cool project.

                          I do a lot of LaTeX, and my initial thought was, “I can already do LaTeX->PDF, why would I do HTML-with-LaTeX->PDF?”

                          Then I thought, “Oh, yeah. LaTeX is fantastic for text and equations and stuff, but a few weeks ago I was trying to do some not-very-complicated layout stuff, and I got it done, but it was a huge pain in the neck.”