1. 38
  1.  

  2. 8

    For Go I’d do a plain loop. No need for range. You should avoid make and just use a literal and append.

    s := []int{}
    for i := 0; i < len(a); i++ {
        s = append(s, a[i] + b[i])
    }
    
    1. 3

      Another trick is to just use var s []int. The difference is that s starts out nil, but append happily accepts a nil slice as its first argument.

      1. 2

        Am just some random person, not a style authority, but I’d write it his way in real life, for clarity. When I know the final size is sz, make([]int, sz) explicitly tells the reader that, and range tells them I’m trying to loop over a slice. Programmers can figure it out pretty easily from a C-style loop, but calling on the built-in facilities for sized alloc/loops over slices kind of spells the goal out.

        I’d do it that way in a sorta-pedagogical setting like Coda’s too, to try to teach an explicit style. (As a bonus, sized make avoids reallocation and copying, which can help if it’s big.)

        Again, not so sure I’m right, and it partly depends on how you read the goal of the exercise. But you can at least say some folks would do it his way.

      2. 6

        In Haskell there’s zipWith (+) :: [Int] -> [Int] -> [Int] but you have to explain sections, partial application, types, partial application with types, and why the actual type is

        Num a => [a] -> [a] -> [a]
        

        So you need type classes, constraints, and type variables. To add a third list you need zipWith3, but it’s obvious that this is a hack. A good story for polyvariadic functions is hard to come by in the typed universe. I’m not sure if that’s honestly a good thing or a bad thing.
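
        For concreteness, here’s a GHCi-style sketch of the pieces above (the session is mine, just for illustration):

        ghci> zipWith (+) [1, 2, 3] [4, 5, 6]
        [5,7,9]
        ghci> :t zipWith (+)
        zipWith (+) :: Num c => [c] -> [c] -> [c]
        ghci> zipWith3 (\a b c -> a + b + c) [1, 2, 3] [4, 5, 6] [7, 8, 9]
        [12,15,18]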

        But all that said, I’m not actually confident.

        1. 5

          I’ve accepted that dynamically-typed languages are always going to have a prettier user experience. Haskell has map, zipWith, zipWith3, …, zipWith7, while Clojure has variadic map. Clojure just wins on prettiness (and I’m a Haskeller). We could get something comparable in the static world using tricks inspired by dependent types, but the type signatures would be incomprehensible to the layman.

          I tend to think that dynamic typing is better for building “Swiss Army knives”, by which I mean end-user tools that do a lot of things and don’t require a lot of thought for that end user. Swiss Army knives are great tools, but they’re terrible infrastructure. You wouldn’t build a cathedral out of them. You’d use stones or bricks. Likewise, infrastructure is better served by static typing when possible.

          Static typing wins on multi-developer, production systems and especially when they’re performance-critical. Debugging runtime errors is a task that I want to spend as little of my life on as possible. That said, I worry about our ability to attract the world at large to the kind of work that we do.

          For example, I’d love to be wrong on this, but I think that dynlangs are always going to be out ahead on exploratory data analysis. The concept of the data frame (which is what a lot of people use R, a terrible language with some decent libraries, for) is either dynamically typed or “aggressively” (for lack of a better word) dependently typed. For example, the “remove linearly dependent columns” operation on a data frame is hard to put a fully-static type signature on, because the shape of the data frame (treating the data frame as a record/row type of vectors) is data dependent.

          Of course, one could use a simpler data frame model and just let the core data structure be M.Map String [Double], with interpretations (i.e. is this a numeric variable or a category variable?) at a metadata level. Then, however, you’re leaving some of the checking (i.e. is “sepal.length” a valid field?) to happen at runtime.
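
          A minimal sketch of that simpler model (Frame, iris, and column are names I’ve made up purely for illustration):

          import qualified Data.Map as M

          -- a toy data frame: column name -> column of doubles
          type Frame = M.Map String [Double]

          -- hypothetical example data
          iris :: Frame
          iris = M.fromList [("sepal.length", [5.1, 4.9]), ("sepal.width", [3.5, 3.0])]

          -- whether "sepal.length" is a valid field is only discovered at runtime
          column :: String -> Frame -> Maybe [Double]
          column = M.lookup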

          1. 7

            I kind of disagree, in that I think there are “dependently typed tricks” which aren’t so confusing and don’t break inference. Record typing à la Elm or PureScript comes to mind as something I fake with big hairy typeyology in Haskell but which is really reasonable and simple when built in. The same could probably happen with polyvariadicity.

            I’ve honestly come to think that polyvariadicity is just a bad design though. I’m much happier with fixed arguments and partial application.
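
            For instance (just my own sketch; sumLists is a name I made up): instead of a polyvariadic map, you can pass the collections as a single list argument and fold with a fixed-arity function:

            -- sum any number of equal-length lists without a variadic function
            sumLists :: Num a => [[a]] -> [a]
            sumLists = foldr1 (zipWith (+))

            -- sumLists [[1,2,3], [4,5,6], [7,8,9]] == [12,15,18]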

        2. 4

          For Python there’s NumPy:

          >>> import numpy as np
          
          >>> a = np.array([1, 2, 3])
          >>> b = np.array([4, 5, 6])
          
          >>> a + b
          array([5, 7, 9])
          
          1. 2

            Or plain old map:

            >>> import operator
            >>> map(operator.add, [1, 2, 3], [4, 5, 6])
            [5, 7, 9]
            
          2. 4

            Julia:

            julia> [1,2,3] + [4,5,6]
            3-element Array{Int64,1}:
             5
             7
             9
            
            1. 4

              It seems like the Go and the Clojure examples (maybe among others!) represent the pervasive way to do stuff in their languages. In a Lisp you’re going to stick to map and functional transforms; in Go you’re going to be writing imperative code with loops. So when you grok the explanation in those languages, you have concepts that serve you pretty broadly if you’re going to be doing a lot with the language.

              On the other hand, the Python and JavaScript that I write mixes the approaches, i.e., sometimes I’m mapping or using list comprehensions, other times writing out imperative loops. (I can’t speak to all the languages because, e.g., I don’t use Scala.) I’m not so sure if that says anything good or bad about the languages: on the one hand, more ways to skin a cat can make code less uniform and more reflective of the individual author’s preferences; on the other hand, tasks certainly come up where one approach or the other just seems to fit really well, so maybe it’s good to have all those tools available.

              Anyhow, it’s maybe another dimension of variation between languages that stuck out at me looking at those examples.

              1. 4

                Minor quibble, but I think Clojure is not necessarily representative of lisps in general in that respect. I’d put most, especially Common Lisp, into your second category, as multi-paradigm languages where sometimes you mapcar and other times you loop.

              2. 4

                By the author’s definition (concepts that need to be explained for this to work), he may be right. But going outside of this basic example, there are a lot of special cases that need to be explained. The map is actually combining several concepts to get that to work. Specifically, this trick only works with binary operators.

                > (defn f ([a b c] (+ a b c)))
                > (map f [1 2 3] [1 2 3] [1 2 3])
                (3 6 9)
                > (map f [1 2 3] [1 2 3] [1 2 3] [1 2 3])
                clojure.lang.ArityException: Wrong number of args (4) passed to: sandbox13640$f
                

                map also doing a reduce, if the operator is binary and two or more lists are passed, is a funny case and a bit more than map does in other languages. It may be easy but it’s certainly not simple.

                1. 1

                  Can you explain what you mean about this only working for binary operators, and about reduce? I’m not sure what you’re demonstrating with your example, which makes it pretty clear that you can use map on more than two lists.

                  Also, since + is variadic in Clojure, you can easily use it to sum the same-indexed elements of an arbitrary number of lists this way:

                  > (map + [1 2 3] [1 2 3] [1 2 3])
                  (3 6 9)
                  > (map + [1 2 3] [1 2 3] [1 2 3] [1 2 3])
                  (4 8 12)
                  
                  1. 1

                    You’re right, I’m mistaken. I thought map was doing something cute when it was a binary operator, but map is just making use of a variadic function.

                2. 3

                  APL/j/k languages generalise operations on atoms to arrays, making most uses of map (called ' or each) unnecessary.

                  Because an operator like + (addition) simply doesn’t have any semantics for lists/arrays, instead of throwing a java.lang.ClassCastException we could choose to assign it a semantics that would be useful.

                  Then we could just write:

                  (+ [1 2 3] [4 5 6])
                  

                  which is similar to what you’d write in APL/j/k languages:

                  1 2 3+4 5 6
                  

                  This “generalisation” is often called overloading, to emphasise the negative aspects of overloading your brain; however, I think this derails the conversation too fast: I don’t believe people will accidentally write (+ a b) when they mean something more like (if (= (type a) (type [])) (map + a b) (+ a b)).
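
                  For what it’s worth, the same choice can be sketched in Haskell (my own toy example, and widely considered questionable style): give lists an elementwise Num instance so that + broadcasts roughly the way it does in APL:

                  -- a deliberately cheeky sketch: elementwise arithmetic on lists
                  instance Num a => Num [a] where
                    (+)         = zipWith (+)
                    (*)         = zipWith (*)
                    negate      = map negate
                    abs         = map abs
                    signum      = map signum
                    fromInteger = repeat . fromInteger   -- literals broadcast: [1,2,3] + 1 == [2,3,4]

                  -- ghci> [1, 2, 3] + [4, 5, 6]
                  -- [5,7,9]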

                  1. 3

                    an operator like + (addition) simply doesn’t have any semantics for lists/arrays

                    There is a well defined semantics in terms of concatenation, e.g. [1 2 3] + [4 5 6] = [1 2 3 4 5 6]. The empty list/array is an additive identity element, e.g. [1 2 3] + [] = [1 2 3].

                    So (list, concat, []) has the same structure as (integer, +, 0). Or (integer, *, 1).

                    (obligatory https://i.imgflip.com/ydt63.jpg)
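
                    In Haskell terms (my own sketch, with made-up demo names): that shared structure is the Monoid interface, where Sum and Product are the standard newtype wrappers that pick which “addition” integers use:

                    import Data.Monoid (Sum (..), Product (..))

                    -- (list, concat, []), (integer, +, 0) and (integer, *, 1) are all monoids
                    listsDemo :: [Int]
                    listsDemo = [1, 2, 3] <> [4, 5, 6] <> mempty      -- [1,2,3,4,5,6]

                    sumDemo :: Sum Int
                    sumDemo = Sum 1 <> Sum 2 <> Sum 3 <> mempty       -- Sum {getSum = 6}

                    productDemo :: Product Int
                    productDemo = Product 2 <> Product 3 <> mempty    -- Product {getProduct = 6}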

                    1. 2

                      I don’t think concatenation is a form of addition.

                      When we say something is “well defined” we’re making a statement about the clarity of its definition, not about how exhaustive we can be: APL can define + as the addition of atoms, while Java defines + as something like if both arguments are numbers, the addition of those numbers; if both arguments are vectors of the same type, the concatenation of those vectors; otherwise a TypeError.

                      Furthermore, that the same symbol is used for both operators probably has more to do with not enough symbols on the keyboard, than anything else.

                      APL/j/k use , for concatenation. This does, however, seem to conflict with Python and Java attempting to “look” like C/C++, i.e. f(a+b,c+d) is (apply f (list (+ a b) (+ c d))) in C/C++, but there are other solutions (e.g. Perl’s . and ML’s @) which are a much better compromise.

                      Python and Ruby are the worst: they completely punt on the definition of + and say that it’s the __add__ method of the left argument, which not only allows the meaning of + to change dynamically, but their culture actually encourages programmers to invent new definitions of +. This is the exact opposite of well defined, and whilst this approach does have some excellent benefits in domain programming, “well defined” isn’t a kind of general goodness either.

                      1. 2

                        I don’t think concatenation is a form of addition.

                        I reckon it’s even the ‘purest’ form of addition. This is obviously going off on a tangent, but + is typically defined as an associative binary operation on a set of things. So if I’m defining addition over a set S, I just require that +

                        • works on elements of S
                        • returns a value in S, and
                        • is associative on elements of S; i.e. that (a + b) + c = a + (b + c) holds for any a, b, c in S.

                        This holds for concatenation. As an example: ([1 2] + [3]) + [4] = [1 2 3] + [4] = [1 2 3 4], and you’ll get the same result if you start with [1 2] + ([3] + [4]) instead.

                        You’ll find addition on other things will obey additional properties. With addition on integers or real numbers or what have you there’s also commutativity - for example that 1 + 2 = 2 + 1. But commutativity is not actually specified by the definition of +, so addition on integers is already going beyond the requirements.

                        The situation is worse for floating point numbers, where addition isn’t even associative:

                        > 23.53 + 5.88 + 17.64
                        47.05
                        > 23.53 + 17.64 + 5.88
                        47.050000000000004
                        

                        Concatenation is the unique minimal case that has all the properties required of addition, but nothing more.

                        Note that it’s perfectly fine to have multiple definitions of addition for the same group of things; J’s ',' and elementwise + both fit the bill, for example - that’s why there are a couple of separate dyads for them.

                        1. 1
                          + is typically defined as an associative binary operation on a set of things

                          + is not typically defined that way.

                          This holds for concatenation.

                          I’m not convinced.

                          I see you started with 0=() and 1=A and defined ()•A=A and A•A=AA as addition (although I assume you’re spelling • as + to be illustrative). I observe we’re summing the cardinality of the sets. I followed a=[1 2] and b=[3] and c=[4] and I now see we need ordered sets; however, I think you slipped an extra operation in there: [1 2] has two things in it, and the addition I’m familiar with gives us a tool to get the 2 from the 1.

                          I can appreciate that with some higher-order function one could apply across the inner sets – say *, but then one might call •* addition2 and •** addition3, and I wouldn’t think this is more clear than Iverson’s definitions.

                          Furthermore, given how intuitive the commutative property is in the typical definition of addition, I wouldn’t be prepared to give it up so easily.

                          The situation is worse for floating point numbers, where addition isn’t even associative

                          In the absence of underflow and overflow errors, addition remains associative. The real problem is that 23.53 isn’t a floating point number but the decimal representation of an approximate binary floating point number.

                  2. 3

                    Yes, Clojure is terse, but even the documentation doesn’t really explain very well exactly how the variadic nature of map works:

                    clojure.core/map
                    ([f] [f coll] [f c1 c2] [f c1 c2 c3] [f c1 c2 c3 & colls])
                      Returns a lazy sequence consisting of the result of applying f to
                      the set of first items of each coll, followed by applying f to the
                      set of second items in each coll, until any one of the colls is
                      exhausted.  Any remaining items in other colls are ignored. Function
                      f should accept number-of-colls arguments. Returns a transducer when
                      no collection is provided.
                    

                    The phrase “applying f to the set of first items of each coll, followed by applying f to the set of second items in each coll” doesn’t seem to capture it. I feel like I’d have to run it a few times to fully grok what it’s doing.
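
                    One way to see what it’s describing (my own sketch; mapN is a made-up name, it assumes equal-length lists, and Clojure’s map additionally stops at the shortest collection): transpose the collections and apply f to each row:

                    import Data.List (transpose)

                    -- "apply f to the set of first items, then to the set of second items, ..."
                    mapN :: ([a] -> b) -> [[a]] -> [b]
                    mapN f = map f . transpose

                    -- mapN sum [[1,2,3], [1,2,3], [1,2,3], [1,2,3]] == [4,8,12]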

                    1. 2

                      That docstring seems perfectly clear to me, but I’m biased: I’d much rather puzzle out a terse explanation than go spelunking in a verbose one.

                      1. [Comment removed by author]

                        1. 1

                          I had no meaningful Lisp experience prior to Clojure. However, my eyes glaze over when I get an email longer than four sentences, so I really appreciate terse and precise prose.

                    2. 1

                      Obviously, the correct answer is c++:

                      (1, 2, 3) ++ (4, 5, 6) == list<int>(5, 7, 9)
                      

                      (HTML tags in code blocks aren’t escaped?)

                      1. 1

                        It’s a bit weird, honestly, that zip returns a list of tuples in so many languages when it could just return a list of lists. I’m not sure where that got started, but Elixir’s implementation of List.zip creates a list, then calls list_to_tuple. So to get the expected result, you have to convert back:

                        iex(1)> [a,b,c,d] = [ [1,2,3], [4,5,6], [7,8,9], [10,11,12] ]
                        [[1, 2, 3], [4, 5, 6], '\a\b\t', '\n\v\f']
                        iex(2)> for t <- List.zip([a, b, c, d]), do: t |> Tuple.to_list |> Enum.sum  
                        [22, 26, 30]
                        

                        This is okay, since tuple_to_list is a BIF, but it’s still odd to me.