Threads for mattrepl

  1. 3

    Lack of static type checking and checks for exhaustive pattern matching leave a lot of room for errors. Having worked on compiler-adjacent (program analysis) tools in both Clojure and Haskell, I would not recommend Clojure for the task. Before switching to Haskell for program analysis projects, I attempted to use spec (and before that Typed Clojure) and unit tests as a replacement for type checking. The result was an ad hoc, informally-specified, bug-ridden, slow implementation of not even half of Haskell’s type system.

    1. 2

      The result was an ad hoc, informally-specified, bug-ridden, slow implementation of not even half of Haskell’s type system.

      I see what you did there. 👏

    1. 2

      Others have mentioned the issue with determining boundaries when spreading a graph over multiple DB nodes. This means many people end up using a single DB instance for the graph. But at that point—for many use cases—loading the entire graph into memory in your process that’s doing some analysis/processing is simpler and more efficient than sending queries to a database.

      1. 14

        I’d say the term you’re looking for is late binding. The bad part is that it violates the open-closed principle.

        Unix shells also qualify. You can replace binaries in /bin and the behavior of lots of shell scripts changes.

        Dynamic libraries allow monkeypatching for C (and whatever). You can override any libc function with LD_PRELOAD, for example.

        1. 1

          Shell (especially using functions) and LD_PRELOAD are excellent examples I hadn’t considered, thanks!

          1. 1

            But LD_PRELOAD is only used to select which function definition is used at startup. Lisps, Smalltalk, etc. allow redefining functions in a running program.

            Depending on what you mean by “in the program”, code from dynamic libraries may not count.

          2. 1

            Do you know if some sort of dynamic scoping is required to implement this?

            1. 1

              Yes, you could say shells and dynamic libraries use dynamic scoping.

              No, it is not required in general. Smalltalk, Python, Ruby, and others allow you to modify lexically scoped namespaces by monkeypatching.

          1. 1

            And Haskell gets one step more complex for no gain, yet again. It’s rare to see Haskell code without a copious number of language extensions these days. It’s becoming the new Perl: a write-only language.

            I think it’s time we went back to writing simple Haskell with few or no language extensions. It makes life a LOT easier.

            1. 1

              Eventually we would expect many extensions, likely this one included, to be merged into a new language standard. So if the concern is that there are too many language extensions, knowing that a new standard is inevitable may assuage it.

              As for this specific extension, records in Haskell are a mess. Namespacing issues and access/modification of nested fields are the most critical. This proposal provides a syntax which can be used to select fields—having that resolved enables other improvements. I can understand preferring a different syntax, but I’m surprised anyone is opposed to improving records.

              1. 1

                Records in Haskell are absolutely fine as they are in the base language. The namespacing issue is not a real issue in practice. Access to nested fields is not remotely difficult; it’s just composition of functions. You cannot modify fields in Haskell, so I’m not sure why you’d say that ‘modification’ is a critical issue.

                1. 1

                  I can agree to disagree that namespacing is an issue.

                  By “modification” I mean a convenient way to generate a new, updated record with a different value for the selected field. If you’re familiar with the lens package then think of the set function.
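
                  A minimal sketch of what I mean, assuming the lens package’s makeLenses and set (the record and field names are made up for illustration):

                  {-# LANGUAGE TemplateHaskell #-}

                  import Control.Lens (makeLenses, set, view)

                  data Point  = Point  { _x :: Int, _y :: Int } deriving Show

                  data Circle = Circle { _center :: Point, _radius :: Int } deriving Show

                  makeLenses ''Point
                  makeLenses ''Circle

                  -- Reading a nested field is just composition.
                  centerX :: Circle -> Int
                  centerX = view (center . x)

                  -- "Modification": a brand new record with one nested field replaced.
                  moveToAxis :: Circle -> Circle
                  moveToAxis = set (center . x) 0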

                  1. 1

                    Almost all the use I see of lens just tries to duplicate imperative programming in Haskell, usually defeating the whole point of writing Haskell in the first place. It’s for people that haven’t learnt how to write functional code.

                    1. 2

                      I would argue the contrived example below actually demonstrates that lens enables code that is more functional-y. See how we can use over in operator form (%~) to update the value of a field with a function? This example isn’t complex enough that I’d necessarily reach for lenses, but if I was going to create an entire library for creating and manipulating shapes I would.

                      You might want to look more closely at optics. While I can understand the complexity-based arguments against them I have never heard someone dismiss them as being a crutch for imperative-minded Haskellers. (Do such people even exist?)

                      {-# LANGUAGE FunctionalDependencies #-}
                      {-# LANGUAGE TemplateHaskell #-}
                      
                      module Contrived where
                      
                      import Control.Lens ((%~), (&))
                      import Control.Lens.TH (makeFields)
                      
                      someFunc :: IO ()
                      someFunc = putStrLn "someFunc"
                      
                      data Point
                        = Point
                            { pointX :: Int,
                              pointY :: Int
                            }
                        deriving (Eq, Ord, Show)
                      
                      $(makeFields ''Point)
                      
                      data Threegon
                        = Threegon
                            { threegonA :: Point,
                              threegonB :: Point,
                              threegonC :: Point
                            }
                        deriving (Eq, Ord, Show)
                      
                      $(makeFields ''Threegon)
                      
                      -- Without lenses
                      shiftX :: Threegon -> Int -> Threegon
                      shiftX tgon val =
                        tgon
                          { threegonA = a {pointX = pointX a + val},
                            threegonB = b {pointX = pointX b + val},
                            threegonC = c {pointX = pointX c + val}
                          }
                        where
                          a = threegonA tgon
                          b = threegonB tgon
                          c = threegonC tgon
                      
                      -- With lenses
                      shiftX' :: Threegon -> Int -> Threegon
                      shiftX' tgon val =
                        tgon 
                          & (a . x) %~ (+ val)
                          & (b . x) %~ (+ val)
                          & (c . x) %~ (+ val)
                      
            1. 41

              This is not a fair comparison. Go 1.9.2 was released over two years ago, and in that time they have fixed a lot of the GC stutter issues. Comparing Rust nightly to a two-year-old compiler is unfair.

              1. 17

                I am very confused as to why they don’t mention why they didn’t upgrade. The Go compiler is intentionally (and will probably always be) backwards compatible. At Google we dogfood the new compiler throughout the fleet before releasing it to the public. I have never seen a rollback. The GC has consistently gotten faster through versions.

                I’m not saying what Discord did was wrong (I simply don’t know enough about the underpinnings of GC), but it’s strange that they didn’t address such obvious low-hanging fruit.

                1. 37

                  From /u/DiscordJesse on reddit:

                  We tried upgrading a few times. 1.8, 1.9, and 1.10. None of it helped. We made this change in May 2019. Just getting around to the blog post now since we’ve been busy.

                  https://www.reddit.com/r/programming/comments/eyuebc/why_discord_is_switching_from_go_to_rust/fgjsjxd/

                  1. 27

                    It’s worth noting that go1.12 was released before May 2019, and the release notes include lines like this:

                    Go 1.12 significantly improves the performance of sweeping when a large fraction of the heap remains live. This reduces allocation latency immediately following a garbage collection.

                    I believe that directly addresses what they were observing.

                    1. 14

                      Seems like the release didn’t come in time for their needs, considering they probably started the work on the Rust port after trying 1.10.

                      1. 5

                        Sure, but the Rust version wasn’t complete or deployed for multiple more months, implying they had that long to upgrade to go1.12, not to mention go1.11 or the betas and release candidates. I can count on one finger the number of times upgrading a version of Go caused any problems for me, and the one time it did, it was because I had a bunch of my own cgo bugs to fix. So it’s hard for me to imagine a compiler upgrade being anything harder than a normal deploy of any other change, and they claim they did many of those (tuning, etc.). Deciding not to do that because you were already underway on a Rust port is just a sunk-cost fallacy, and it’s not like it’s much work, or unreasonable, to expect people with production Go services to keep up with the two major releases Go does per year.

                        That said, I’m operating on incomplete information. I’d like to give the benefit of the doubt so I expect they made a good decision here. They seem happy with the outcome and the end result is better for them on a number of different metrics, so that’s great.

                        It’s just unfortunate that we’ll probably never know if go1.12 would have solved their latency issues for not only them, but perhaps many others. A program reproducibly showing failures (and latency spikes like that are a failure for Go’s GC) is a valuable thing.

                        1. 8

                          It’s likely that they weren’t aware of the fixes, and even if they were, the rewrite might have solved other pain points and simplified the code in other ways. The issues they faced with the GC would then just be a catalyst for the rewrite, not the cause itself.

                      2. 5

                        So in 2016, Pusher switched from Haskell to Go because Haskell had poor GC latency due to a large working set.

                        The fundamental problem was that GHC’s pause times were proportional to the size of the working set (that is, the number of objects in memory). In our case, we have many objects in memory, which led to pause times of hundreds of milliseconds. This is a problem with any GC that blocks the program while it completes a collection.

                        Enter 2020, Discord switches from Go to Rust because of poor GC latency due to a large working set.

                        We kept digging and learned the spikes were huge not because of a massive amount of ready-to-free memory, but because the garbage collector needed to scan the entire LRU cache in order to determine if the memory was truly free from references.

                        So, I guess it’s only a matter of time before Pusher rewrite their service in Rust?

                        1. 2

                          And with the recent improvements to garbage collection in GHC, back to Haskell after that?

                          http://www.well-typed.com/blog/2019/10/nonmoving-gc-merge/

                1. 30

                  One of the main benefits, at least to me, is that code written in a functional style is much easier to understand.

                  I have seen baffling code in FP (Clojure and Haskell). I have seen baffling code in OOP (C++, Java, and Ruby). It turns out, you can write baffling code in any language paradigm. The more experience I get, the more I think we should stop trying to umbrella ourselves under a single paradigm, and borrow ideas as needed from all of them.

                  I struggle to consider what we do to be software “engineering”, because here’s how similar discussions would appear in other engineering fields:

                  • Using DC is better than AC!
                  • Helium is last year’s noble gas. Use neon instead!
                  • Only your grandma uses wheels. Everyone is using jet engines now!
                  1. 9

                    I have seen baffling code in FP (Clojure and Haskell). I have seen baffling code in OOP (C++, Java, and Ruby). It turns out, you can write baffling code in any language paradigm.

                    That is very true, especially with the terseness of some FP code.

                    However, the author was talking about the ease of understanding that comes from not needing to keep track of side effects in your head. That is, understanding the computational model—which is different than readability.

                    1. 18

                      Side effects are only part of complexity and state though. The convoluted FP code I’ve dealt with involved funneling data through layers and layers of operations, and keeping track of where you were in the data transform to tie in new program behavior. In FP state is still there, but it’s just ephemeral (data transforming into other data) and structural (composition of functions and the nature of how program flow emerges out of it).

                      1. 5

                        Those are good points. There are impediments to understanding code beyond side effects and I’ve seen code similar to what you’re referencing.

                        I’ve found that types (or even contracts) can provide most of the information needed to determine how some new functionality fits in an existing composed transform. Breaking apart a transform to run a portion of it just to see what the data looks like at that point should not be necessary and does feel similar to debugging OO code. I have only experienced that with dynamically-typed FP languages.

                        It’s also not unreasonable that grokking the transform takes a minute. In some cases, that transform essentially is the program—or at least a substantial feature of it.

                    2. 4

                      I struggle to consider what we do to be software “engineering”, because here’s how similar discussions would appear in other engineering fields:

                      • Using DC is better than AC!
                      • Helium is last year’s noble gas. Use neon instead!
                      • Only your grandma uses wheels. Everyone is using jet engines now!

                      Not sure if you are being sarcastic; are you not aware of the war of the currents? Google Tesla vs. Edison. Your hypothetical examples are actually very real, including the last one. Perhaps the lesson to learn from other fields is that each technology has its own advantages and disadvantages, and it’s about knowing them. Although some technologies do become obsolete and objectively beaten by others: CFCs, medical lead, bleeding as a treatment, etc. come to mind.

                      1. 4

                        Note the lack of context or application in all those headline-like lines. I used extreme sarcasm to point out that the value of many technical solutions can only be judged with an understanding of their context and application.

                      2. 4

                        I have seen baffling code in FP (Clojure and Haskell).

                        On one of the programming forums, someone wanted Scheme code for this function:

                        f(0, y) = y + 2
                        f(x, y) = S(f(x - 1, S(y)))
                        where
                        S(x) = x + 1
                        

                        So, since this was obviously homework, I gave them correct functional code:

                        ((λ (f)
                            (printf "fun(1, 4) := ~s\n" (f 1 4))
                            (printf "fun(3, 2) := ~s\n" (f 3 2)))
                         ((λ (S Y) (Y (λ (f) (λ (x y) (if (= 0 x)
                                                          (+ y 2)
                                                          (S (f (- x 1) (S y))))))))
                          (λ (y) (+ y 1))
                          (λ (b) ((λ (f) (b (λ (x y) ((f f) x y))))
                                  (λ (f) (b (λ (x y) ((f f) x y))))))))
                        

                        I hope they got a good grade ;)


                        Jokes aside, closures and objects are equivalent.
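
                        A little sketch of that equivalence in Haskell terms (names are made up for illustration): an “object” is just a record of closures sharing a hidden piece of state.

                        import Data.IORef (modifyIORef', newIORef, readIORef)

                        -- The "interface": a record of methods.
                        data Counter = Counter
                          { tick  :: IO Int  -- increment and return the new count
                          , reset :: IO ()
                          }

                        -- The "constructor": the closures capture a private IORef.
                        newCounter :: IO Counter
                        newCounter = do
                          ref <- newIORef 0
                          pure Counter
                            { tick  = modifyIORef' ref (+ 1) >> readIORef ref
                            , reset = modifyIORef' ref (const 0)
                            }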

                        1. 2

                          I don’t think FP vs OOP is the dichotomy at all. FP is functions-are-data, and OOP is class/object-based multiple dispatch. You can have both or neither! That’s why we can pick and choose ideas.

                          1. 0

                            This reminds me of the following comment by Philip Greenspun:

                            Technical people have traditionally met these challenges … by arguing over programming tools. The data model can’t represent the information that the users need, the application doesn’t do what the users need it to do, and instead of writing code, the “engineers” are arguing about Java versus Lisp versus ML versus C# versus Perl versus VB. If you want to know why computer programmers get paid less than medical doctors, consider the situation of two trauma surgeons arriving at an accident scene. The patient is bleeding profusely. If surgeons were like programmers, they’d leave the patient to bleed out in order to have a really satisfying argument over the merits of two different kinds of tourniquet.

                          1. 4

                            I thought codata was related to corecursion, in fact serving a similar purpose to Python generators. My notion of codata comes from the Idris construct of that name:

                            codata Stream : Type -> Type where
                              (::) : (e : a) -> Stream a -> Stream a
                            

                            This gets translated into the following by the compiler.

                            data Stream : Type -> Type where
                              (::) : (e : a) -> Inf (Stream a) -> Stream a
                            

                            This is similar to what Wikipedia has to say:

                            Coinductively defined types are known as codata and are typically infinite data structures, such as streams.

                            Are these two separate notions with the same name or am I missing a connection here? Granted I kind of skimmed both texts.

                            1. 3

                              codata is indeed related to corecursion. Let’s say you have a functor F and let’s look at two kinds of arrows in category theory: maps F(X) -> X and, by inverting the arrow, maps X -> F(X).

                              Let’s look at a simple functor like F_A(X) = 1 + A x X. It defines the algebra for lists over A when we consider F_A(List_A) -> List_A: we either have an empty list (nil, the 1 in the notation) or a pair of an element and a list that we can cons. These are called the constructors, and we use them to build data. The other direction of the arrow, X -> F_A(X), defines streams over A with termination: StreamT -> 1 + A x StreamT. In this direction we observe either the stream ending (1) or the head and tail (A x StreamT); these coconstructors are used to observe the stream, and indeed streams are codata. The idea is that data and codata are the two directions of the functor relation!
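
                              To make that concrete, here’s a rough Haskell sketch of the two directions for a base functor like 1 + A x X (names are made up for illustration):

                              {-# LANGUAGE DeriveFunctor #-}

                              -- The base functor: 1 + A x X
                              data F a x = Nil | Cons a x
                                deriving Functor

                              -- Data: built with constructors, an algebra F a (List a) -> List a.
                              newtype List a = List (F a (List a))

                              cons :: a -> List a -> List a
                              cons a as = List (Cons a as)

                              -- Codata: defined by observation, a coalgebra Stream a -> F a (Stream a).
                              newtype Stream a = Stream { observe :: F a (Stream a) }

                              -- Corecursion: the infinite stream of integers counting up from n.
                              nats :: Integer -> Stream Integer
                              nats n = Stream (Cons n (nats (n + 1)))

                              -- Observing a finite prefix of a (possibly infinite) stream.
                              takeS :: Int -> Stream a -> [a]
                              takeS 0 _ = []
                              takeS k s = case observe s of
                                Nil       -> []
                                Cons a s' -> a : takeS (k - 1) s'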

                              1. 1

                                Great, now it makes more sense, thanks

                              2. 1

                                The paper abstract:

                                Computer scientists are well-versed in dealing with data structures. The same cannot be said about their dual: codata. Even though codata is pervasive in category theory, universal algebra, and logic, the use of codata for programming has been mainly relegated to representing infinite objects and processes. Our goal is to demonstrate the benefits of codata as a general-purpose programming abstraction independent of any specific language: eager or lazy, statically or dynamically typed, and functional or object-oriented. While codata is not featured in many programming languages today, we show how codata can be easily adopted and implemented by offering simple inter-compilation techniques between data and codata. We believe codata is a common ground between the functional and object-oriented paradigms; ultimately, we hope to utilize the Curry-Howard isomorphism to further bridge the gap.

                                Emphasis added.

                                1. 1

                                  Induction gives us a way to define larger and larger objects, by building them up; but we need to start somewhere (a base case). A classic example is the successor : Nat -> Nat which constructs a natural number that is one larger than its input; and zero : Nat is the base case we start with.

                                  Coinduction gives us a way to define objects by how we can “destruct” them into smaller parts, or alternatively by the ways that it can be used/observed. The interesting thing is that coinduction doesn’t need a base case, so we can define infinite objects like streams, or infinite binary trees, or whatever. We can still have base cases if we like, e.g. we can make a corecursive list with nil and cons, but in that case we can’t assume that every list will end in nil. As an example, recursive types in Haskell are coinductive: they might be infinite (they might also contain errors or diverge, since Haskell is unsound).

                                  Recursive (total) functions require a base case, since that’s the point where they return a value. Corecursive functions don’t require a base case, as long as they are “productive”, defining some part of the return value. For example, a corecursive function returning a stream can define one element and recurse for the rest; this will never end, but each part of the return value can be produced in some finite amount of time (i.e. it doesn’t diverge).

                                  AFAIK Python generators are inherently linear, like a stream; e.g. they can’t define an infinite binary trie like we could in Haskell (data Trie = Node Trie Trie). Another wrinkle is that Python uses side-effecting procedures, rather than pure, total functions; so we might want to consider those effects as part of the “productivity”. For example, we might have a recursive procedure which has no base case and never returns any value, but it causes some effect on each call; that can still be a useful thing to do, so we might count it as productive even though it diverges from the perspective of calculating a value. (I don’t think Python eliminates tail-calls, so maybe Scheme would be a better example for recursing without a base case, since it is then essentially the same as iteration)
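
                                  For instance, the infinite binary trie mentioned above can be written down directly; a small sketch:

                                  -- Corecursive data: no base case, each step is productive (defines one Node).
                                  data Trie = Node Trie Trie

                                  full :: Trie
                                  full = Node full full

                                  -- We can still observe finite parts of the infinite structure,
                                  -- e.g. walk k steps down the left spine and count them.
                                  leftSpine :: Int -> Trie -> Int
                                  leftSpine 0 _          = 0
                                  leftSpine k (Node l _) = 1 + leftSpine (k - 1) l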

                                1. 7

                                  Last year I had a short Minizinc gig. Mz is a constraint solving language that, for its problem domain, is even more declarative than haskell or prolog. The client had a beautiful spec that captured his complex problem in just a few dozen lines of Mz.

                                  The problem was performance. Solving for 26 params took a few seconds. Solving for 30 took ten minutes. Solving for 34 took days. He needed it to solve a 40 param input in under an hour.

                                  I kinda feel like it’s easier to control algorithmic code’s correctness than declarative code’s performance now.

                                  1. 1

                                    There are declarative languages that work very well for certain problem spaces. SQL is mostly declarative, and so are Mathematica and AMPL. If you apply these tools to problems for which they are not suited, they don’t work well. The problem with Haskell, as far as I can tell, is that instead of just admitting that it needed an imperative mode, the designers came up with an elaborate excuse and claimed to have not compromised.

                                    1. 1

                                      Haskell has several “imperative modes”: IO, ST, and STM come to mind. All of them work superbly.
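
                                      A tiny sketch of the ST flavour, for illustration: mutable state on the inside, a pure function on the outside.

                                      import Control.Monad.ST (runST)
                                      import Data.STRef (modifySTRef', newSTRef, readSTRef)

                                      -- Imperative in the middle, pure at the boundary: the mutation cannot escape runST.
                                      sumST :: [Int] -> Int
                                      sumST xs = runST $ do
                                        acc <- newSTRef 0
                                        mapM_ (\n -> modifySTRef' acc (+ n)) xs
                                        readSTRef acc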

                                      1. 2

                                        I didn’t comment on how well they work, only on the layer of obfuscatory category theory nomenclature pasted on top of something relatively simple.

                                        1. 1

                                          I think most people would agree with you, but would differ on where the “obfuscatory category theory nomenclature” part stops and the “relatively simple” part begins.

                                    2. 1

                                      declarative code’s performance

                                      It’s always been a problem. If something is declarative, always watch out for (or warn about) that by default. I’m not sure how fundamental it is; too much optimization has gone into imperative compilers, and even verification with tools like Why3, for a fair comparison. It’s worth further research if not already done.

                                      Declarative usually trades an intuitive understanding and control of performance for another black box that does the work for developers without knowing the context the code operates in. Performance and predictability might go down every time that happens.

                                      1. 4

                                        Ironically, it should be easier to analyze and optimize declarative code. But since it’s more difficult to understand a compiler and/or runtime, and the performance of a declarative program depends on those, it takes more effort to speed up a program.

                                        On the flip side, adding optimization to a compiler or runtime improves performance for all programs suffering from the same slowdown.

                                        1. 2

                                          SQL optimizers are hard to beat.

                                      1. 5

                                        As someone who has occasionally played with Haskell for years and is finally considering using it for larger projects, this post concerns me. The complexity of monad stacks is a little scary, but I figure the type system makes it manageable. However, if it’s true that monad transformers end up being a source of memory leaks, then I’m back to thinking Haskell should only be used for larger, production-level projects by those knowledgeable about GHC internals and edge-case language tricks to hack around inherent problems.

                                        Can someone with experience comment on the author’s claims? They do seem weak when no specific examples of memory leaks (or abstraction leaks) are provided.

                                        1. 4

                                          Do not use StateT or WriterT for a long-running computation. Using ReaderT Context IO is safe, and you can stash an IORef or two in your Context.

                                          Every custom Monad (or Applicative) should address a concern. For example, a web request handler should provide some means for logging, dissecting the request, querying the domain model, and preparing the response. Clearly a case for ReaderT Env IO.

                                          A form-data parser should only access the form definition and the form data, and since it’s short-lived, it can be simplified greatly with ReaderT Form stacked with StateT FormData. And so on.

                                          https://www.fpcomplete.com/blog/2017/06/readert-design-pattern
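
                                          A minimal sketch of that ReaderT-over-IO shape (the Context fields and the handler are made up for illustration):

                                          import Control.Monad.IO.Class (liftIO)
                                          import Control.Monad.Reader (ReaderT, asks, runReaderT)
                                          import Data.IORef (IORef, modifyIORef', newIORef, readIORef)

                                          data Context = Context
                                            { ctxLog     :: String -> IO ()  -- how to log
                                            , ctxCounter :: IORef Int        -- mutable state stashed in the environment
                                            }

                                          type App a = ReaderT Context IO a

                                          handleRequest :: String -> App Int
                                          handleRequest name = do
                                            logIt <- asks ctxLog
                                            ref   <- asks ctxCounter
                                            liftIO $ logIt ("handling " ++ name)
                                            liftIO $ modifyIORef' ref (+ 1)
                                            liftIO $ readIORef ref

                                          main :: IO ()
                                          main = do
                                            ref <- newIORef 0
                                            n   <- runReaderT (handleRequest "demo") (Context putStrLn ref)
                                            print n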

                                          1. 3

                                            Yes, it is known that you should never use RWST, or any stack with a Writer monad in it, because of space leaks.

                                            The range of choices for big-code organisation in Haskell is large. You have:

                                            • run it like in an imperative language, everything in IO
                                            • split things up using the Handle pattern: https://jaspervdj.be/posts/2018-03-08-handle-pattern.html
                                            • use MTL; in that case you should not use WriterT (if I remember correctly)
                                            • use the ReaderT Context IO pattern
                                            • use free monads; the paint is still fresh here, apparently

                                            I used MTL style to make a bot with long-lived state and logs (using https://hackage.haskell.org/package/logging-effect). It works perfectly fine for many days (weeks?) without any space leak.

                                            I have now started to move toward the simpler route of the Handle pattern I pointed out. In the end, I tend to prefer that style: it is very slightly more manual, but more explicit.
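
                                            Roughly, the Handle pattern looks like this (a sketch with invented names, not code from the linked post): capabilities are plain records of functions, built in IO and passed explicitly.

                                            data Logger = Logger { logMsg :: String -> IO () }

                                            data Store = Store
                                              { putKV :: String -> String -> IO ()
                                              , getKV :: String -> IO (Maybe String)
                                              }

                                            -- Application code takes exactly the handles it needs; no transformer stack.
                                            greet :: Logger -> Store -> String -> IO ()
                                            greet logger store user = do
                                              logMsg logger ("greeting " ++ user)
                                              seen <- getKV store user
                                              case seen of
                                                Just _  -> logMsg logger "welcome back"
                                                Nothing -> putKV store user "seen"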

                                          1. 3
                                            • Gradual typing
                                            • Dependent types
                                            • Row polymorphism

                                            I’d really like to see all three of these together. Row polymorphism can be implemented with dependent types, but I’m not sure what dependent types look like in a gradual type system.

                                            I also wish Clojure specifically were moving towards this as a better version of its spec library for writing contracts and generating tests.

                                            1. 4

                                              M-expressions are just… ugly. Though I’m surprised the mathematical notation is not more common.

                                              This is just gorgeous: x ↦ x*x. I wonder why no programming language uses it… It beats the scrap out of Haskell’s \x -> x*x thing.

                                              1. 7

                                                I wonder why no programming language uses it

                                                …because it’s not particularly easy to type “↦” is the obvious answer, I think. As far as I understand, the backslash in Haskell’s lambda syntax is meant to resemble a lambda symbol, so as to approximate a lambda calculus term, and you can also get closer to the example you’ve highlighted using a font with ligatures like Hasklig.

                                                I think syntax using these kinds of special characters is more common in Agda as well, e.g. https://github.com/copumpkin/categories/blob/00e385d442073c2343145cfefa58dae63d58877e/Categories/Functor/Product.agda#L52h (random example I found by searching through github).

                                                1. 4

                                                  Church is reported to have intended to use ^, rendered as e.g. x̂ or ŷ, but the typesetter couldn’t manage it and wrote ^x instead. The story goes that this looked enough like λ that it influenced the next typesetter to try something else…

                                                2. 1

                                                  Agda has support for some special characters.

                                                  Some examples: https://plfa.github.io/Lambda/

                                                  1. 1

                                                    k/q uses M-expressions, but allows operators to be used infix, so:

                                                    set[`square;{[n] *[n;n]}]
                                                    

                                                    can also be written as:

                                                    square:{[n] n*n}
                                                    

                                                    But a really cool trick is to observe that “x” is the default name for the first argument:

                                                    square:{x*x}
                                                    

                                                    which is hard to beat!

                                                    APL (specifically Dyalog) also deserves some note. It doesn’t use M-expressions, uses ← for assignment instead of colon, and ⍵ as the right-hand argument, which is the same in some ways, but perhaps a little more beautiful:

                                                    square←{⍵×⍵}
                                                    
                                                    1. 1

                                                      I don’t like JS, but its lambda arrows resemble this perfectly.

                                                    1. 4

                                                      Is Smalltalk the new Clojure, lacking only real world applicability?

                                                      I should explain. Two marvelous ideas entered the marketplace last century. One fell apart because it was seen as the domain of AI and academics, with implementations being too much like Forth (you learn one Forth, and you’ve learned one Forth) for people’s liking. The other was embroiled in legal battles and refused to acknowledge that it wasn’t the computer. This was quickly deemed, though not obvious at the time, unacceptable. Even C doesn’t care what the kernel is, so long as it has a standard library, and polyglot systems are just More Useful.

                                                      I speak, of course, about LISP and Smalltalk.

                                                      Clojure is LISP made Really Useful. It can be used to glue Java together with almost no code, runs on possibly the most popular and recognizable virtual machine in the world, and the functional twist took a lot of the dark corners out of the language.

                                                      Clojure made LISP usable.

                                                      As for Smalltalk, the Pharo VM does a great job of being modern, using the Cog JIT engine, and even (in an upcoming release?) being able to respect the world outside the VM. I love the language (one person describes it as an unhealthy fascination), and I would recommend people learn the lessons of Smalltalk.

                                                      As for me, I learned a lesson from the failures of Smalltalk, and it isn’t one people think about. I want to codify my ideas as a language, as soon as I have the time to do so. I want to make Smalltalk applicable to server operators and packagers. Blog posts on the topic to follow.

                                                      1. 4

                                                        Smalltalk had a resurgence some 10 years ago with interest in the Seaside web app framework and a few hyped Smalltalk web apps. I recall DabbleDB as one that seemed impressive at the time.

                                                        If you haven’t yet, I’d suggest looking at Strongtalk, Self, and Newspeak. And what the space between them and Objective-C looks like.

                                                        Good luck!

                                                        1. 2

                                                          To quite some extent I think the space between ObjC and Smalltalk-likes is occupied by Ruby. More Smalltalk-like than ObjC, easier C (or really, host platform code in general) FFI than a Smalltalk.

                                                          1. 1

                                                            It really is, but Ruby has a bad rap these days, no advantage Python doesn’t have in the realm of writing usable code, and an unfortunate load-average and dependency story.

                                                        2. 4

                                                          Clojure made LISP usable.

                                                          I have to disagree with that. Lisp is very usable (and quite nice) on its own. Perhaps Clojure made it usable for gluing Java together, but huge swathes of the software world don’t involve Java, and are still well suited to Lisp.

                                                          1. 3

                                                            I love Scheme, but I’ve tried to learn Common Lisp three times, and each time I got frustrated. Whether because of the lack of Vim support (I can’t be bothered to learn Emacs, and it just confuses me), or because Quicklisp doesn’t do what I expect, I can never quite understand what’s going on under the hood. It’s a shame too, because I’m fascinated by some of the cool CL projects like CEPL.

                                                            However, if we’re including Scheme, I also disagree with the statement that Clojure is the first usable language in the Lisp family. Guile and Chicken Scheme are both very easy to get started with out of the box, and Racket has a full-on IDE geared towards education, superb technical documentation, and tutorials that make it easy to get started.

                                                            1. 1

                                                              You know there are perfectly good IDEs for Lisp that are not Emacs? Clozure CL (CCL) [open source], LispWorks Personal Edition [free].

                                                              And there is nothing stopping you from using Atom, Visual Studio Code, or Sublime Text with Lisp plugins.

                                                            2. 2

                                                              I’m afraid I also disagree. Lisp users before Clojure are of a different breed entirely, a more skilled and arcane breed. Clojure is accessible to and used by the masses to write useful code and libraries.

                                                              1. 3

                                                                That’s flattering (I guess), but I’m not sure it’s true. By some measures Common Lisp is still more popular than Clojure.

                                                                TIOBE’s language ranking isn’t perfect, but it’s better than nothing, and it claims Clojure and Common Lisp are both down in the 50-100 group, behind generic “Lisp” at 32 and Scheme at 34.

                                                                However, “Lisp” is often taken to mean “Common Lisp” nowadays (for example the #lisp IRC channel is dedicated specifically to Common Lisp), so those rankings may be interpreted as meaning CL is more popular.

                                                                Also, I would claim CL is at least as accessible as Clojure. It’s not as trendy, but it has more implementations, supports more platforms, isn’t tied to the JVM, and has a bunch of tutorials and books covering it.

                                                                And of course there are plenty of useful libraries and applications being written in CL today.

                                                                1. 1

                                                                  I originally learned about Lisp from AI books, which would make me agree with you. But later books like Practical Common Lisp and Land of Lisp are way more approachable, with better things for a mainstream audience to build. It’s important that they get the kind of IDE that allows incremental, per-function compilation; that by itself might win them over. Alternatively, they can start with Scheme using How to Design Programs, trying Common Lisp after they get through HtDP.

                                                                  I have a feeling you’d see a lot more people using it if the educational resources, community, and tooling facilitated the learning process. In that order, for Lisp.

                                                            1. 7

                                                                The author seems to miss the advantage of currying and partial application in FP, which is enabled by positional arguments. You could adapt currying or partial application to this message-based syntax, but I think it’d end up messy.
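
                                                                A tiny sketch of what positional arguments buy you (illustrative names):

                                                                add :: Int -> Int -> Int
                                                                add x y = x + y

                                                                -- Partial application: supply only the first argument.
                                                                increment :: Int -> Int
                                                                increment = add 1

                                                                -- Partially applied functions compose naturally.
                                                                bumpAll :: [Int] -> [Int]
                                                                bumpAll = map increment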

                                                                As for the lack of IDE support, every FP language they list has good IDE support… Haskell in Emacs with Intero, F# in Visual Studio, etc.

                                                                I get that Emacs seems unfriendly to some people, but the existence of decent tooling there shows IDE support exists and could be provided in friendlier (i.e., GUI-centric) IDEs too.

                                                              1. 3

                                                                  Reminds me of this fun presentation at REcon 2014 on reverse engineering a Furby: https://youtube.com/watch?v=Xm_RHOWcwOY

                                                                Write-up by the researcher, in case you don’t want to view the presentation: https://poppopret.org/2013/12/18/reverse-engineering-a-furby/