1. 76

  2. 10

    One of the common complaints about Lisp is that there are no libraries in the ecosystem. As you see, five libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections.

    Wait, are they really making the argument “we used a library for getting the current time, and also for sockets” as if that’s a good thing?

    1. 16

      Lisp is older than network sockets. Maybe it intends to outlast them? ;)

      More seriously, Lisp is known for high-level abstraction and is perhaps even more general than what we usually call a general purpose language. I could see any concrete domain of data sources and effects as an optional addition.

      In the real world, physics constants are in the standard library. In mathematics, they’re a third party package.

      1. 12

        Lisp is older than network sockets.

        Older than time, too.

        1. 1

          Common Lisp is not older than network sockets, so the point is moot I think.

          1. 1

            I don’t think so. It seems to me that it was far from obvious in 1994 that Berkeley sockets would win to such an extent and not be replaced by some superior abstraction. Not to mention that the standard had been in the works for a decade at that point.

        2. 5

          Because when the next big thing comes out it’ll be implemented as just another library, and won’t result in ecosystem upheaval. I’m looking at you, Python, Perl, and Ruby.

          1. 4

            Why should those things be in the stdlib?

            1. 4

              I think that there are reasons to not have a high-level library for manipulating time (since semantics of time are Complicated, and moving it out of stdlib and into a library means you can iterate faster). But I think sockets should be in the stdlib so all your code can have a common vocabulary.

              1. 5

                reasons to not have a high-level library for manipulating time

                I actually agree with this; it’s extraordinarily difficult to do this correctly. You only have to look to Java for an example where you have the built-in Date class (absolute pants-on-head disaster), the built-in Calendar which was meant to replace it but was still very bad, then the 3rd-party Joda library which was quite good but not perfect, followed by the built-in Instant in Java 8 which was designed by the author of Joda and fixed the final few quirks in it.

                However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                1. 7

                  Common Lisp has (some) date and time support in the standard library. It just doesn’t use Unix time, so if you need to interact with things that use the Unix convention, you either need to do the conversion back and forth, or just use a library which implements the Unix convention. Unix date and time format is not at all universal, and it had its own share of problems back when the last version of the Common Lisp standard was published (1994).

                  It’s sort of the same thing with sockets. Just like, say, C or C++, there’s no support for Berkeley sockets in the standard library. There is some history to how and why the scope of the Common Lisp standard is the way that it is (it’s worth noting that, like C or C++ and unlike Python or Go, the Common Lisp standard was really meant to support independent implementation by vendors, rather than to formalize a reference implementation) but, besides the fact that sockets were arguably out of scope, it’s only one of the many networking abstractions that platforms on which Common Lisp runs support(ed).

                  We could argue that in 2021 it’s probably safe to say that BSD sockets and Unix timestamps have won and they might as well get imported in the standard library. But whether that’s a good idea or not, the sockets and Unix time libraries that already exist are really good enough even without the “standard library” seal of approval – which, considering that the last version of the standard is basically older than Spice Girls, doesn’t mean much anyway. Plus who’s going to publish another version of the Common Lisp standard?

                  To defend the author’s wording: their remark is worth putting into its own context – Common Lisp had a pretty difficult transition from large commercial packages to free, open source implementations like SBCL. Large Lisp vendors gave you a full on CL environment that was sort of on-par with a hosted version of a Lisp machine’s environment. So you got not just the interpreter and a fancy IDE and whatever, you also got a GUI toolkit and various glue layer libraries (like, say, socket libraries :-P). FOSS versions didn’t come with all these goodies and it took a while for FOSS alternatives to come up. But that was like 20+ years ago.

                  1. 2

                    However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                    GET-UNIVERSAL-TIME is in the standard. It returns a universal time, which is the number of seconds since midnight, 1 January 1900.
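
                    For interop, the 70-year gap between the two epochs is just a constant offset, so converting needs no library at all. A minimal sketch (the function names here are mine, not from the standard):

                    ```lisp
                    ;; Universal time counts seconds from 1900-01-01 00:00 GMT;
                    ;; Unix time counts from 1970-01-01 00:00 UTC. The gap is a
                    ;; constant, which the standard can compute for us:
                    (defconstant +unix-epoch-offset+
                      (encode-universal-time 0 0 0 1 1 1970 0))

                    (defun universal-to-unix (universal)
                      (- universal +unix-epoch-offset+))

                    (defun unix-to-universal (unix)
                      (+ unix +unix-epoch-offset+))

                    ;; Current Unix time, portably, with no library:
                    (universal-to-unix (get-universal-time))
                    ```

                    (This ignores leap seconds, as both conventions do.)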

                    1. 2

                      Any language could ignore an existing standard and introduce their own version with its own flaws and quirks, but only Common Lispers would go so far as to call the result “universal”.

                    2. 1

                      However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                      Actually, it doesn’t support leap seconds so in that case the value repeats.

                    3. 1

                      Yeah but getting the current unix time is not Complicated, it’s just a call to the OS that returns a number.

                      1. 6

                        What if you’re not running on Unix? Or indeed, on a system that has a concept of epoch? Note that the CL standard has its own epoch, unrelated (AFAIK) to OS epoch.

                        Bear in mind that Common Lisp as a standard, and a language, is designed to be portable by better standards than “any flavour of Unix” or “every version of Windows since XP” ;-)

                        1. 1

                          Sure, but it’s possible they were using that library elsewhere for good reasons.

                      2. 3

                        In general, I really appreciate having a single known-good library promoted to stdlib (the way golang does). Of course, there’s the danger that you standardise something broken (I am also a ruby dev, and quite a bit of the ruby stdlib was full of footguns until more-recent versions).

                        1. 1

                          Effectively that’s what happened, though. The libraries for threading, sockets, etc. converged to de facto standards.

                    4. 9

                      Nothing like at this scale, but I recently built a command-line tool (for importing custom CSV into Hubspot) in Common Lisp. Developed it on a FreeBSD system, switched to Linux half-way, then compiled and shipped from MS Windows.

                      The entire experience was a joy. Quicklisp and Roswell really have revolutionized working in Common Lisp, bringing library-management tooling up to the same standard as SLIME.

                      1. 2

                        This is my experience with CL too, for the most part. SLIME is a fantastic tool and Quicklisp is such a great way to get libraries; it’s effortless.

                        I’m more of a schemer myself and I wish we had the same level of tooling. That would truly make Scheme a fantastic way to develop stuff. Geiser just doesn’t do interactive development as well as SLIME.

                        1. 5

                          I’ve honestly never seen anything to compare with SLIME. For context, although I’m now largely ‘post-technical’ (the CL work was a small side gig to keep my coding eye in) I’ve worked professionally with C, Perl, JavaScript, Ruby on Rails (Emacs, RubyMine), C# (Visual Studio .NET + ReSharper), Java (IntelliJ). I’ve tinkered with many more.

                          And SLIME is still my favourite development environment by a long shot. And really the only option for serious cross-platform work. I’m busy teaching myself McCLIM to extend that to GUI and mobile work.

                          https://i.postimg.cc/pr5rnfKj/2020-06-07-20-47-Office-Lens.jpg

                          1. 1

                            Oh you can make mobile apps in Common Lisp? I have to check that out sometime!

                            Looks great, and I fully agree with you. SLIME is amazing.

                            1. 2

                              Welllll … sort of :) That photo is of my PinePhone, running Mobian GNU/Linux. YMMV on an Android or iOS device.

                      2. 6

                        I like lisp but macros should be a last resort thing. Is it really needed in those cases, I wonder.

                        1. 18

                          I disagree. Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic, not my macros - it’s much more difficult to reason about things whose interesting behavior is at run-time than at compile-time.

                          Moreover, most macros are limited to simple tree structure processing, which is far more constrained than all of the things you can get up to in your application code.

                          Can you make difficult-to-understand code with macros? Absolutely. However, the vast majority of Common Lisp code that I see is written by programmers disciplined enough to not do that - when you write good macros, they make code more readable.
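
                          A small illustration of that compile-time inspectability; `with-timing` here is a hypothetical macro, not anything from the article:

                          ```lisp
                          ;; Hypothetical macro: wrap a body, report elapsed internal time.
                          ;; GENSYM keeps the expansion hygienic (no variable capture).
                          (defmacro with-timing (&body body)
                            (let ((start (gensym "START")))
                              `(let ((,start (get-internal-real-time)))
                                 (multiple-value-prog1 (progn ,@body)
                                   (format t "~&took ~a ticks~%"
                                           (- (get-internal-real-time) ,start))))))

                          ;; The expansion is deterministic and can be inspected
                          ;; before any code has run:
                          (macroexpand-1 '(with-timing (some-expensive-call)))
                          ```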

                          1. 3

                            “Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic”

                            What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time. They have the benefits you describe. Such code is common in real-time and safety/security-critical coding. An extra benefit is that static analysis, automated testing, and so on can easily flush bugs out in it. Tools that help optimize performance might also benefit from such code just due to easier analysis.

                            From there, there are macros. The drawback of macros is that they might not be understood as instantly as common language constructs are. If done right (esp. names/docs), this won’t be a problem. The next problem, which the author already notes, is that tooling breaks down on them. Although I didn’t prove it out, I hypothesized this process to make them reliable:

                            1. Write the code that the macros would output first on a few variations of inputs. Simple, deterministic functions operating on data. Make sure it has pre/post conditions and invariants. Make sure these pass above QA methods.

                            2. Write the same code operating on code (or trees or whatever) in an environment that allows similar compile-time QA. Port pre/post conditions and invariants to code form. Make sure that passes QA.

                            3. Make final macro that’s a mapping 1-to-1 of that to target language. This step can be eliminated where target language already has excellent QA tooling and macro support. Idk if any do, though.

                            4. Optionally, if the environment supports it, use an optimizing compiler on the macros integrated with the development environment so the code transformations run super-fast during development iterations. This was speculation on my part. I don’t know if any environment implements something like this. This could also be a preprocessing step.

                            The resulting macros using 1-3 should be more reliable than most functions people would’ve used in their place.
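
                            A minimal sketch of steps 1–3 (all names hypothetical): the expansion is produced by an ordinary function on plain data, which can be unit-tested like any other function, and the final macro is a 1-to-1 wrapper over it:

                            ```lisp
                            ;; Steps 1-2: the expansion is built by an ordinary,
                            ;; deterministic function on plain data, with a
                            ;; precondition, so normal QA tooling applies to it.
                            (defun build-clamp-form (x lo hi)
                              (assert (and x lo hi))   ; all inputs must be supplied
                              `(max ,lo (min ,hi ,x)))

                            ;; Step 3: the macro is a 1-to-1 mapping onto that
                            ;; tested function.
                            (defmacro clamp (x lo hi)
                              (build-clamp-form x lo hi))
                            ```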

                            1. 2

                              What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time.

                              In a very local sense, I agree with you - a simple function is easier to understand than a complex function.

                              However, that’s not a very interesting property.

                              A more interesting question/property is “Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?”

                              My experience has been that, when I create lots of small, simple functions, the overall accidental complexity of the system increases. Ignoring that accidental complexity for the time being, all problems have some essential complexity to them. If you make smaller, simpler functions, you end up having to make more of them to implement your design in all of its essential complexity - which, in my experience, ends up adding far more accidental complexity due to indirection and abstraction than a smaller number of larger functions.

                              That aside, I think that your process for making macros more reliable is interesting - is it meant to make them more reliable for humans or to integrate tools with them better?

                              1. 1

                                “A more interesting question/property is ‘Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?’”

                                I think the question might be what is simple and what is complex? Another is simple for humans or machines? I liked the kinds of abstractions and generative techniques that let a human understand something that produced what was easy for a machine to work with. In general, I think the two often contradict.

                                That leads to your next point where increasing the number of simple functions actually made it more complex for you. That happened in formally-verified systems, too, where simplifications for proof assistants made it ugly for humans. I guess it should be as simple as it can be without causing extra problems. I have no precise measurement of that. Plus, more R&D invested in generative techniques that connect high-level, human-readable representations to machine-analyzable ones. Quick examples to make it clear might be Python vs C’s looping, parallel for in non-parallel language, or per-module choices for memory management (eg GC’s).

                                “is it meant to make them more reliable for humans or to integrate tools with them better?”

                                Just reliable in general: they do precisely what they’re specified to do. From there, humans or tools could use them. Humans will use them as they did before except with precise, behavioral information on them at the interface. Looking at contracts, tools already exist to generate tests or proof conditions from them.

                                Another benefit might be integration with machine learning to spot refactoring opportunities, esp if it’s simple swaps. For example, there’s a library function that does something, a macro that generates an optimized-for-machine version (eg parallelism), and the tool swaps them out based on both function signature and info in specification.

                          2. 7

                            Want to trade longer runtimes for longer compile times? There’s a tool for that. Need to execute a bit of code in the caller’s context, without forcing boilerplate on the developer? There’s a tool for that. Macros are a tool, not a last resort. I’m sure Grammarly’s code is no more of a monstrosity than you’d see at the equivalent Java shop, if the equivalent Java shop existed.
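
                            The “trade longer run times for longer compile times” point can be made concrete with a toy macro that does all of its work at expansion time (hypothetical example, assuming the argument is a literal):

                            ```lisp
                            ;; The LOOP runs once, at macroexpansion time; callers
                            ;; just get a quoted, precomputed list.
                            (defmacro squares-upto (n)
                              `',(loop for i from 0 to n collect (* i i)))
                            ```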

                            1. 9

                              Java shop would be using a bunch of annotations, dependency injection and similar compile time tricks with codegen. So still macros, just much less convenient to write :)

                              1. 1

                                the equivalent Java shop

                                I guess that would be Languagetool. How much of a monstrosity it is is left as an exercise to the reader, mostly because it’s free software and anybody can read it.

                              2. 7

                                This reminds me of when Paul Graham was bragging about how ViaWeb was like 25% macros and other lispers were kind of just looking on in horror trying to imagine what a headache it must be to debug.

                                1. 6

                                  The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it’s considered to be bad style to use them when they’re not necessary. So every macro in that code is there because it has to be. What that means is that at least 20-25% of the code in this program is doing things that you can’t easily do in any other language.

                                  It’s such a bizarre argument.

                                  1. 3

                                    I find it persuasive. If a choice is made by someone who knows better, that choice probably has a good justification.

                                    1. 11

                                      It’s a terrible argument; it jumps from “it’s considered to be bad style to use [macros] when they’re not necessary” straight to “therefore they must have been necessary” without even considering “therefore the code base exhibited bad style” which is far more likely. Typical pg arrogance and misdirection.

                                      1. 3

                                        I don’t have any insight into whether the macros are necessary; it’s the last statement I take issue with. For example: Haskell has a lot of complicated machinery for working with state and such that doesn’t exist in other languages, but that doesn’t mean those other languages can’t work with state. They just do it differently.

                                        Or to pick a more concrete example, the existence of the loop macro and the fact that it’s implemented as a macro doesn’t mean other languages can’t have powerful iteration capabilities.

                                        1. 1

                                          One hopes.

                                  2. 2

                                    Talking about the garbage collector being worse than the JVM’s makes me wonder: why not use ABCL?

                                    1. 1

                                      The JVM, even with jlink, can get huge

                                    2. 2

                                      Our application consumes 2–4 gigabytes of memory but we run it with 25G heap size

                                      Wondering if this means they are provisioning servers with more than 25 GB of RAM to run a 2–4 GB application. If so, that seems… not ideal, and expensive.

                                      1. 3

                                        It’s possible they just tune the vm overcommit way up

                                      2. 1

                                        Unfortunately, there’s no way to influence [optimizations and compilation time] by turning them off or tuning somehow

                                        (declaim (optimize (speed 0)))?

                                        1. 2

                                          I do believe they’re referring to the ability to turn off specific optimizations. Obviously disabling optimizations will do that, but is not suitable for production.
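
                          For what the standard does let you tune: an `optimize` declaration takes the qualities `speed`, `safety`, `space`, `debug`, and `compilation-speed`, either globally or scoped to one function — which, as noted, is coarser than toggling individual optimizations:

                          ```lisp
                          ;; Global policy: favour fast compilation over fast code.
                          (declaim (optimize (speed 0) (compilation-speed 3)))

                          ;; Scoped to one function, leaving the rest of the image
                          ;; compiled under the global policy:
                          (defun hot-path (x)
                            (declare (optimize (speed 3) (safety 1)))
                            (* x x))
                          ```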