1. 15

    Always good to see more folks getting into Common Lisp!

    As far as I can tell, new Quicklisp projects have to be located inside ${HOME}/quicklisp/local-projects. I am not particularly happy with it, but it is not really important.

    If you want to put your projects elsewhere on your disk, you can put symlinks into local-projects that point to them. I even have a llp shell alias (“Link in Local Projects”) to run something like ln -s "$(pwd)" "$HOME/.quicklisp/local-projects" to make it easy to add the symlink (note that I keep quicklisp in ~/.quicklisp so it doesn’t clutter the file listing), since I do it so often.

I had already installed clisp, and it took me quite some time to understand my mistake.

    I will always be annoyed at clisp for choosing such a confusing name.

    Quicklisp packages (systems?) are defined through asd files.

    Common Lisp’s terminology is confusing because it’s old and uses a lot of words that we use now (like “package”) to mean different things than people mean today. Things get easier once you internalize what Common Lisp means by the terms.

    A package in Common Lisp is a container for symbols. They’re a way to group related names (symbols) together so you don’t have to do the miserable prefixing of every name with mylibrary-... like you do in Emacs Lisp or C to avoid name clashes.

    Unlike other languages like Python/Clojure/etc, packages and files on the hard disk are not tied strongly together in Common Lisp. You can have several files that all work with the same package, a single file that switches back and forth between packages, or even write code that creates/mutates packages at runtime. Files and packages are orthogonal in Common Lisp, which gives you maximum flexibility to work however you want.
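A minimal sketch makes this concrete (the package and function names here are made up for illustration) — note that nothing in it refers to any particular file; it could live in one file, be split across several, or be typed at the REPL:

```lisp
;; Hypothetical example: create a package and add a function to it.
(defpackage :my-library
  (:use :common-lisp)
  (:export :greet))

(in-package :my-library)

(defun greet (name)
  (format nil "Hello, ~a!" name))

;; From another package, refer to the exported symbol by prefix --
;; no manual mylibrary-greet style prefixing needed.
(in-package :cl-user)
(my-library:greet "world")  ; => "Hello, world!"
```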

    A system in Common Lisp is a collection of code (mostly) and a description of how to load that code (and some metadata like author, license, version, etc). For example: my directed graph library cl-digraph contains a system called cl-digraph. That system has a description of how to load the code, and that description lives in the cl-digraph.asd file. It creates a Common Lisp package (called digraph).

    The Common Lisp language itself has no knowledge of systems. ASDF is a Common Lisp library bundled with most (all?) modern implementations that handles defining/loading systems. (The name stands for Another System Definition Facility, so as you might guess there have been several other such libraries. ASDF is the one everyone uses today.)
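A system definition is just a form in an .asd file. Here is a hedged sketch of what a minimal one might look like (the system name, author, and file names are hypothetical, not taken from cl-digraph):

```lisp
;; my-library.asd -- a minimal, hypothetical system definition.
(asdf:defsystem :my-library
  :description "An example system."
  :author "Jane Doe"
  :license "MIT"
  :version "0.1.0"
  :serial t                       ; load the components in order
  :components ((:file "package")  ; package.lisp: the defpackage form
               (:file "main")))   ; main.lisp: the actual code
```

Once ASDF can find the .asd file, (asdf:load-system :my-library) compiles and loads the listed files in order.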

    Systems and packages are orthogonal in Common Lisp. Some systems (like small libraries) will define exactly one package. Some systems will define multiple packages. Rarely a system might not define any new packages, but will use or add to an existing one.

    A project in Common Lisp is not an official term defined anywhere that I know of, but is generally used to mean something like a library, a framework, an application, etc. It will usually define at least one system, because systems are where you define how to load the code, and if it didn’t define a system how would you know how to load the code? My cl-digraph library mentioned above is a project that defines three systems:

    • cl-digraph which contains the actual data structure and interface.
    • cl-digraph.test which contains the unit tests. This is a separate system because it lets users load the main code without also having to load the unit test framework if they’re not going to be running the tests.
    • cl-digraph.dot which contains code for drawing the directed graphs with Graphviz. This is a separate system because it lets users load the main code without also having to load the cl-dot system (the graphviz bindings) if they don’t care about drawing.

    If I were writing this today I’d use ASDF’s foo/bar system naming notation instead of separating the names with a dot, because there’s some nice extra support for that. I just didn’t know about it at the time, and don’t want to break backwards compatibility now.
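With the slashy notation, the secondary systems would be named cl-digraph/test and cl-digraph/dot instead, and ASDF knows by convention to look for them in the primary system's .asd file. A hypothetical sketch (the names are made up):

```lisp
;; Inside my-library.asd, alongside the main defsystem form.
;; ASDF automatically finds my-library/test in this same file.
(asdf:defsystem :my-library/test
  :depends-on (:my-library
               :some-test-framework) ; hypothetical dependency
  :components ((:file "test")))
```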

    We saw how Common Lisp has no concept of a system — that comes from ASDF. Similarly, ASDF has no concept of the internet, or reaching out to somewhere to download things. It assumes you have somehow acquired the systems you want to load, maybe by sending a check to an address and receiving a copy of the code on floppy disk, as many of my old Lisp books offer in the back pages.

Quicklisp is another library that works on top of ASDF to provide the “download projects from the internet automatically if necessary” functionality that people want in our modern world. So when you say (ql:quickload :cl-digraph) you’re asking Quicklisp to download cl-digraph (and any dependencies) if necessary, and then hand it off to ASDF to actually load the code of the cl-digraph system. Unlike ASDF, Quicklisp is relatively new in the Common Lisp world (it’s only about eight years old) and is not bundled with any modern Lisp implementation (that I know of) (yet?), which is why you need to install it separately.
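The division of labour looks something like this at the REPL (assuming Quicklisp is installed; treat the final call as illustrative of calling into the loaded package rather than a tour of cl-digraph’s API):

```lisp
;; Quicklisp: fetch cl-digraph (and its dependencies) from the
;; internet if they aren't already present, then ask ASDF to load them.
(ql:quickload :cl-digraph)

;; ASDF alone: load a system that is already on disk.
;; No network access happens here.
(asdf:load-system :cl-digraph)

;; After loading, the package the system defined is usable:
(digraph:make-digraph)  ; illustrative call into the digraph package
```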

    So to summarize:

    • Files are files on your hard drive.
    • Packages are containers of symbols. They are orthogonal to files.
    • Systems are collections of code, plus instructions on how to load this code. They are orthogonal to packages.
    • Projects are high-level collections of… “stuff” (code, documentation, maybe some image assets, etc). They are (mostly) orthogonal to systems (seeing a trend here?).
    • Common Lisp knows about files and packages. ASDF adds systems. Quicklisp adds the internet.

    This is a pretty high-level view that glosses over a lot, but maybe it’ll clear things up and help you dive into the details later.

    1. 3

      Thanks a lot for taking the time to write this. I learnt a lot by reading it. Common Lisp is definitely interesting, and I am glad I finally started my journey to learn it :).

      1. 2

Like sjl said, I never put my own projects in ~/quicklisp/local-projects but always put a symlink in there. I think that was the intent of that directory.

    1. 3

This is starting to be exhausting. It is like a painful agony that does not want to end (also, Intel won’t die anytime soon).

I stopped reading these findings for now. A good post-mortem report in a few months would be really valuable.

      1. 7

        I couldn’t find the ChangeLog, thanks a lot for the additional link!

        1. 4

          The release’s commit message might be helpful here https://github.com/neovim/neovim/commit/44c6deb91ac917703c157d564eb3accbff4d37af

        1. 2

          I found this reading very interesting, and my favourite trick is the HKD type family.

          1. 5

I will present the results of my research at a seminar this Wednesday, and my slides are not quite ready, so I am trying to catch up. On a more hobbyist side, I will try to continue working on flux and potentially chain, two Haskell libraries I wrote recently. I still need to decide on a license (probably GPL) and write a tutorial, and then I will be able to push them to Hackage. At least, this is the plan.

            1. 2

This isn’t funny anymore. :D That being said, I wonder if TXT will be the next Intel tech to fall. I mean, it probably could, but I feel like it has been less studied (it is harder to use, admittedly).

              1. 2

                Almost all security research in CompSci that’s targeted for Intel is built on one or more of paging, segments, the virtualization extensions, TXT, and now SGX. That’s unless I’m forgetting something. Regardless, the people looking for Intel flaws should throw everything they have at those specifically. Especially at paging and virtualization given dependence of enterprises and clouds on them.

Even when nothing is found, they should make automated probes or tests for whatever they were looking for that they can run on new processors from Intel. That’s because the errata (hardware bugs) of each might vary: one CPU that’s immune could be followed by another that isn’t.

              1. 8

So, I am confused. I mixed up several versions of this article, and the version that was online when I submitted this story was not the right one; in particular, it was lacking several changes I made after receiving feedback from Mastodon.

                I apologise for the inconvenience, this is fixed now. In particular:

                1. No more confusion between ReturnT and ResultT, the correct name (ResultT) is used everywhere now
                2. I have added several footnotes to actually give pointers to read, related to some GHC pragmas I am using
                3. ResultT is an alternative to EitherT, not to mtl or Eff
                1. 3

                  Note there’s a Rust devroom

                  1. 2

                    This is a bit out of scope, but does anyone know what the colour code means?

                    1. 1

I used to run a devroom there… AFAIK, it’s just that you can more easily map the table to the timeline; no specific meaning.

                  1. 1

                    If you highlight the REDACTED “black rectangles” in the text, you can see the name of the city.

So the author is actually using the name of the city. Uh oh.

                    1. 6

                      At the very end there’s also a button to un-redact those, so the author obviously doesn’t really think there’s an issue using the name.

                      1. 3

However, the chance that the city will actually go after a civilian for using the city’s name still seems a little ridiculous, doesn’t it?

                        Except, this actually happened to an owner of a Facebook page with ~14k page

                        And, yes, there is a button, along with:

                        Since this decision defies all logic, and since I have already broken this decision unintentionally a lot of times, I have decided to add a button on the bottom of this article. By clicking on it, this article will no longer be censored, which will make me break the city’s decision exactly 38 times.

Completely aware of the possible legal consequences of me mentioning the word […], I encourage you to click that button. After doing so, this article will stop being censored.

Therefore, I think the author believes there is an issue with the law, and defies it at his own scale.

                    1. -2

What I find “ridiculous” is the use of the term “ridiculously” in the title; I put it on the same level as “blazingly,” a term that has been used so loosely in recent years that it no longer means anything.

                      1. -1

                        ridiculously, adv. so as to invite mockery or derision.

                        I guess that means it’s actually really slow?

                      1. 4

                        Aha, glad to see more people thinking of replacing prelude, especially Snoyman!

The Foundation project aims at the same goal, but I guess having an FP Complete-backed alternative cannot hurt!

[edit] Right, posted this comment too soon! This is a different approach: they plan to actually reuse the existing libraries. This is definitely nice; hopefully the Prelude problem will be fixed 2–3 years from now.

                        1. 3

What “Prelude problem”?

                          1. 5

                            The project README more or less states the problem:

                            The RIO module works as a prelude replacement, providing more functionality and types out of the box than the standard prelude (such as common data types like ByteString and Text), as well as removing common “gotchas”, like partial functions and lazy I/O. The guiding principle here is:

                            • If something is safe to use in general and has no expected naming conflicts, expose it from RIO
                            • If something should not always be used, or has naming conflicts, expose it from another module in the RIO. hierarchy.

                            Snoyman and FP-complete are trying to move Haskell more in the direction of a batteries-included solution for software development. The Haskell Foundation project mentioned by @NinjaTrappeur above is attempting the same thing.

Many of the changes RIO makes as a Prelude replacement solve problems beginners don’t know they have until they’ve been coding a while in Haskell. Using String rather than Text or ByteString is one of the most common mistakes beginners make. And why shouldn’t they? It’s right there in the “base” of the language. And then you learn, often after months of coding, that String performance is a disaster.

Whether RIO is the right solution, time will tell. That it’s a step in the right direction is beyond doubt.

                            1. 5

                              I personally use Protolude. The problems it solves (for my projects, on my computer, for my use cases) are:

• head :: [a] -> a becomes head :: [a] -> Maybe a (and likewise for all the other partial functions that throw error "message", like tail and so on…)
• everything is Text
• convenience functions that I used to copy into all my projects, for example toS, which converts from any string-like type (Text, String, ByteString, …) to any other string-like type.
• foldl, head, … work on Foldable/Traversable structures, not just lists
• a lot of other stuff that I can’t recall off the top of my head
                              1. 2

afaik, it’s the issues connected with the standard Prelude: either inefficient data structures (String is defined as [Char], i.e. a linked list) or a simple lack of utilities, which are then commonly installed by many users afterwards (e.g. Data.Vector). Many other “alternative” preludes have tried to replace the standard one, but most of them can’t manage to get any significant adoption.

                                That’s at least what I understand when someone says “Prelude problem”.

                                1. 2

The Foundation README gives some information about this “problem”; RIO gives other arguments. The two main issues people have with Prelude are partial functions and lazy IO, as far as I can tell.

                                  1. 1

                                    @akpoff, @zge and @lthms pretty much summed up the problem.

I would also add another problem class: “legacy semantics”.

                                    [EDIT] The following statement is wrong.

The most notable offender is the Monad typeclass. As it is defined in base (Prelude re-exports parts of the base library), Applicative is not a superclass of Monad. Those two typeclasses are actually completely unrelated as implemented. In other terms, you could end up with a Monad that is not an Applicative. Some people are trying to fix that directly in base, some are trying to fix it in external libraries such as Foundation.

In the end, it is not such a big deal for an intermediate/experienced developer; however, it is quite confusing for newcomers. Not knowing what you can safely use from the standard library is not a really nice user experience, in my opinion.

[Edit] As a side note, I am saddened to see that return is preferred over pure. This name has a totally different meaning in procedural languages (i.e. 90% of languages); using it in this context is just a constant source of confusion for newcomers…

                                    1. 1

                                      Applicative has been a superclass of Monad for quite some time in GHC base. I disagree with the change (breaks compatibility) but understand why they did it.

                                      1. 1

                                        Oh yes, you’re right, my bad!

See Monoid and Semigroup; the problem is quite similar.

                                1. 6

                                  I really like the idea behind the theme and the general feel.

                                  For my personal taste, as a text editor theme, the background is a little too light and the comments are barely visible. And I say that as someone who uses Solarized.

                                  These things are highly subjective of course, and also depend on what room you’re in, what time of day, and what display.

                                  1. 2

                                    For my personal taste, as a text editor theme, the background is a little too light and the comments are barely visible.

                                    The emacs theme deals with this by adding nord-comment-brightness.

                                    I think I’m finally going to replace Zenburn (my previous all-time favourite theme)!

                                    1. 1

                                      Personally, I’ve tweaked my nofrils-like theme for emacs using these colors.

                                  1. 2

                                    I am still a bit confused by the Nix vs. Guix thing. Not that I am against having two similar projects per se, but I don’t know.

                                    1. 11

Guix took parts of the Nix package manager and retooled them to use Scheme instead of the Nix language. This works because the Nix build process is based on a representation of the build instructions (“derivations”) and has nothing to do with the Nix language. Guix is also committed to being fully FOSS, whereas the Nix ecosystem will accept packages for any software we can legally package.

                                      1. 9

Also, there is of course accumulated divergence, as people with some particular idea happen to be in one community and not the other.

                                        Nix has NixOps and Disnix, but there still is no GuixOps.

                                        On the other hand I believe the guix service definitions are richer, and guix has more importers from programming-language-specific package managers, but then on the third hand the tooling for Haskell and Node in particular is better in nix.

Nix supports OS X, FreeBSD and Cygwin; Guix supports the Hurd.

                                    1. 2

                                      Great article!

                                      Also, a heads up that the two local links at the beginning of the post are broken.

                                      1. 2

                                        Thanks! I will fix that as soon as I can.

                                      1. 2

The Management Engine (ME) in x86-based computers just rids me of any desire to study the security of this architecture anymore. I mean, what is the point if you are playing against the hardware designer?

                                        1. 2

Protecting the massive number of people and companies that are using x86. In the case of legacy systems, they’re possibly doing it against their will, since they can’t justify the risk of crashing the whole business by totally changing all their internal systems and processes. So, a percentage of the security field works on x86-based solutions to mitigate what they can of that, with another set doing monitoring and recovery solutions to pick up where the first set leaves off.

Same for mainframe-based stuff. Same for ARM TrustZone now, even though it’s already been quite broken, based on a link I just submitted.

                                          https://lobste.rs/s/ordicj/clkscrew_exposing_perils_security

                                        1. 3

                                          Of course there is an emacs mode for X!

                                          1. 2

Thanks a lot for sharing that! I didn’t know about TypeApplications and was using Proxy myself to deal with this matter. Using the pragma, my code is now much more readable.

                                            1. 30

                                              Do not attempt to compete with Pinboard. —maciej on June 01, 2017

                                              Gotta give the man credit for a healthy ego :) Pinboard does rock though. Been a happy user for years.

                                              (Though I’m still waiting for my ‘Fetch Title’ button on the web UI :) )

                                              1. 12

                                                Maybe I’m giving Maciej too much credit, but my reading of that was that he wrote it with his tongue firmly in his cheek. :o)

                                                1. 5

                                                  Oh absolutely! I think Maciej should get all the credit in the world :)

                                                  Honestly, things like Pinboard are niche services mostly for us uber-geeks and others who care enough to want to wrangle huge bodies of articles and maintain essentially random access to all of it.

                                                2. 5

                                                  That date sure helped. For an entire minute, this blog post sounded like something you’d post on April 01.

                                                  1. 1

I don’t know the guy, so I have to say that when I first read the post, this particular sentence caught my attention!

                                                  1. 2

Can one actually generate running code from Coq? I’ve never quite understood how one writes programs in the conventional sense with Coq.

                                                    1. 4

                                                      can one actually generate running code from Coq?

                                                      You can extract Haskell or OCaml programs from Coq:

                                                      https://coq.inria.fr/refman/Reference-Manual026.html

                                                      This was done for example when XMonad was partially verified using Coq:

                                                      https://github.com/wouter-swierstra/xmonad

                                                      I’ve never quite understood how one writes programs in the conventional sense with Coq.

Most of the time you don’t, really. Proof extraction doesn’t always result in the best code. Sometimes there are performance problems or integration problems. It’s possible, but often Coq is used to prove code correct, and the code is then ported to other platforms.

                                                      Maybe this will change in the future. Some recent ideas are going around at the moment:

                                                      In addition to using Conal’s work in Haskell, I’m also working on a Coq rendering of his idea, which I hope will give me a more principled way to extract Coq programs into Haskell, by way of their categorical representation.

                                                      1. 1

                                                        So most usage of Coq is really about writing proofs?

I’ve always been a bit interested in playing around with the language, but I don’t really know what the use of the proofs is if the implementations can diverge from the proofs.

                                                        1. 3

                                                          So most usage of Coq is really about writing proofs?

I’d say so, IMO. It’s been almost a decade, but a huge breakthrough for Coq in the 2000s was Gonthier’s proof of the four color theorem in 2004. Many regard this as the most readable proof of the four color theorem, even though it still requires trusting a computer to validate 663 cases.

People talk about validating the classification of finite simple groups as one of the long-term goals of this work, since the sheer length of the proof makes some mathematicians wonder about its correctness.

                                                          A lot of people are attracted to Coq because of its use of dependent type theory. Idris may provide a compelling alternative, however it may not be as mature as a proof assistant.

Personally, I use a proof assistant in my spare time to explore some of my ideas in mathematics. I use Isabelle/HOL rather than Coq. This is because I’ve found Isabelle/HOL has syntax closer to conventional mathematics than Coq, which makes adapting arguments from papers and books easier for me. It also has more convenience features for non-constructive proofs than Coq, last time I checked, as well as better integration with third-party theorem provers like E and Z3, which makes it easier to quickly find automated proofs.

Exploring proofs in pure mathematics is all the utility I’ve found for computer proof assistants. For solving an obscure integral or dealing with symbolic z-transforms and their roots, I’ve had more luck with Mathematica. Whenever I try code generation for anything, I write unit tests. For real-world correctness I think you are best served by generative testing and finding simple mathematical invariants to check (e.g. left/right inverse functions).

That being said, if you love Aigner et al.’s Proofs from THE BOOK, programming, and trying out your own proofs, then a computer proof assistant can be fun.

                                                          1. 2

Coq has a feature called code extraction. Basically, all of the Prop world goes away while the Gallina code is turned into OCaml, Haskell or Scheme. The one important thing to know is that the extraction algorithm is not itself certified. There is no equivalence proof (similar to CompCert’s) between the Coq semantics and the OCaml semantics, for instance.

                                                            So, the extractor is part of the TCB and, indeed, the proof link is broken between the “model” and the “concrete machine”. However, one can argue it raises the level of confidence anyway.

One thing to know, which Coq manuals and tutorials never stress (imo), is that code extraction is not straightforward. It is hard to extract code in a safe and efficient manner. For instance, just consider the nat type. You really do not want to use nat in most of your program because it is highly inefficient. Yet, all the algorithmic proofs you do in Coq are likely to rely on nat.

Also, having readable extracted code is not that simple. Most of the time, it goes well. But sometimes, not so much. For instance, I once had some trouble with dependent types.

                                                      1. 2

I really want to give pijul a try, but unfortunately I cannot install it using cargo (I get an error when it tries to compile sanakirja). :(

                                                        1. 3

You may have to use a nightly build of Rust. I was able to install pijul with cargo.

                                                          $ rustc --version 
                                                          rustc 1.17.0-nightly (b1e31766d 2017-03-03)
                                                          
                                                          1. 2

                                                            It does not actually require nightly.

                                                            1. 1

It appears I had previously built and installed Rust manually, so… my mistake, sorry. And thanks for the hint!