1. 6

    Interesting. I’m pondering non-turing-complete languages myself. I believe they are under-estimated and should be used more often for anything that looks like configuration.

    Configurations usually start simple, like we only need “x = y” statements. Oh wait, sections would be nice, so use the ini-format. Oh wait, nesting stuff would be nice, so use JSON/XML/Lisp. Oh wait, we want to reduce duplicated stuff, so use a preprocessor. Oh wait, more abstractions would be nice, so use some scripting language already. Stop, we just skipped one level before scripting languages, the non-turing-complete languages!
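
    To make that escalation concrete, here is the same tiny config at three of those stages, sketched in Python (the service names are made up; stage 3 is roughly what a ‘let’ binding in a non-Turing-complete language like Dhall buys you):

```python
# Stage 1: flat "x = y" pairs (ini-style)
flat = {"web_host": "web1", "web_port": 8080,
        "api_host": "api1", "api_port": 8080}

# Stage 2: nesting (JSON-style) -- the duplication is now visible
nested = {"web": {"host": "web1", "port": 8080},
          "api": {"host": "api1", "port": 8080}}

# Stage 3: abstraction to remove duplication. In a non-Turing-complete
# config language this is a 'let' binding; sketched here as a pure helper.
def service(host, port=8080):
    return {"host": host, "port": port}

deduped = {"web": service("web1"), "api": service("api1")}
assert deduped == nested  # same config, no repeated structure
```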

    Concerning Dhall, I had never come across the “Oh wait, we want to annotate types” idea. Is that really desirable for configurations?

    1. 6

      should be used more often for anything that looks like configuration

      And perhaps not just that. It’s worth checking out Turner’s “Total Functional Programming” and The Little Prover if you’re curious and unfamiliar with the space/ideas.

      1. 6

        Yes, configuration languages escalate quickly to full-fledged languages.

        1. 4

          Oh wait, more abstractions would be nice, so use some scripting language already. Stop, we just skipped one level before scripting languages, the non-turing-complete languages!

          Jumping to a scripting language is also jumping from a pure language to an impure one. IMHO this may be a bigger problem than going from total to Turing-complete. When a scripting language is used for config, it’s very tempting to start reading in other files, checking the hostname, overriding behaviour using env vars, checking for something in a database, etc. and before long you’ve got a whole “configurator” application, complete with its own config; and ‘checking what the configuration says’ for your original application may (a) give different results depending on the phase of the moon, and (b) cause arbitrary side-effects. It also becomes tempting to put functionality in the “config script” which should really live in the actual application.
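
          To illustrate the temptation (hypothetical names, Python for brevity): an impure config script can consult anything on the machine, while a pure config is a function of its explicit inputs only:

```python
import os
import socket

# An impure "config script": the result depends on env vars, the hostname,
# and whatever else it cares to inspect (hypothetical example).
def load_config_impure():
    cfg = {"workers": 4}
    if "OVERRIDE_WORKERS" in os.environ:          # env-var override
        cfg["workers"] = int(os.environ["OVERRIDE_WORKERS"])
    if socket.gethostname().startswith("prod-"):  # host-dependent behaviour
        cfg["debug"] = False
    return cfg

# A "pure" config: same inputs, same result, no side effects --
# nothing else to audit.
def load_config_pure(workers, debug):
    return {"workers": workers, "debug": debug}
```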

          With a pure config, the only thing that the code can do is to calculate some part of the result; it can’t even alter other parts of the result, due to immutability.

          I’ve been writing a lot of stuff in the Nix expression language recently, and its purity makes it quite nice. It’s only pure by convention though, since ‘derivation’ values can contain arbitrary code (usually Bash) which can be executed using ‘import’, if one were so inclined.

          1. 1

             There’s also the fact that “oh wait, we want to reduce duplicated stuff” can be seen as “oh wait, we have our configuration model all wrong”.

          1. 5

            Can’t help but notice that most of those success stories are 10-15 years old.

            I feel like what I want to know most is not being addressed: why, in 2016, would I choose Common Lisp over anything else? In contrast to 2000, most or all of what made Lisp special is available in many other languages today – homoiconicity (Clojure, LFE, Elixir), a strong object system (Scala, Ruby, Perl), native compilation (Go, Rust), etc.

            1. 11

              You listed 8 different languages. Part of the appeal of Common Lisp is that it has the advantages of all of those languages in one.

              1. 5

                Can’t help but notice that most of those success stories are 10-15 years old.

                Last time I checked, ITA (bought by Google), Grammarly and SISCOG are all in business and using CL today.

                1. [Comment removed by author]

                  1. 5

                    If “what technology startups are picking” is our barometer, then we’re never going to escape the pop culture. The options afforded to greenfield development aside, startups are not exactly incentivized to make good engineering decisions.

                    1. [Comment removed by author]

                      1. 1

                        I’ll add that OCaml is a language under active development. That fact shocked me!

                    2. 3

                      Can the lisp community mold itself into something that appeals to outsiders in today’s world? I think it can, but it hasn’t yet.

                      Well put!

                    3. 1

                      Right, “most” is a word that leaves room for a few counterexamples.

                      Even so, two of the three companies you mention wrote the bulk of their Lisp code prior to ten years ago. I also hear from people inside Google that most of the ITA Lisp code has been rewritten at this point.

                      My question remains: why use Lisp now?

                      1. 3

                        The examples I picked are the ones highlighted on the main page, not ones I cherry-picked. I’ve taken the time to count them all: of the 15 examples, 7 are current and 8 are old. The author has gone out of their way not to list only use cases from the ‘glory days’ of CL. Given that people mostly write in what they’re familiar with (the reason C#’s GC was written in CL and then mechanically transformed is that the author already knew CL) and that the Lisp community is small, it seems to me the author has done a fair job of listing current Lisp success stories.

                        I don’t know people inside Google, but given that people there are still contributing SBCL features from their own fork (like the fast interpreter ~5 months ago) and build their Lisp code with Bazel, there is still Lisp code at Google; I don’t know how much.

                    4. 5

                      Common Lisp provides all of the above, for one thing. :-) So you don’t have to sacrifice X for Y and struggle with the tradeoffs.

                      What Common Lisp provides, specifically, that the above struggle to, is a fully integrated system of interactive development in the tradition of Smalltalk. I’m tempted to believe that Erlang might provide such a thing as well, but I regret to say that I don’t understand it well enough.

                      Common Lisp provides, further, a strong cross-platform story; if you’re interested in old-school enterprise development, Franz and Allegro both provide enterprise integrations in their commercial offerings. CL also has a reasonably functional JVM port (ABCL) as well as a subset (mocl) compiling for iOS and Android. Parenscript is also on offer to compile to Javascript.

                      Simply put: if you want to invest in Common Lisp for your company and deploy it everywhere, it will take you through the entire modern stack of development without having to port functionality into new languages. You can build a company on it and deliver products with it, for platform after platform, product after product. I think out of all the other existing languages, only C provides the same level of cross-platform capability, and at a profoundly lower level of capability.

                      What Common Lisp does not provide, however, is strong compile-time type checks: for that, Rust and Scala (from the above list) stand out. Libraries can be an issue, if you have some genuinely complicated problems that you don’t care to address in-house.

                      What Common Lisp does not offer, further, is the “whipitupitude” that Perl & Perl’s children so prize. It and its community have valued studied, thought-out solutions to problems over quick hacks. This has made it less than perfectly popular in the 2008+ Zeitgeist.

                      1. 2

                        If I wanted to get back into writing CL, what compiler would one use (OS X)? I had a license ages ago for Allegro, but am more interested in SLIME + ??? these days.

                        1. 3

                          Either CCL or SBCL. CCL will have more OSX-y integrations; SBCL is finely tuned for Linux but works well on OSX, and is, AFAIK, what 80% of people in the open source community use.

                          I’d suggest seeing what your Allegro license will get you today. It’s the case that SLIME pretty much integrates with everything out there AFAIK.

                      2. 3

                        The purpose of this page is to sell you on CL and to provide clear, easy-to-follow instructions for setting up your own CL development environment. An honest comparison with other languages would take a considerable amount of effort, outside the scope of this project; that is why the ‘comparisons’ are mostly fluff, touching on syntax and using weasel words/marketing slogans like ‘pythonic’.

                        Some further comments:

                        native compilation (Go, Rust), etc.

                        That is a property of an implementation, not of a language.

                        a strong object system (Scala, Ruby, Perl)

                        What does “strong” mean in this context? Seems like a weasel word to me. I know Perl has a MOP, but not much about it. Ruby does have hooks like respond_to_missing?, but AFAIK (I may be wrong) slots/attributes are not instances of classes themselves, so the programmer is not able to extend and modify their behaviour.

                        Clojure

                        I don’t know much about Clojure, but here’s someone who does and who thinks CL is better designed.

                        More importantly, a PL is more than a collection of features. It is also important how the features play off each other. Now, by no means is CL perfect. I think the MOP could be further improved; for example, Pascal Costanza has written about the woes of make-method-lambda. But it does have a lot to offer, even 20 years into the future.

                        1. 1

                          Everything you can refer to in Ruby is an instance of a class (including Class, Object, etc.). Methods are definitely instances.

                        2. 1

                          Clojure and cljs are very appealing to me for new projects

                        1. 3

                          I don’t know much about Erlang but I’m curious. What measures are taken to get a million processes to run together?

                          Do you just have a task pointer that iterates between all one million processes? Does each process have a pointer to the processes it calls? Is there any effort to avoid cache hits? What happens if some of the 1.5 gb of processes winds-up cached-out by the OS or is some effort made to avoid this?

                          1. 6

                            What measures are taken to get a million processes to run together? Do you just have a task pointer that iterates between all one million processes?

                             An Erlang “process” is not a thread or a process in the operating-system sense; it’s a struct with a chunk of heap, a pointer to some code, etc. These processes are run on a set of schedulers (in the modern SMP BEAM), often mapped 1:1 to operating system threads; each scheduler maintains a queue of processes waiting to execute. Schedulers may steal processes from one another if work is unbalanced.
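
                             A toy model of that run-queue-plus-work-stealing setup (Python, heavily simplified; real BEAM schedulers count reductions, handle priorities, run on real threads, etc.):

```python
from collections import deque

# A "process" is just a struct: an id and some remaining work.
class Process:
    def __init__(self, pid, steps):
        self.pid, self.steps = pid, steps  # steps left until done

class Scheduler:
    def __init__(self):
        self.run_queue = deque()

    def step(self, others):
        if not self.run_queue:                 # idle: try to steal work
            for other in others:
                if other.run_queue:
                    self.run_queue.append(other.run_queue.pop())
                    break
        if self.run_queue:
            proc = self.run_queue.popleft()
            proc.steps -= 1                    # run one "reduction"
            if proc.steps > 0:
                self.run_queue.append(proc)    # reschedule until finished
```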

                            Does each process have a pointer to the processes it calls?

                            There aren’t any inter-process “calls”, exactly - processes communicate via asynchronous message passing. Each process has a “mailbox” of messages to process and can process them as it pleases. You can imagine that it’s pretty trivial to implement an RPC-like mechanism on top of such a system.
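
                             For instance, the RPC-on-top-of-mailboxes trick is just “send a message carrying a reply-to mailbox and a unique tag, then wait for the matching reply”. A rough Python sketch (the names are mine, not Erlang’s):

```python
import itertools
import queue
import threading

# Message passing only: each "process" owns a mailbox (a queue).
class Proc:
    def __init__(self):
        self.mailbox = queue.Queue()

def server_loop(server):
    # Pull messages from the mailbox and reply; "stop" ends the loop.
    while True:
        tag, reply_to, payload = server.mailbox.get()
        if payload == "stop":
            break
        reply_to.put((tag, payload * 2))  # the "work": double the payload

def rpc(server, payload, _tags=itertools.count()):
    tag = next(_tags)
    reply_box = queue.Queue()
    server.mailbox.put((tag, reply_box, payload))  # async send...
    rtag, result = reply_box.get()                 # ...block for matching reply
    assert rtag == tag
    return result
```

                             Under these made-up names, once a thread is running server_loop(server), a call like rpc(server, 21) behaves exactly like a synchronous call even though only async messages exist underneath.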

                            Is there any effort to avoid cache hits? What happens if some of the 1.5 gb of processes winds-up cached-out by the OS or is some effort made to avoid this?

                            I’m not totally sure what you’re asking here. If you’re asking if the BEAM actively attempts to manipulate the operating system’s page cache, I’m pretty sure the answer is no.

                          1. 26

                            I’m wholly unconvinced a single number type is a good thing. It’s true, you don’t need to think as hard if you unify your integer and floating-point types. But that just provides an opening to introduce subtle new bugs in cases where the semantic differences between the two matter.

                            1. 15

                              I can’t even slightly imagine a world where unifying naturals or integers with reals is anything but a giant explosion waiting to happen. I do think there’s something interesting in pretending that numbers are ideal (thus infinite) by default and having Int32, Int64, etc only as special cases.

                              1. 2

                                Ermm, don’t Lisps do that?

                                1. 8

                                   Scheme has something like that, but it’s a bit more finicky than that. Instead of taking the full algebraic perspective, Scheme distinguishes abstractly between exact and inexact values and computations. It’s dynamically typed, however, and so it’s a little difficult to say whether it truly “unifies integers and reals” in any sense.

                                  Javascript is well-known for making that unification and it lacks any notion of exact/inexact. So, instead, you just face a bunch of issues.

                                  Real numbers probably just straight shouldn’t support equality. It’s a theoretically undecidable operation.
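
                                   A quick Python demonstration of both problems — silent integer collapse under a unified float type, and the treachery of naive equality on inexact values:

```python
# A JS-style unified "number" is a 64-bit float; above 2**53 adjacent
# integers become indistinguishable:
big = 2.0 ** 53
assert big + 1 == big

# Kept separate, exact integers don't collapse:
assert 2 ** 53 + 1 != 2 ** 53

# And naive equality on inexact values is already treacherous:
assert 0.1 + 0.2 != 0.3
```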

                                  1. 4

                                    Relatedly, does anyone know much about what it would take to replace “real numbers” with “real intervals”? It seems like something that has certainly been tried in the past. It would neatly solve equality issues by replacing them with more quantitative measures like “what is the (relative) measure of the overlap of these two intervals?”.

                                    Do real intervals form a field? (They ought to.) What does linear algebra look like here?

                                    1. 3

                                       Mathematically, I agree that real intervals should form a field (I haven’t looked into the corner cases, though). In the context of ‘Real numbers probably just straight shouldn’t support equality,’ I’m unsure how you’re going to define your identity values for + and *, since they depend on equality of real intervals, which presumably depends on equality of reals. There’s a similar argument for inverses.

                                      1. 2

                                        Why do they depend upon equality? You’d need some form of equality to prove their properties but their operation shouldn’t require it.

                                      2. 1

                                        Spire has an interval type but it relies on an underlying implementation (which might be “computable real”). I don’t see what you’re suggesting or how it would work - if you want to compute precisely with real intervals then you just do twice as much work. If you want to compute with single values and have strict control over the error margins, wouldn’t you just work with computable reals?

                                        Intervals form a semiring but not a field.
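
                                         A few lines of Python show why: with the usual interval operations, x - x is not the zero interval (the “dependency problem”), so additive inverses are missing and you can’t get a field:

```python
# Minimal interval arithmetic over endpoints [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

x = Interval(1, 2)
d = x - x          # the "dependency problem": x - x widens to [-1, 1],
                   # not [0, 0], so additive inverses don't exist
```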

                                        1. 1

                                          I’m more spitballing than thinking hard about it—evidenced by the fact that I didn’t even bother checking whether they were a field myself.

                                          I just think it’s really interesting that real numbers are non-computable. Handling this sanely has roots in Brouwer and likely you could find even earlier ones. Andrej Bauer has some interesting papers on this as well, if I recall.

                                        2. 1

                                          I did some experiments with plotting formulas with interval arithmetic last year. (The formula parser is still broken because all the operators associate right.) Playing around with it may give you some ideas for what works well and what works badly in naïve interval arithmetic. There’s been some work that I don’t fully understand by Jorge Eliécer Flórez Díaz extending interval arithmetic to do super-efficient ray-tracing which I think may solve some of the problems.

                                        3. 1

                                          Real numbers probably just straight shouldn’t support equality. It’s a theoretically undecidable operation.

                                          Theoretically undecidable under what model? Tarski demonstrated real closed fields are complete and decidable.

                                          1. 2

                                            I think the difference is that while decidability pertains to equality of arbitrary formulae over reals, I’m talking about arbitrary equality of values. If you receive two real numbers knowing nothing more about them and wish to build an algorithm for testing equality then you’ll hit an infinite regress. You can easily show two reals are equal up to some pre-defined precision, though.
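
                                             That infinite regress is easy to see if you model a real as a function from a precision to a rational approximation: equality at a fixed precision is computable, full equality is not. A rough sketch (the names and the Newton convergence bound are my own assumptions, not a rigorous construction):

```python
from fractions import Fraction

# A "computable real" is a function n -> rational within ~2**-n of the
# true value. Deciding x == y in general would need infinitely many
# digits; deciding equality up to a chosen precision is easy.
def sqrt2(n):
    x = Fraction(3, 2)
    for _ in range(n.bit_length() + 4):  # Newton doubles precision per step
        x = (x + 2 / x) / 2
    return x

def eq_upto(x, y, n):
    # Decidable: are x and y equal to within 2**-(n-1)?
    return abs(x(n) - y(n)) <= Fraction(1, 2 ** (n - 1))
```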

                                            1. 1

                                              Hm. I’m going to have to go back to my modern analysis texts, but I’m pretty sure Tarski-Seidenberg provides for equality: by constructing the reals through Dedekind cuts over the rationals, you can guarantee a unique representation for each real number.

                                              This is all mathematical, though - while I’m 99% sure Tarski has an algorithm to prove two reals are equal, I think the implementation is unreasonably slow. I don’t know what’s new on this front. But I know which textbook I’m cracking open over the weekend!

                                              1. 1

                                                I’d be very interested to see that. I believe it’s possible to prove that two reals constructed from some algebra of reals are equal (though I think the algorithm is superexponential), but I’m fairly sure there’s no constructive function (n m : Real) -> n = m. Uniqueness of construction is a little different, though. You probably would get away with proving that by stating that if there were two representations you’d derive contradiction.

                                    2. 7

                                      Agreed. In particular, many languages would benefit from the addition of first-class rationals, complex numbers, etc.

                                      1. 5

                                        Obligatory Guy Steele link: https://www.youtube.com/watch?v=_ahvzDzKdB0

                                    1. 8

                                      This is really just runtime assertions, it’s not type checking in any sense anyone familiar with type systems would use.

                                      • Type info itself is object, you can check it and even change it during run time.

                                      • Checking type run every time method call… it might be overhead, but it’s not big deal.

                                      • There is no static analysis.
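
                                       In Python terms, that kind of “gradual typing” boils down to a decorator asserting isinstance on every call — no static analysis happens anywhere (a hypothetical sketch, not Rubype’s actual API):

```python
import functools

# Runtime "type checking": isinstance assertions at call time, nothing more.
def typed(*arg_types, returns=None):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            for a, t in zip(args, arg_types):
                if not isinstance(a, t):
                    raise TypeError(f"{fn.__name__}: expected {t.__name__}")
            result = fn(*args)
            if returns is not None and not isinstance(result, returns):
                raise TypeError(f"{fn.__name__}: bad return type")
            return result
        return wrapper
    return decorate

@typed(int, int, returns=int)
def add(x, y):
    return x + y  # add("a", "b") now fails at call time -- never before
```
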
                                      1. 5

                                        Yep. For reference, I believe the term “gradual typing” is derived from this paper, and there the authors specifically distinguish between runtime constructs vs static constructs. Words mean things, etc etc.

                                        1. 7
                                                                       user     system      total        real
                                          RubypeCommonClass        0.530000   0.010000   0.540000 (  0.566493)
                                          CommonClass              0.030000   0.000000   0.030000 (  0.035718)
                                                                       user     system      total        real
                                          RubypeDucktypeClass      0.590000   0.010000   0.600000 (  0.682504)
                                          DucktypeClass            0.030000   0.000000   0.030000 (  0.029856)
                                          

                                          ಠ_ಠ

                                          1. 2

                                            I’m not sure what you are responding to here. I’m not concerned with the performance, I’m concerned that this doesn’t actually do typing.

                                            1. 6

                                              I was just elaborating on the theme of “it does not do what they say.” A twentyfold decrease in performance does not sound like “it’s not big deal.”

                                              1. 2

                                                Ah, good point. Thank you.

                                            2. 2

                                              Out of curiosity, since this seems to be getting downvoted rather badly: why are you flagging this as “incorrect?” These are the benchmarks presented on the linked Github page.

                                            3. 3

                                              I have filed a GitHub Issue suggesting that Rubype change its terminology: Don’t say Rubype offers “gradual type checking”.

                                            1. 9

                                              There is no obligation to free labour. Every hour you put in working on your project for free is a gift to the world. If the world comes back to you and says “You are a bad person for not supporting this thing I need you to support” then fuck them. If they want that they should pay you for it, or do it themselves.

                                              Well, yes, but I would argue that below a certain threshold of quality, you should really consider whether it is worth publishing (or at least promoting) your library at all. There is such a thing as a library that is so shitty (or, more likely, so incomplete) that it costs its users more time than it saves them. If you must release such a library, I think you have an obligation to make the state of the library pretty clear upfront.

                                              I’ve wasted a lot of time dealing with libraries that purport to implement some spec or protocol, only to find out too late that they implement only those particular features which the author happened to need–but rarely are these limitations described upfront, and in fact these are often the same libraries promoted as the canonical implementation of $foo in language $bar.

                                              1. 11

                                                +1 to distinguishing between “publishing” and “promoting”. GitHub’s contribution to an “open by default” mentality is awesome, but I’d love to have a first-class indicator of whether or not the author of a tool actually thinks other people should use it. Open-source-as-a-marketing-tool can be pretty harmful if it’s not explicit.

                                                1. 6

                                                  Tangentially, it amazes me how little a big, bold “NOT MAINTAINED” tag in the readme does to discourage people from using the library (and subsequently harassing you with issues).

                                                  I have an older PHP library which I want nothing to do with anymore, and explicitly say “This is not maintained! Do not use in production!” (and below that, the old readme still says “Alpha status, do not use in production!”). Yet I still get emails about fixing XYZ, or asking why it imploded their mission critical production code or something. I just don’t even know how to go about answering those tbh. Its. Not. Maintained. :(

                                                  1. 1

                                                     Heh. Yeah, I occasionally get support requests (or people moaning on Stack Overflow) about 5- or 6-year-old versions of a library I wrote that has seen 2 major version updates since the version they’re using. Often they’ve obtained the source not from me but from some random person who wrote a tutorial and provided a downloadable archive of my code hosted on their blog, never linking to me at all. Often these people remove all “unnecessary” files like README, INSTALL and LICENSE. Grrr…

                                                    This library is an Objective-C one, and I released a new minor version almost exclusively devoted to conversion to ARC. This resulted in lots of moaning from people who upgraded without reading the documentation and thought my code had terrible memory leaks. I had to make it impossible to compile without ARC enabled to make those stop… So yeah, my experience is that people don’t tend to read documentation much… :-)

                                                  2. 4

                                                    I think the answer to this is essentially that not every release is for the purpose of trying to get people to use it. Also, authors are not necessarily aware that their reasons aren’t the same as everyone else’s, and that makes it harder to be proactive about making only the right promises given how people from other contexts will interpret them.

                                                    More than half the project pages I land on assume I know something about some highly specialized field like audio engineering or 3D modeling, and that I know that their project is in that field, when actually I’ve never heard its name before and they don’t clearly describe the project’s purpose anywhere on the front page.

                                                    I see this as the same problem. Quality expectations that an author doesn’t realize they’re exuding are a special case of all the expectations that an author doesn’t realize they’re exuding. Fix that first. :)

                                                    1. 2

                                                      This is a very interesting point. The question is how much “coverage” of a spec do you need and isn’t it OK as long as you document the limitations?

                                                      For example, I once wrote a library to load data from a particular file format. I handled some edge cases that I came across in my own work, but did not bother to cover many others. Notably, my library would only work with the version of the file format I was producing for my work, but not for earlier (and now later) versions.

                                                      My idea in releasing the library as open source was the hope that not only would someone be able to use it, they would be able to build upon it to add the edge cases they needed, etc.

                                                    1. 4

                                                      This touches on stuff I’ve been thinking about in the last week. @james: Is there more context for this? Are other folks thinking along the same lines? How did you find it?

                                                      1. 4

                                                        Author here. Full writeup at: http://pointfree.uk

                                                        1. 1

                                                          Have you spent any time with Apache CouchDB? There are a number of folks in that community that share opinions similar to those expressed in your writeup, and CouchDB’s feature set reflects that.

                                                        2. 2

                                                          I started thinking about it since I read the Telekommunist Manifesto

                                                          It, along with what was happening at the time, inspired me to create Fire★.

                                                          sadgit’s versioning distributed state machine is a fascinating idea. I am trying to think of all the primitives it requires.

                                                          1. 1

                                                            Frontend: Webserver configured to serve a GIT branch. Provides an API so that clients can commit.

                                                            Backend: engine that synchronises between GIT branch endpoints according to user specified validation policies. Could also do it manually, merging is pretty routine for many.

                                                            Peer location: Kademlia, GitHub, others, hopefully at the same time.

                                                            Edit: Thanks btw, glad you like it!

                                                        1. 2

                                                          Why would someone use a language in 2014 with such a restrictive license? Is it magic or something?

                                                          1. [Comment removed by author]

                                                            1. 2

                                                              My question wasn’t rhetorical. I am wondering what’s special about this language, and why I would use it over anything else (especially considering the weird license).

                                                            2. 3

                                                              That’s the point of the submission. The wacky license has been a barrier to adoption (see a decent HN discussion here), and the author is finally considering FOSS.

                                                              1. 1

                                                                Right; as far as I can tell, no one is using it with the current license, despite it having a fairly interesting type system.

                                                            1. 3

                                                              There’s a mailing list post discussing the campaign: https://groups.google.com/forum/#!topic/qilang/HBBjtIxegFY

                                                              1. 4

                                                                Guessing there’s no video to go along with this, since it’s from a course. Anyone know of other resources for learning more about GHC’s runtime?

                                                                EDIT: Slides 72 and 73 have some further reading, including this page.

                                                                1. 3

                                                                  x-post'ing my comment from HN, because this topic is really interesting to me:

                                                                  For the tl;dr crowd, here’s the key takeaway from the article: “Most companies promote workers into managerial positions because they seemingly deserve it, rather than because they have the talent for it. This practice doesn’t work.”

                                                                  Before I started a startup, I was a software engineer at a large firm, and it was clear they were grooming me for management because I was a strong individual contributor and had “put in my time”: 3 years as an engineer. Advancement at this firm was measured by “how many reports” you had, as in “direct reports”, or people managed by you, and if you just did superior individual work but had no one “under you”, you weren’t advancing. So they sent me to a couple of training courses about management and started prepping me for the path. This was one of the many reasons I quit this BigCo to start my own startup.

                                                                  I am now the co-founder & CTO of Parse.ly (http://parse.ly). In our first two years after starting up, I spent all my time building stuff – which is exactly what I wanted. Ironically, because the company has grown and now has a 13-person product team, I am now technically “managing” my engineering team with 13 “direct reports”. But at our company, we have completely decoupled management from individual contribution – certainly, if a strong individual contributor shows an interest in management, we’ll consider it. But becoming a “manager” is not how you “advance” here – you advance by doing great work.

                                                                  Our first employee who joined in 2009 is a great programmer and he is still with the company, but he’s still doing what he loves: building & shipping stuff. Based on our frank conversations on the topic, I think he would quit if I forced him to be a manager. The appropriate reward for doing great work isn’t a “promotion to management” – that’s actually a punishment for a great individual contributor. The right reward is to ensure you continue to provide an environment where that great work can continue for that contributor, and where they can continue to grow their skills and apply themselves productively in the role.

                                                                  1. 2

                                                                    It’s interesting that you point out your experience at a BigCo, since there are often career tracks there for ambitious engineers who want to remain engineers - IBM Fellow, for example - that startups generally lack. It seems a lot of engineers join startups and see two major ways “up”:

                                                                    1. Start a company and be the CTO

                                                                    2. Get promoted to VP of Engineering

                                                                    Success in either of those roles means you’re not engineering anymore. They’re both career changes, not promotions. I don’t think there’s any easy solution to this problem with regard to startups, but the industry needs something better than “Work at BigCo”.

                                                                    As software engineering matures I hope we gain a deeper understanding of what it means to grow as an individual engineer. Widespread ability to perceive the differences between a mature engineer and an immature one seems like a good place to start.

                                                                  1. 2

                                                                    Interesting idea, but focusing the discussion entirely on data loss and ignoring the other potential operational impacts of failures (especially non-catastrophic ones) seems a bit short-sighted.

                                                                    For example, consider a single machine failure in a simple quorum-based system using copysets. The replica set “partners” of that machine will experience a greater load increase than they would in a system that distributed data via random replication, since there is by definition more overlap between the individual machines' workloads. That is, the operational impact of non-catastrophic machine failures scales inversely with the “scatter width” of the machines in question.

                                                                    At large (10e3) scale this isn’t going to matter so much, but most people deploying these systems (including the authors' customers) are probably operating at much smaller scales, where non-catastrophic failures are a much bigger part of day-to-day systems management.

                                                                    N.B.: I haven’t read the paper.
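                                                                    To make the tradeoff concrete, here’s a toy simulation (my own sketch, not the paper’s model — the “copysets” here are just the simplest possible disjoint groups): random replication gives each node a large scatter width but many distinct replica groups, so a random simultaneous failure of R nodes is much more likely to wipe out some chunk; fixed copysets invert both properties.

```python
import random

def scatter_width(placements, n):
    # partners[i] = nodes sharing at least one chunk with node i;
    # return the average number of partners per node
    partners = {i: set() for i in range(n)}
    for group in placements:
        for a in group:
            partners[a] |= set(group) - {a}
    return sum(len(p) for p in partners.values()) / n

def loss_probability(placements, n, r, trials=10000):
    # fraction of random r-node failures that destroy every replica
    # of at least one chunk
    groups = {frozenset(g) for g in placements}
    losses = 0
    for _ in range(trials):
        failed = frozenset(random.sample(range(n), r))
        if any(g <= failed for g in groups):
            losses += 1
    return losses / trials

n, r, chunks = 30, 3, 2000
random.seed(0)

# random replication: every chunk picks r nodes independently
rand_place = [random.sample(range(n), r) for _ in range(chunks)]

# copyset-style: chunks are confined to a few fixed groups
copysets = [list(range(i, i + r)) for i in range(0, n, r)]
copy_place = [random.choice(copysets) for _ in range(chunks)]

print("random  : scatter width %.1f, loss prob %.3f"
      % (scatter_width(rand_place, n), loss_probability(rand_place, n, r)))
print("copysets: scatter width %.1f, loss prob %.3f"
      % (scatter_width(copy_place, n), loss_probability(copy_place, n, r)))
```

                                                                    The copyset placement shows a scatter width of only r−1 (so a failed node’s load lands on just two partners), while its catastrophic-loss probability is orders of magnitude lower — exactly the tension described above.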

                                                                    1. [Comment removed by author]

                                                                      1. 5

                                                                        We (I’m one of the authors on the linked article) introduced the notion of predecessor width precisely to enable capacity planning. If catastrophic events are not likely, or not a problem, you can set a high predecessor width, and increase the number of nodes from which each node recovers in the event of failure. By decreasing predecessor width, you decrease the chance of catastrophic loss, and decrease the number of nodes each node may use for recovery.
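                                                                        A rough illustration of that dial — this is my reading of the comment, not HyperDex’s actual placement algorithm: suppose each node’s replica partners may only be drawn from its w ring predecessors. Then w simultaneously bounds the number of distinct replica groups (catastrophic-loss exposure) and the number of nodes a recovering node can pull data from.

```python
from itertools import combinations

def ring_copysets(n, r, w):
    # hypothetical ring placement: node i may form a replica group
    # with any r-1 of its w ring predecessors, so a larger w yields
    # more distinct groups (more loss exposure) and more recovery sources
    groups = set()
    for i in range(n):
        preds = [(i - k) % n for k in range(1, w + 1)]
        for combo in combinations(preds, r - 1):
            groups.add(frozenset((i,) + combo))
    return groups

n, r = 20, 3
for w in (2, 4, 8):
    groups = ring_copysets(n, r, w)
    print("w=%d: %d distinct replica groups, %d recovery sources per node"
          % (w, len(groups), w))
```

                                                                        Turning w down shrinks the set of node combinations whose simultaneous failure can lose data, at the cost of concentrating recovery traffic on fewer machines — the same knob described for predecessor width above.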

                                                                        HyperDex will automatically heal itself when a node goes offline (and comes back), so it (hopefully) won’t wake you up at 3am.