I wonder if the author has heard of Shen
Given its dubious licensing history and bizarre current licensing statement, I’d advise them to avoid even reading any of the Shen sources, just in case.
It’s currently BSD licensed, so even if future versions change licenses, the current version is permanently free.
Mark Tarver in his ivory tower never really “got” FOSS culture, but his initial license was free-ish even if it was weird. I’m glad he was talked into BSD.
The current licensing statement is a BSD license coupled with an explicit non-grant of permission to distribute under the GPL. Since the BSD license itself permits redistribution under the GPL, the statement is incoherent, and as such I wouldn't want to rely on it: who knows what a court might say?
Ugh, really? It seems he still doesn’t get it.
Mark Tarver actually makes a reasonable case against the GPL here. I’m not sure if Tarver is necessarily correct, but his ideas at least aren’t “bizarre” or “dubious”.
Is Shen ready for usage? I couldn’t even find out how to install it.
It’s certainly ready for tinkering.
tl;dr of the layers: the kernel is KLambda, a tiny Lisp of 46 primitive functions that gets ported to each host platform; on top of that sit Shen-YACC (a compiler-compiler), Shen-Prolog, and the type checker, which is itself written using the Prolog.
You basically just answered my question about a clean-slate implementation. As in, such a minimal LISP using things like Prolog should be really easy to code for a LISP developer. Based on hga’s comments, I’d say just create a new one that’s full BSD or especially GPL.
Easy enough conceptually. The stack is a cool innovation which deserves to be mimicked, but the greater part of Mark's work on Shen is the type checker, based on something called the sequent calculus, which I do not understand in the slightest. Reading Shen's history, the story of Shen is really the story of its sequent-calculus-powered type checker.
You can find more detail in The Book of Shen. Here is the pdf table of contents. The type checker is discussed in chapters 24 and 25.
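For readers unfamiliar with the term: a sequent calculus manipulates judgments of the form Γ ⊢ A ("from hypotheses Γ, conclude A"), and a type checker built in this style derives typing sequents by inference rules. The general shape looks like the following rule for function application (the generic textbook form only, not Shen's actual rules):

```latex
% A sequent \Gamma \vdash e : A reads "under hypotheses \Gamma,
% the expression e has type A".
\frac{\Gamma \vdash f : A \rightarrow B \qquad \Gamma \vdash x : A}
     {\Gamma \vdash (f\;x) : B}
```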
[Comment removed by author]
Yeah. I think language implementations need to be non- or weak-copyleft because rightly or wrongly corporations are scared of the GPL and unwilling to contribute to GPLed projects. Of course that leaves open the question of who stands to make any money from implementing languages, and how.
We could also consider the LGPL with static linking allowed. Companies would contribute any changes to the library itself back to it, but would otherwise be isolated from the problems. I've heard the MPL works that way, but I haven't read it carefully. It might get companies used to the idea.
I haven’t met a viral copyleft license that wasn’t complicated.
At the top of the page there’s a download link to several flavors. http://shenlanguage.org/download_form.html
Apparently downloading it is complicated because it has so many possible implementations.
I wonder if Shen finally checks pattern matching exhaustiveness during type checking.
The Shen type system is designed to be open, not closed (and a closed system is necessary for exhaustiveness checks). But you can write your own algebraic type system on top of Shen that does check pattern matching for exhaustiveness; see this thread.
There are three problems with your linked thread:
My use case is as follows: I’m writing a library of data structures and algorithms, together with (manual) proofs of correctness of their implementations. I have the following design constraints:
Unreachable control flow paths are forbidden. Inexhaustive pattern matching, raising ThisWontHappen, calling partial functions (List.hd, List.tl), returning dummy values, and deliberately looping forever in a routine that is supposed to terminate are all symptoms of this problem. Enforcing this policy requires a strict division of labor between the type checker and myself: the type checker ensures the exhaustiveness of my case analyses, and I prove that each case has been dealt with correctly.
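To make the contrast concrete (an OCaml sketch of my own, since the library itself isn't shown here): the compiler can verify that a match over a datatype covers every constructor, whereas a call to the partial List.hd compiles silently and only fails at runtime on [].

```ocaml
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

(* Total: every constructor is covered, and the compiler checks this --
   deleting any case triggers an inexhaustive-match warning/error. *)
let rec min_elt : 'a tree -> 'a option = function
  | Leaf -> None
  | Node (Leaf, v, _) -> Some v
  | Node (l, _, _) -> min_elt l

(* Partial: compiles fine, but List.hd raises Failure "hd" on [] at
   runtime -- exactly the unreachable-path smell forbidden above. *)
let first_or_crash xs = List.hd xs
```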
Unrelated concerns must be dealt with separately. For example, in a search tree implementation, there are two unrelated concerns: how to prevent trees from becoming too unbalanced, and how to ensure that the in-order traversal of any tree produces a monotonic sequence. The former depends on the internal representation of trees, but the latter only depends on the ability to perform step-by-step binary search and insert/update/delete an element at the current position. Thus, the latter must be implemented once and only once, in a manner agnostic to the details of the internal representation of trees.
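One way to realize this separation in an ML (a sketch with invented names, not my actual design): fix a small signature of search positions, so that ordering logic is written once against positions while the representation and balancing strategy stay hidden behind the signature. Here the demonstration implementation is deliberately naive and unbalanced; a balanced tree could implement the same signature unchanged.

```ocaml
(* Hypothetical interface: step-by-step binary search yields a
   position; insert/update happen at the position. *)
module type CURSOR = sig
  type t                        (* abstract tree representation *)
  type pos                      (* a position reached by binary search *)
  val empty  : t
  val find   : int -> t -> pos
  val get    : pos -> int option
  val insert : int -> pos -> t
end

(* A naive, unbalanced implementation using a zipper as the position. *)
module Unbalanced : CURSOR = struct
  type t = Leaf | Node of t * int * t
  type ctx = Top | L of ctx * int * t | R of t * int * ctx
  type pos = t * ctx            (* subtree reached + path to rebuild *)
  let empty = Leaf
  let find k t =
    let rec go t ctx = match t with
      | Leaf -> (t, ctx)
      | Node (l, v, r) ->
        if k = v then (t, ctx)
        else if k < v then go l (L (ctx, v, r))
        else go r (R (l, v, ctx))
    in
    go t Top
  let get = function
    | (Node (_, v, _), _) -> Some v
    | (Leaf, _) -> None
  let rec zip t ctx = match ctx with
    | Top -> t
    | L (c, v, r) -> zip (Node (t, v, r)) c
    | R (l, v, c) -> zip (Node (l, v, t)) c
  let insert k (t, ctx) = match t with
    | Leaf -> zip (Node (Leaf, k, Leaf)) ctx
    | Node (l, _, r) -> zip (Node (l, k, r)) ctx
end
```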
Finally, economy of effort. Ceteris paribus, simpler programs and proofs are preferable to complex ones. Also, the aforementioned division of labor must be tailored to exploit the strengths and avoid running into the weaknesses of each party. Computers are much better than humans at exhaustive enumeration, but humans are better than computers at equational reasoning (computers can get stuck exploring an infinite space of applicable rewriting rules, where human insight would let you pick the right one).
These constraints have pretty much forced me to pick between Standard ML and OCaml. (But note that Hindley-Milner type inference was not a factor in the decision!) I ultimately picked Standard ML because its treatment of equality and recursive values is less wonky.
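The "wonky" point can be made concrete (my illustration, not part of the original argument): OCaml's polymorphic equality on functions is a runtime error, where SML's equality types would reject the comparison at compile time; OCaml also admits cyclic recursive values, which structural equality can then loop on.

```ocaml
let () =
  (* Comparing two distinct closures with (=) raises at runtime in
     OCaml; SML rejects such comparisons statically via equality types. *)
  let raised =
    try ignore ((fun x -> x + 1) = (fun x -> x + 1)); false
    with Invalid_argument _ -> true
  in
  assert raised;
  (* OCaml also permits cyclic recursive values like this one. *)
  let rec ones = 1 :: ones in
  assert (List.hd ones = 1)
```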