Am I reading this accurately?
Something doesn't add up here.
I upvoted this comment tree, but I'll still offer a semi-counter. The language has already established that it has reasonable performance versus mainstream, Web-oriented languages, plus more safety than many. The common gripes about it are the lack of a single, good standard library and weak tooling; points 2-4 fit those gripes. That's even expected, since it's a language developed by academics who wrote compilers and do formal verification, built for use by those same types and extended for convenience by them and a thriving community. Yet these main problems remain because of... apathy, people not adopting Jane Street's stuff, or being outside the original scope? Idk.
The starting point isn't countered by the rest, since it applies only to the foundation the language provides. That's good. OCaml just needs work on the library and tooling side to make it more suitable for the real-world applications it wasn't necessarily designed for. Meanwhile, the high-assurance sector is getting a lot of mileage out of languages like OCaml by using them where their attributes are strong. Esterel's report is a nice example, with Section 3 being enlightening:
When I read,
OCaml was initially introduced for its execution speed and ease of refactoring.
I wonder whether those characterizations of OCaml were based on their own experience and measurements against a defined target, or simply a restatement of commonly held beliefs. I've certainly read both statements elsewhere, but I rarely read of people or companies running in-house experiments to make such decisions. Google on disk performance and lifespan, yes. Jane Street on OCaml, probably. Here?
I used OCaml for a proof-of-concept of a large networked application many years ago, but unpredictable performance under high load, combined with difficulty tracking down the source, was prohibitive. Concept proved, I rewrote it all in C and lived happily ever after.
So they would gladly trade 10x perf for debuggability
The article doesn't say they would accept a 10x performance decrease in general, only a decrease when handling an exception. Here is the article's quote:
We would happily accept an order-of-magnitude performance decrease in exception-handling performance if we could have better stack traces instead.
Stack overflows are relatively easy to debug even with a truncated stack trace.
(and as /u/ngrilly says, the 10x perf was specifically about exception throwing/catching, not in general)
Also the sad reality of the industry is that the bar for “safe” is ridiculously low.
But the standard library stack overflows
The compiler's standard library does, but it's trivial to write your own, and in fact many people do. The compiled code is fast. If I remember correctly, the reason the Pervasives functions are non-tail-recursive is performance in the average case, where inputs are small enough not to overflow.
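To make the trade-off concrete, here is a minimal sketch (assuming a recent OCaml; `map_tr` is a hypothetical name, not a stdlib function) of the tail-recursive rewrite people reach for. The stdlib's `List.map` uses one stack frame per element, which is fast on short lists but can raise `Stack_overflow` on very long ones; the rewrite accumulates in reverse and pays for an extra `List.rev` pass instead:

```ocaml
(* Tail-recursive map: constant stack space, at the cost of building
   the list in reverse and reversing it at the end. *)
let map_tr f xs =
  let rec go acc = function
    | [] -> List.rev acc
    | x :: rest -> go (f x :: acc) rest
  in
  go [] xs

let () =
  (* A million elements: map_tr runs in constant stack space, while
     List.map may raise Stack_overflow here depending on stack limits. *)
  let n = 1_000_000 in
  let big = List.init n (fun i -> i) in
  assert (List.length (map_tr succ big) = n)
```

The accumulate-and-reverse pattern is the standard workaround; the stdlib keeps the non-tail-recursive version because it avoids the second pass on the small inputs that dominate in practice.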
If it’s “trivial” to write, why not include a decent version in the first place?
“Most people rewrite the standard library” is not a great endorsement of a language.
Agreed - that's exactly the kind of community fragmentation Common Lisp is often criticized for. It means people can't read each other's code, and integration is much harder. It seemed perfectly fine for a while, but it's a core reason Clojure is far more talked about today.