This makes me very, very thankful that the Rust team places such an emphasis on good documentation. It is so easy to let documentation fall by the wayside, and once commercial training providers pop up, it’s a lot harder to get documentation efforts going (because now you have businesses with a vested interest in the documentation remaining bad). Good documentation doesn’t just happen. It takes serious work and real buy-in from stakeholders, where they actually believe it is important to invest the time, money, and energy to do it right.
It has certainly been a pleasure to have the support of the rest of the organization (both Mozilla and non-Mozilla) here. They’ve always said “we need good docs, and that means paying someone for them,” and it certainly would go much, much slower if I had some other job.
Thanks for all the work! Your dedication really shows.
Nothing advances language growth like good documentation (perhaps a fabulous, welcoming community, but Rust also has that).
This has been my observation as well. As soon as you get a for-profit company that makes its money by selling “professional services” running a project, the documentation always falls by the wayside. Typesafe (or Lightbend, as they prefer to be called now) is no exception here - and why should they be? Their business model depends on them pumping out ShinyNewThings as fast as possible and then selling consulting services. It really shows in the Scala ecosystem: so many Lightbend projects with flashy webpages touting their Reactive Big Data Synergy, and then the UX for them is terrible.
Meanwhile, the not-for-profit communities behind Rust, Clojure, Python, Elixir, etc. put much more emphasis on delivering a smaller set of composable building blocks with thorough documentation.
I know the author of the post in the Google Groups thread says he/she doesn’t believe this is the case, but I’ve yet to see an exception here; it absolutely is not specific to Scala.
I think that’s the best explanation - not so much having a vested interest in bad documentation so they can sell training, but that features will always have a higher return on investment than documentation, so that’s where time and energy get focused.
It also helps that Rust’s design is a lot cleaner than Scala’s. While both Rust and Scala are larger languages than most, in Rust, every language feature has a clear unique purpose, and it would be very hard to achieve all of Rust’s design goals with a smaller language. On the other hand, Scala is full of features that were thrown in just because they initially seemed like a good idea.
Scala is full of features that were thrown in just because they initially seemed like a good idea
Could you mention a few?
Subclasses, traits and implicits: They all serve overlapping purposes (variations on ad-hoc polymorphism), which suggests they should be merged into a single feature. (Please don’t bring up Java compatibility as an excuse.)
Case classes make it easier to manipulate value objects by value, but their physical object identities are still there, waiting to be accidentally used. Instead, Scala could and should have provided actual value types. (Again, please don’t use Java compatibility as an excuse.)
Extractors are inelegant: They hard-code support for a very specific use case into the core language, and they make pattern matching exhaustiveness checking unnecessarily difficult. If you actually want to enhance the expressive power of pattern matching, Haskell’s pattern guards are a superior solution.
Though I think we could link some of these features together, Java compatibility is a major design goal of Scala. Ignoring the reason a feature exists makes it easy to dismiss it as bad.
I also think that Scala follows a very C++-style design philosophy. Throwing in a huge number of features gives people flexibility so long as they know what they’re doing.
As to whether this is a good idea… depends on who you ask ;)
Subclasses, traits and implicits: They all serve overlapping purposes (variations on ad-hoc polymorphism), which suggests they should be merged into a single feature.
Classes and traits offer a clean distinction between classes that have initialization and classes that don’t. Having used a language without it, this distinction is essential to having practical multiple inheritance; I wish other languages would adopt it.
Implicits alone couldn’t offer the same functionality as inheritance. I hope there’s a better design “out there” - something that offers the functionality of both - but I’ve never seen it.
Case classes make it easier to manipulate value objects by value, but their physical object identities are still there, waiting to be accidentally used.
Where? What’s the difference? I mean sure you could call System.identityHashCode on a case class and get unpleasant behaviour, but you wouldn’t do that by accident.
Classes and traits offer a clean distinction between classes that have initialization and classes that don’t.
Why do you need this distinction in the first place? In OCaml, heck, in C++, a class without initialization is just… a class without initialization. Going even further, in Eiffel, all effective classes have creation procedures, it’s just that some classes have empty ones.
Having used a language without it, this distinction is essential to having practical multiple inheritance
You’re conflating issues here. The linked article describes the unfortunate consequences of Python’s superclass linearization strategy for modularity: embedding one class hierarchy into another breaks the chain of superclass methods reached by repeatedly calling super. But this isn’t specifically related to initialization: it causes problems for normal (non-constructor) method calls as well.
A good starting point would be dissecting inheritance into multiple features, each of which does one thing and does it well.
Where? What’s the difference?
You can call eq on Options and Lists. How does this make sense?
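A minimal illustration of the identity leak (hypothetical snippet): two structurally equal `Option` values are still two distinct heap objects, and `eq` happily exposes that.

```scala
object EqDemo {
  val a: Option[Int] = Some(1)
  val b: Option[Int] = Some(1)

  val structural: Boolean = a == b  // case-class equality compares contents
  val sameRef: Boolean    = a eq b  // reference identity: two separate allocations
}
```

Nothing stops a caller from branching on `sameRef`, even though for a value-like type only `structural` ever makes sense.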
The linked article describes the unfortunate consequences of Python’s superclass linearization strategy for modularity: embedding one class hierarchy into another breaks the chain of superclass methods reached by repeatedly calling super. But this isn’t specifically related to initialization: it causes problems for normal (non-constructor) method calls as well.
In theory yes. In practice __init__ is where the problem happens, 99.9% of the time. Many languages feel these problems are severe enough to ban multiple inheritance outright; I find the Scala approach strikes the best balance (a class may inherit from multiple classes, but from at most one class that requires initialization), and the class/trait distinction is the simple way to implement that.
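The rule can be sketched in a few lines (names hypothetical): traits carry no constructor arguments, so a class may mix in as many as it likes while extending at most one superclass that requires initialization.

```scala
object MixinDemo {
  trait Logging { def tag: String = "[log] " }                          // no constructor
  trait Closeable { var closed = false; def close(): Unit = closed = true } // no constructor

  class Connection(val host: String)  // the one superclass with initialization

  // Legal: one initialized superclass, any number of traits mixed in.
  class DbConnection(host: String)
    extends Connection(host) with Logging with Closeable
}
```

Because only `Connection` has a constructor to run, there is never any ambiguity about whose initialization happens when.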
A starting point isn’t enough - Scala is a production language, not a research language. Choosing a mature approach over a supposedly better but unproven one is not bad design.
It makes exactly as much sense as calling eq ever does.
It isn’t often that I say C++ makes sense, but, in this particular regard, it does: when an object of a base class is being constructed, no object of the derived class exists yet, so virtual member function calls inside a base class constructor are resolved to the implementation provided by the base class: http://ideone.com/Ytr6xm . Even if you use virtual inheritance: http://ideone.com/zvUbI5 .
On the other hand, Java and Scala take the position that, even inside base class constructors, method calls must resolve to the implementation provided by the derived class: http://ideone.com/zv7iOq , http://ideone.com/uw1F43 . This is awkward precisely because it creates the initialization issues you mention - you could be calling a method of a class whose initialization logic hasn’t yet run.
To summarize: In C++, the constructor is what creates an object in the first place. In Java and Scala, the constructor is what runs immediately after the object has been created. The latter is an inferior design, because there exists a point in time, between object creation and initialization, in which the object is in a bogus state.
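The Java/Scala gotcha described above can be reproduced in a few lines of Scala (names are illustrative): a method call in the superclass constructor dispatches to the subclass override, which reads a field that has not been initialized yet.

```scala
object InitOrderDemo {
  abstract class Base {
    // Base's constructor runs first, and this call dispatches to the
    // override in Derived - before Derived's fields are initialized.
    val label: String = describe()
    def describe(): String = "base"
  }

  class Derived extends Base {
    val name: String = "derived"
    override def describe(): String = name  // sees null during Base's constructor
  }
}
```

`new InitOrderDemo.Derived().label` ends up as `null`: the object exists and its methods are callable, but its initialization logic has not run - exactly the bogus intermediate state being criticized.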
when an object of a base class is being constructed, no object of the derived class exists yet, so virtual member function calls inside a base class constructor are resolved to the implementation provided by the base class: http://ideone.com/Ytr6xm . Even if you use virtual inheritance: http://ideone.com/zvUbI5 .
This is very confusing behaviour too. There’s no perfect answer here (except perhaps the checker framework with @Raw); I don’t think I’d call one approach inferior to the other.
An object is a collection of methods that operate on a hidden data structure. There are two important things about a data structure: its invariants and the asymptotic complexity of its operations. Leaving the latter aside, the role of an object constructor is to establish the object’s internal invariants, which all other methods must preserve. Viewed under this light, the behavior of virtual member function calls inside constructors in C++ is the Right Thing ™.
Subclasses, traits and implicits
I think lmm gave a good answer already.
On top of that, I think that making up the requirement of merging typeclasses with dynamic dispatch is kind of a tall order given that Haskell itself can’t even manage to get type classes working in isolation.
Scala could and should have provided actual value types
Scala does provide value types. They are completely orthogonal to case classes.
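For reference, a minimal value-class sketch (names hypothetical): extending `AnyVal` with a single `val` parameter lets the compiler erase the wrapper to the underlying primitive in most uses.

```scala
object ValueTypeDemo {
  // A value class: no identity of its own; usually no allocation at runtime.
  class Meters(val value: Double) extends AnyVal {
    def +(other: Meters): Meters = new Meters(value + other.value)
  }

  val total: Meters = new Meters(1.5) + new Meters(2.5)
}
```

Note that this mechanism is indeed orthogonal to case classes: a value class gets no pattern matching or structural `equals` for free unless you also mark it as a case class.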
If you actually want to enhance the expressive power of pattern matching, Haskell’s pattern guards are a superior solution.
This looks like the for-comprehensions Scala has had since day one.
merging typeclasses with dynamic dispatch is kind of a tall order given that Haskell itself can’t even manage to get type classes working in isolation.
I have no idea what you mean by “get type classes working in isolation”, but I’m pretty sure that, if you create an existential package where the existentially quantified type variable has a type class constraint, the methods of said type class are dynamically dispatched.
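The same construction can be sketched in Scala (names hypothetical): pack a value together with its typeclass instance behind an abstract type member, and the typeclass method is dispatched dynamically per element.

```scala
object ExistentialDemo {
  trait Show[A] { def show(a: A): String }

  // An "existential package": a value bundled with its Show instance,
  // with the concrete type hidden behind the abstract member T.
  trait Showable {
    type T
    def value: T
    def instance: Show[T]
    def render: String = instance.show(value)
  }

  def pack[A](a: A)(implicit s: Show[A]): Showable =
    new Showable { type T = A; val value = a; val instance = s }

  implicit val showInt: Show[Int] =
    new Show[Int] { def show(a: Int) = s"Int($a)" }
  implicit val showString: Show[String] =
    new Show[String] { def show(a: String) = s"Str($a)" }

  // A heterogeneous list: each element dispatches through its own instance.
  val rendered: List[String] = List(pack(1), pack("a")).map(_.render)
}
```

Each `Showable` carries its own `instance`, so `render` is resolved at runtime per element - which is exactly the dynamically dispatched behavior claimed for Haskell's constrained existentials.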
Okay, then the question is - why aren’t Option, List, etc. value types, when they clearly only make sense when used as value types?
Pattern guards have nothing to do with monads. All a pattern guard does is produce values that can be used in the right-hand side of a pattern matching arm:
insert x (t:u:ts) | Just v <- node x t u = v : ts
insert x xs = leaf x : xs
If node x t u evaluates to Nothing, then insert x (t:u:ts) evaluates to leaf x : t : u : ts.
(because now you have businesses with a vested interest in the documentation remaining bad)
I’ve seen this sentiment a bunch, is it really that common of a thing, or are people just getting angry at documentation and rationalizing it as EvilCompany trying to sell services?
E.g. I work at Elastic, and from time to time people complain about our documentation. And sometimes they claim it’s bad on purpose, because we want folks to buy services (these allegations almost always correlate with rageful tweets from people who ignore active attempts at help, fwiw).
I can 100% say that’s not the case for us… it’s just a part of the documentation that’s bad, or old and poorly worded. Our docs are in our GitHub repo, we wrote a book and OSS’d it, and we have several full-time technical writers on staff (who are distinct from our education/consulting teams). We recently added checks that run code snippets in the docs, and fail the build if you break them. Etc etc.
So I wonder if this sentiment is really justified, or if perhaps software just often has crappy documentation in places, entirely unrelated to offering services? Writing good documentation is hard, and good presentation of those docs often spans multiple departments (engineering for technical accuracy, marketing/web for proper integration into the site, infra if it requires special features like online REPL, etc).
I dunno, having been on the sharp end of the documentation stick, I can appreciate it isn’t as simple as “your docs suck because you make money on services”. I think people underestimate the work that goes into good documentation. And how quickly good docs turn bad due to bitrot.
Note: I know nothing about Scala, so it may really be the case :)
So I wonder if this sentiment is really justified, or if perhaps software just often has crappy documentation in places, entirely unrelated to offering services?
I think it’s entirely likely that there’s no actual human from the business who looks at the docs situation and says “well, better not improve those; that would go against the best interests for the company”. That doesn’t mean there aren’t emergent factors at play which subtly incentivize other things over documentation which wouldn’t be there if the revenue was structured a different way. You don’t need ill intentions in order for this sentiment to be justified.
Looking on from afar, it feels to me like the Scala community can’t figure itself out sometimes.