1. 4

    My sense from using Scala professionally for the past approximately five years has been that most of the interest in building out tooling, infrastructure, etc. in that community is predominantly business-driven. That is, the Scala community tends to produce accelerators as a side effect of business need, not ars gratia artis, as many other communities do: Haskell, Rust, Go, etc. Or somebody produces something once, doesn’t update it, and doesn’t have the resources or skills to build a community around a much-used project. This is not a condemnation of maintainers who don’t maintain, but rather a failure of the Scala community to produce a group of curators — people who will take on the load when maintainers abdicate — as well formed and intentional as many other communities have, or a culture of curation. “Disarray” is too strong a word, but as someone who moves between four languages daily, I can compare communities and their resources pretty well, and I conclude that my technical decision to use Scala is sound while the political decision frustrates me at times.

    1. 4

      I think it couldn’t be further from the truth.

      The main source of the embarrassing cycle of “Hype – Failed Promises – Abandonment – The Shiny New Thing – Hype – …” is in fact coming from the academic side.

      To be clear: I’m not blaming students for dropping their work and completely disappearing the minute they hand in their thesis, leaving other people to deal with the consequences. That’s just the way it is.

      The problem is the ease with which they can get things added to the language/library, especially compared to the scrutiny outside contributions regularly receive.

      This pattern has repeated over and over, and I think it’s one of the unchangeable parts of the language/community.

      If you care about quality, documentation, and tooling, then Scala isn’t the right language for you, simply because “we managed to ship a new version of Scala without breaking every IDE” is not a topic you can write a paper about.

      1. 1

        This is a very important point and I’m glad that you wrote it out.

        Would you say that the Scala ecosystem has a continuity problem because businesses and academia are focused primarily on the now and not necessarily building the road ahead for the community and then maintaining those roads once built?

        1. 6

          Think of it like this:

          The core open-source community is the pizza dough that provides long-term stability and maintenance, and academia/businesses provide the toppings.

          Some languages have a large base of long-term, open-source contributors – they are family-sized pizzas which can accommodate a lot of different toppings. People can get their favorite piece, and everyone is happy.

          Scala is different. It has barely any substantial open-source contributors remaining – and everyone is fighting over the toppings to be placed on that coin-sized piece of pizza dough. As a result the kitchen is a complete mess, and nobody is happy.

          Scala’s problem is not the focus of businesses or academia, but that it doesn’t have any focus of its own.

          There is literally no one left (since Paul walked away) who is able to establish or uphold any kind of technical standards, or tell people “no, we are not adding another 6 new keywords to the language”. Heck, they couldn’t even get their compiler test suite green on anything newer than Java 8, but they kept publishing new versions anyway since 2017!

      2. 2

        conclude that my technical decision to use Scala is sound while the political decision frustrates me at times.

        Of course you would claim it’s sound, but is that objective? What do you base that decision on? Genuinely curious here.

        1. 2

          Thanks for asking. I based my decision mostly on architectural analysis with a healthy dose of personal experience. We’re building a new product with cloud and on-premises components. Originally, I’d intended to build everything from scratch, but my team identified some OSS that did 100% of what we needed, with some extra complexity in managing it. The trade-off was a much faster time to delivery in exchange for playing in someone else’s sandbox and by their rules. That aspect was worth it, but I lament not being able to write it all in Scala!

          However, our cloud services are written mostly in Scala. We have a few stateless microservices for which Scala was a great fit: high-performance OOTB for what within six months will be ~500 req/s and within two will likely exceed 4,000 req/s for one component with another being far more I/O bound. We’re integrating heavily with ecosystems that have Java libraries and we’ve found decent Scala wrappers that give us idiomaticity without building and maintaining them ourselves. We could have used just about any stack for these couple of services but Scala’s enabled us to express ourselves in types and exploit the advantages of functional programming. I’m chasing the holy grail of “it’s valid because it compiles” but we’ve got enough unit tests to complement our design that I’m pretty sure we’re on the right track.
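          To illustrate what “express ourselves in types” can mean in practice, here is a minimal, hypothetical Scala sketch of making an illegal state unrepresentable with a sealed ADT. The Order/Draft/Submitted names are purely illustrative, not from any real codebase.

```scala
// Hypothetical sketch: model an order's lifecycle as a sealed ADT so the
// compiler, rather than a runtime "status" flag, rules out invalid states.
sealed trait Order
final case class Draft(items: List[String]) extends Order
final case class Submitted(items: List[String], at: Long) extends Order

object Order {
  // The signature documents the rule: only a Draft can be submitted, and
  // the empty-order check lives at the one place it can be violated.
  def submit(d: Draft, now: Long): Either[String, Submitted] =
    if (d.items.isEmpty) Left("cannot submit an empty order")
    else Right(Submitted(d.items, now))
}
```

          Calling `Order.submit` with anything other than a `Draft` simply does not compile, which is a small taste of the “it’s valid because it compiles” ideal.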

          One notable failure was a service that is primarily a user-interactive web app. We had two false starts, with Scalatra (which we’re using for the other services) and Play!, before switching to Ruby on Rails (temporarily) out of frustration with the documentation, lack of examples, and lack of drop-ins like those commonly available in the Rails ecosystem, whence I came many moons ago. We chose Rails because of the component owner’s experience with it, as well as my experience with it and JRuby, knowing that if we started to implement any shareable logic, we could do so in a way that all of our apps could consume. We learned about Lift too late, and http4s and some others didn’t give us the right impression for a web app. I just learned about Udash last week and it may be a candidate for replacement. However, it’ll be several months: even uttering R-E-W-R-I-T-E would be kiboshed from on high at the moment, as the component does what we need right now.

          Moving forward, we’ll be looking at moving some of these services to http4s, etc. once more of my team is comfortable with more hardcore Scala FP. Writing AWS Lambda functions in Rust is also on my radar, as a part of our on-prem product is written in Rust.

          1. 1

            Out of curiosity, did you evaluate Elixir? The obvious draw is the Ruby-like syntax but it seems that the Phoenix framework has a great concurrency story on top of being a Rails-inspired ‘functional MVC’ framework.

            1. 1

              There was a running joke on my team and another about us throwing everything away and doing it all in Elixir! I mused about it a little but decided that we were better off in RoR because we can hire for it more easily if we ended up committing to the RoR implementation long-term and because we could run RoR on JRuby as a transition step back to Scala, should we have enough business logic to merit a shared implementation. So far, the latter hasn’t been the case since the app is 95% CRUD.

              I’d really like to see an analysis of the websocket concurrency story of Elixir, Scala, and Rust, with perhaps some others for more general applicability. Our app is using good ol’ fashioned HTTP requests right now but we’ve identified some opportunities to shave some transfer overhead by switching to websockets eventually.

              1. 1

                That’s funny. I guess Elixir has some visibility in this space. I’m planning to use it to build a proof-of-concept. For me it’s the developer experience, combined with the performance and concurrency profile. Here are some comparisons: https://hashrocket.com/blog/posts/websocket-shootout

                More results available here: https://github.com/hashrocket/websocket-shootout

                1. 1

                  Thanks for those. I’m sad that they’re out of date but the info is useful nonetheless.

        2. 2

          I don’t think that’s fair at all. There are a number of actively maintained projects that are at the center of the community.

          I do think there are relatively few well maintained libraries outside of that core, but I think that’s a result of the community being relatively small, and the escape hatch of having java libraries available to do almost everything.

          Community (non corporate) projects and organizations:

          • All typelevel projects
          • monix
          • Scalaz
          • Sbt
          • Ensime

          All widely used, all with large contributor bases.

          Additionally the alternative compilers, scalajs and scala native.

          1. 4

            At least from my perspective it seems that:

            • Typelevel is busier dealing with US politics than writing software,
            • Monix is largely a one-man show,
            • Scalaz is one of the main departure points for people moving to Haskell,
            • Sbt is so great that everyone is trying to replace it, and
            • Ensime …? You literally just read the article by the Ensime creator.
            1. 2

              I do think there are relatively few well maintained libraries outside of that core

              The Scala community does have a solid core and near-core extension community. I consider libraries like Monix, Cats, and Scalaz to be nearly part of the standard library because of how often they are used. sbt and Ensime are important but not exceptional: every stack needs a build tool and editor integration. These are solid now, and I appreciate the work that goes into them. Frankly, it wasn’t until sbt hit 1.0.0 that I considered it ready for widespread use, because of how obtuse and unergonomic its interface was before then. I’m eager to see what Li Haoyi’s mill will become.

              Things I’ve noted in the past that I’ve found in a less-than-desirable state compared to other stacks:

              • Internationalization - fragmented ecosystem, sbt plugins outdated
              • Authentication & Authorization - no nearly drop-in solution like Devise in the RoR ecosystem
              • Project websites being down for weeks because someone forgot to re-up the TLS cert, even in a LetsEncrypt automation world
              • Out-of-date documentation to the point of recommending dangerous practices, with little more than a “someone submit a PR to fix that”. I get that maintainers get busy, but when safety is the topic, is it acceptable to wait for a drive-by contributor to get it right and contribute the correction? What if they do that and then no maintainer merges it for years?

              The Scala Center has the promise of addressing much of it but I speculate that it’s insufficiently funded to be the ecosystem plumbers and teachers it aspires to be. I’ve been impressed with its work so far, though.

          1. 3

            Obvious connection with REST and why it was proposed.

            1. 3

              A+ issue A- article. Please read this and if it seems ridiculous or far fetched please think about why that might be your reaction.

              1. 11

                Well done. This is so important, and it is on all of us right now to choose to do the right thing.

                “Right now” because injustice is clearly being perpetrated by our government in ways that we know about, and on a scale that is growing, and threatens to grow into another stain on history.

                1. 8

                  While I think a lot of this advice is fine as far as it goes, it still hews to the model of candidates running a gauntlet of interviews. These have very little in common with real work, no matter how much you try to make the questions “work-like”.

                  If you’re a very large company - a Google or an Amazon - it makes sense to use a series of exercises where top performance in the exercises correlates with good job performance, and then try to interview every engineer in the world. Those companies have the scale to spend a lot of effort on interviewing and cream off the people who are not likely to be turkeys. Indeed, at their scale this is probably the most reliable way to do things.

                  For the rest of us, the problem is different: how do we find and attract good people without trying to encounter every engineer, how do we fill the positions we need to fill in a tractable amount of time, and how do we avoid bad people while not filtering out good people (if you’re small, you can’t really afford to make a bunch of type I or type II errors)? So the problem to solve for is: how do you figure out what it would be like to work with each candidate, having first filtered them in as possibly having the attributes you need, and then, based on what it would be like to work with them, decide whether they still look like the person you need.

                  These are some exercises I’ve seen that address this problem relatively directly:

                  • Take home exercise: some groups of engineers (e.g. web developers) won’t do them (mostly), while other engineers like data engineers and devops engineers seem to prefer them. These can run unnecessarily long unless the interviewers take the time to time trial it repeatedly internally. This can be done very well by the interviewers, if they actually take time testing and scoping seriously.
                  • Ask the candidate how they should be assessed. This has the advantage that the candidate should be at their best and, barring bad administration of the assessment, it should be very fair to them. You get information both about how they think, from the nature of the exercise, and about how they carry it out.
                  • Ask the candidate to explain some code of their own choosing. In my own experience, this is the single best way to figure out if a candidate actually understands what they do, or if they just sort of muddle through things. I’ve seen plenty of candidates who aced all other rounds reveal a fundamental lack of understanding. This also allows the whole team to join in the session. It also allows the whole team to potentially nudge or help out the candidate - this controls for the biases of a single or couple of interviewers

                  The one thing I will say is that a single live coding session should occur at some point in the process, as there are a very small number of candidates who will cheat on a take home, and in theory they could prep extensively with a coach for the explanation session.

                  1. 2

                    Your 3 suggestions are very good. In fact, I think they’d improve things at big tech companies too. There’s no such thing as “a series of exercises where top performance in the exercises correlates with good job performance”, evidently :(

                  1. 7

                    This is FUD. I won’t speculate as to why OSIA is doing this.

                    The relevant text (not having regard to the amendments in the CPTPP “wrapper” treaty, which I don’t think are relevant):

                    Article 14.17: Source Code

                    1. No Party shall require the transfer of, or access to, source code of software owned by a person of another Party, as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory.
                    2. For the purposes of this Article, software subject to paragraph 1 is limited to mass-market software or products containing such software and does not include software used for critical infrastructure.
                    3. Nothing in this Article shall preclude: (a) the inclusion or implementation of terms and conditions related to the provision of source code in commercially negotiated contracts; or (b) a Party from requiring the modification of source code of software necessary for that software to comply with laws or regulations which are not inconsistent with this Agreement.
                    4. This Article shall not be construed to affect requirements that relate to patent applications or granted patents, including any orders made by a judicial authority in relation to patent disputes, subject to safeguards against unauthorised disclosure under the law or practice of a Party.

                    The real meat is 14.17.1: No Party [signatory state] may require the transfer of, or access to, source code of software owned by a person of another Party [person of another party = person domiciled (the actual nature of the connection may be different) in another signatory state] as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory.

                    That key phrase again: No […] access to[,] source code […] as a condition for the import, distribution, sale or use of such software, or of products containing such software.

                    This is specifically a prohibition on a state requiring sellers of software to show anyone their source code in order to be allowed to “import” that software. It does not preclude any private party from including any license term, including one requiring the sharing of source code as a precondition for the distribution of derivative works (which is what the GPL seeks to protect). Even if this did somehow make the GPL unenforceable, I cannot see how it would in any way affect non-viral licenses like the Apache License.

                    14.17.2 confirms that reading: states may impose such a condition (of releasing the source) on the importation of software “used for critical infrastructure” (whatever that may mean).

                    Honestly, I think the only parties who should be worried about this provision are closed source makers of infrastructure items like routers or operating systems, as this specifically authorises the imposition of source disclosure requirements on such items.

                    The trend with such arrangements is that signatory states tend to align under a relatively common approach. Perhaps Windows will have to be open-sourced in the TPP zone; or perhaps there will be a series of side letters and agreements that prevent any forced source disclosure except under very defined circumstances.

                    The OSIA documents linked in the article do not include any legal analysis as to why they think 14.17 prohibits the enforcement of source code sharing provisions in licences. (http://osia.com.au/f/osia_sub_201805_sscfadt.pdf and http://osia.com.au/f/osia_cptpp_pr6a.pdf).

                    Academic disclaimer: I have not performed a full analysis of the whole text.

                    Legal disclaimer: this is not legal advice, this is an off-hand academic analysis by an internet random. This does not indicate any course of action for any person, and is not in any way intended to apply to your, or anyone else’s situation.