1. 9

    I think this was a pretty good comparison; often these articles end up with “yeah, but you didn’t write it idiomatically in anything so the whole thing is flawed”. All of these looked pretty good to me.

    I agree with Rust being fun, but sometimes it feels like the fun is in doing mental gymnastics rather than actually getting things working. It’s unclear to me whether that results in healthier code or not. I’d appreciate any replies from Rustaceans on that one :)

    One thing the article touches on is async/await. Just today I was fiddling with running a container on k8s and I needed it to listen on two sockets: one HTTP for health checks, the other gRPC. I knew how to do this in Go just fine: go httpServer(); go grpcServer(), and all is done. Not having this in Rust turned a seemingly trivial task into a lot of hoop-jumping. Rust dearly needs async/await IMHO; I hope they can land it soon.

    1. 11

      I agree with Rust being fun, but sometimes it feels like the fun is in doing mental gymnastics

      The gymnastics pay off in refactoring. That’s hard to show in blogpost-sized codebases, but the rigidness of Rust helps to keep invariants together in larger codebases (including traditionally difficult things, such as thread safety).

      I find this leads to a qualitatively different kind of bug. My Rust bugs are, for lack of a better term, domain-specific. When something goes wrong, it’s because I haven’t finished implementing it, or it works, but not according to the spec. In Go, C and JS, by contrast, I also get tons of boring language-level bugs, such as “nil/null/undefined is not a function”.

      1. 2

        That sounds a great deal like the way people describe Haskell, fwiw.

      2. 4

        I tend to think of async as mostly being a heavy-handed latency-throughput tradeoff favoring latency that is really only worth the severe ergonomic hit if you’re building a load balancer. But pretty much everyone else in the rust ecosystem tends to disagree with me :P

        I agree that with Rust I spend more time indulging in complexity. It’s sometimes a problem in the ecosystem, where it’s not uncommon for projects to pull in hundreds of dependencies for a relatively simple thing. I really appreciate Go’s strong emphasis on avoiding dependencies, although I dislike how it got there: tooling indifference from the controlling entity. I hate how I type thousands of lines of Go to get anything of medium or higher complexity written. In Rust I can sometimes type extremely little to get a lot done, without a ton of soundness gaps, but it took me a while to get there. I think as more people accumulate a few years of Rust experience there will be more emphasis on tightening things down. Python and Go have had some nice cultural movements emphasizing parsimony, and I don’t think Rust has gotten there yet.

        I’m a human who appreciates complexity (inherent complexity, I say arrogantly to myself as I busily type out a lock-free storage engine for dubious performance benefits lol) and I do better with languages that let me indulge this aspect of myself. I use Python for prototyping things, Go for bit shovels and simple services, and Rust for performance intensive systems. Rust is the one that makes me feel the best to write. If you love your tools, you’ll be more productive.

        1. 3

          I agree with Rust being fun, but sometimes it feels like the fun is in doing mental gymnastics rather than actually getting things working. It’s unclear to me whether that results in healthier code or not. I’d appreciate any replies from Rustaceans on that one :)

          Pretty much agree, and I had a similar experience with Haskell. I think it does result in healthier code, but not by much. Sometimes (not very often) you will want to optimize for code health, and in those cases these tools are a good match.

          I’m drifting towards Nim myself. It seems to be a better compromise between pragmatic safety features, performance, stability and code clarity.

        1. 2

          I submitted this not really because I agree with it, but because I was curious whether any lobsters out there have strong feelings. I have been using other languages for a while and haven’t stayed current with recent developments. I generally still like ruby, use it fairly regularly, and actually kind of like the constancy of it. I do, however, find myself reaching for other tools more often lately.

          However, the claim here that ruby core is working against the general sentiment of the community at large, while examples were provided, seemed slightly overblown. So I was curious whether others here felt this is the case or not.

          1. 10

            Ruby’s dead. How’s that for a strong feeling?

            I wish I could be more articulate, but it seems an inexorable conclusion of two forces. One, javascript is the language of the browser. Two, people have been making javascript as easy as possible for decades now.

            Notice there are a lot of kids on http://repl.it/talk … I wonder how many of them are genuinely trying to learn ruby rather than JS?

            Look, I’m no JS zealot. I’m more of a “skate where the puck is going to be” kinda guy. And the ruby players on the rink have been skating toward other places.

            Rails was incredible. I remember how magical that first demo was. You could do in ten minutes what took days in PHP.

            Nowadays, doing that is sometimes as easy as running “now”.

            Oh boy, there I go again expressing super strong confrontational opinions… For what it’s worth, if I’m wrong, you’ll get the satisfaction of rolling your eyes and watching Ruby eat the world over the next decade. But it just doesn’t feel like that’s the world we’ll end up in. Pull up any visualization of “let’s measure the popularity of programming ecosystems” and it’ll look a lot like JS and Python won. https://www.youtube.com/watch?v=wS00HiToIuc

            1. 10

              As a ~10 year experienced engineer with the bulk of that time in Rails currently looking elsewhere, I will say: the job market would seem to indicate that Ruby is very much alive and well. I’m curious about the JS comment, as I feel like Ruby has never been the language of the browser.

              That said, the mid-to-upper tier web companies these days seem to be doing server side development in Go more often than Rails. I’m looking to change only because I want to learn something new. Ruby/Rails if anything has slowed down its formerly frenetic rate of change, performance has increased to a respectable degree and the ecosystem is filled with robust libraries, a good testing culture… Rails is by no means a terrible stack to work in. YMMV of course. Edit: I do agree with the examples in the original article… the pipe operator in particular strikes me as an ugly solution looking in vain for a problem.

              But I find the “Javascript is taking Ruby’s place” remarks very confusing, as Ruby is a server-side language, and Go seems also to have stolen the server-side market share from Nodejs.

              1. 6

                as I feel like Ruby has never been the language of the browser.

                Whoops. My point was, JS is the language of the browser. If you want to do much of anything with “webpage + browser,” you need to know JS.

                That means if you know JS, you can largely get by without knowing anything else. Or rather, you can learn whatever else you need to learn as you go.

                job market

                We’re so lucky that the job market gives us so many options. I totally agree; I didn’t mean to imply that if you’re a ruby dev, you should worry about your career prospects. So much of the world was built on Rails that you’ll probably be able to find work for a long time.

                All I meant was, younger people don’t seem to be interested in learning Ruby. When those younger people become the older people, and the current older people gradually leave, the world changes.

                If that sounds grim, just be glad you’re not a Lisp guy like me. It’s almost painful to watch everything not use the power of compile-time macros. But at least I get to use it myself.

                Go seems also to have stolen the server-side market share from Nodejs.

                You’re right that Go has had some surprising momentum here. Much of repl.it is apparently built on Go. But the advantage of JS is the ten hundred million libraries that exist to solve every problem ever thought of. (More than a little hyperbole here, but it’s not far from the truth.) If you need to do X and you happen to be using JS, you don’t have to read any docs. You can just type “do X in Javascript” into google, and google turns up an npm package for X, for all basic X. Other languages will always be second place in convenience, for this reason.

                Super Serious Projects will tend to be written by people who want absolute type safety, clearly-defined module boundaries, and never to see an error. Hell, golang doesn’t even have a REPL. And anyone who’s had to go without one will tell you that’s a serious disadvantage.

                1. 9

                  the advantage of JS is the ten hundred million libraries that exist to solve all problems ever thought of

                  As someone who has done quite a bit of both Go and JS, this is emphatically not a point where JS wins.

                  There’s a bigger number of packages, but bitter experience has not been kind to my trust in ‘this JS package has lots of downloads and github stars and a nice website so it probably works OK’.

                  1. 1

                    “worse is better, lol.” https://www.jwz.org/doc/worse-is-better.html

                    Most people see a package and expect a solution. But each package solves N% of whatever problem you’re facing. (N is usually somewhere between -10 and 97.)

                    As much as I love to write code, I love getting things done quickly without introducing major problems. npm i foo tends to work pretty well for that.

                    Hey, cool trick. npm install sweetiekit-dom and then do require('sweetiekit-dom')(). Now window, document, etc all exist, just like node is a chrome repl. You can even do window.location.href = 'https://google.com' and it’ll load it! console.log(document);. Unrelated to the convo, but I can’t get over how neat it is.

                  2. 3

                    So I decided to test “do soundex [1] in Javascript” just to see if it was true. Yup, second entry on the results page. I checked it out, and found an error: “Ashcroft” encodes to A261, not A226. And given that the page it’s on is a gist, there’s no way to report the error.

                    [1] Why Soundex? Well, I used it years ago in an online Bible to correct Bible book names (such that http://bible.conman.org/kj/estor will redirect properly to http://bible.conman.org/kj/Esther).
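                    The part naive implementations usually slip up on is the h/w rule: consonants with the same code separated by “h” or “w” collapse into one digit, which is why Ashcroft is A261 rather than A226. A small Go transcription of American Soundex (my own sketch, not the gist’s code):

```go
package main

import (
	"fmt"
	"strings"
)

// soundex is a sketch of American Soundex, including the rule that
// same-coded consonants separated by 'h' or 'w' are coded only once.
// Assumes the input is a non-empty ASCII name.
func soundex(name string) string {
	codes := map[rune]byte{
		'b': '1', 'f': '1', 'p': '1', 'v': '1',
		'c': '2', 'g': '2', 'j': '2', 'k': '2', 'q': '2', 's': '2', 'x': '2', 'z': '2',
		'd': '3', 't': '3',
		'l': '4',
		'm': '5', 'n': '5',
		'r': '6',
	}
	runes := []rune(strings.ToLower(name))
	out := []byte{byte(runes[0]) - 'a' + 'A'}
	prev := codes[runes[0]] // zero if the first letter has no code
	for _, ch := range runes[1:] {
		if ch == 'h' || ch == 'w' {
			continue // h and w are transparent: they do not reset prev
		}
		code, ok := codes[ch]
		if ok && code != prev {
			out = append(out, code)
		}
		if ok {
			prev = code
		} else {
			prev = 0 // vowels reset the previous code
		}
	}
	for len(out) < 4 {
		out = append(out, '0')
	}
	return string(out[:4])
}

func main() {
	fmt.Println(soundex("Ashcroft")) // A261: the s and c merge across the h
	fmt.Println(soundex("Esther"))   // E236, same as "Estor"
}
```

                    Note that “Esther” and “Estor” get the same code, which is exactly what makes the Bible-book redirect in the footnote work.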

                    1. 1

                      bible.conman.org

                      Example URL, or satire?

                      1. 1

                        Neither. When I originally registered a domain back in the late 90s, I wanted conner.com but that one was taken. So were conner.net and conner.org. My backup choices, spc.com, spc.net and spc.org, were also taken. I had a few friends who called me Conman (a play on my last name, Conner), so that’s what I went with.

                        In the 21 years I’ve had the domain, you are the first one to question it. No one else has (Weird! I know! [1]). The link is real, try it.

                        [1] It’s also weird how few people connect my name, Sean Conner, to the actor Sean Connery (one letter from stardom!) At least my name isn’t Michael Bolton.

                        1. 1

                          That’s fine. I just reacted to the domain, and in these contentious times it’s not too hard to imagine a person setting up a Bible site with pointers to the “bad stuff” (depending on your view of what’s bad).

                          FWIW I’ve used https://www.biblegateway.com/ a few times (mostly because I’d be interested in how the text is presented in different Swedish editions) but that’s an altogether bigger operation.

                  3. 6

                    Agreed. I would hypothesize that the Ruby community is largely being cannibalized by: Go, node.js, Elixir/Phoenix, Rust, Python (for special purpose work like tensorflow) – probably in that order (or maybe swap Rust and Elixir? unsure).

                    1. 10

                      It’s not only due to new tech stacks emerging. Cultural and commercial factors play a massive role.

                      For instance: Ruby got very popular in the consulting space (it’s a good way to look good by quickly delivering some value, and it tends to generate a need for lots more consulting hours a year or so down the track).

                      Now that the ruby community has more-or-less settled on a few standard approaches, it’s no longer as profitable for consulting companies.

                      1. 4

                        I don’t fully agree with that reading. Rails was always also very popular in the old-school agency space, as Rails is extremely quick to get set up. Its insistence on a standard stack might lead to problems in the long run, but it still makes Rails the best framework for quickly getting out of the door in a clean fashion.

                        It still remains very popular there.

                        Also, Rails is often used for internal management applications. I have tons of clients that “don’t do Ruby” until, slowly, you figure out there are tons of small applications running on their servers, essentially providing some buttons and graphs.

                        The number of companies that “don’t do Ruby” officially, but actually do internally is huge, especially in enterprise.

                        1. 1

                          That’s a great perspective, thanks for bringing it up!

                        2. 5

                          Speaking from the perspective of someone who is both in the Rust project and on the board of one larger Ruby non-profit, I do not agree with the point that Rust cannibalises Ruby. Indeed, we’re still growing, even if the curve flattens.

                          1. 1

                            I only have a limited set of data points for folks I know of that have moved (or are moving) from ruby to rust for a couple of projects (blockchain space). Sounds like you have more empirical evidence here for sure.

                            1. 3

                              Rust is pretty popular for implementing blockchains, and Ruby isn’t, because you can’t write a competitive PoW function on top of the mainline Ruby implementation. Most Ruby projects don’t need that kind of performance, so your story probably isn’t very typical.

                              1. 1

                                Experienced developers usually extend their toolchain at some point, often accompanied by a shift of interest. There’s an effect where you have more insight into experienced people picking up new stuff, but tend to ignore newcomers coming up.

                                I am of a certain generation in the Ruby community, which leads to the good effect that a) I meet more and more people that don’t know me, despite having a high profile, b) I tend to only see my circles and have a hard time keeping track of newcomers.

                              2. 1

                                I agree, and I think they complement each other more than compete right now. Ruby is great at building architecturally slick webapps, which Rust is lousy at. Rust is great for building high-performance low-level stuff, which Ruby is lousy at. It seems like a good pattern, supported by several gems/crates, to build a webapp in Ruby/Rails, and refactor any parts that need top performance out into a Gem written in Rust.

                              3. 2

                                I very much doubt Ruby devs are moving to a language as low-level as Rust.

                                Elixir I could very much believe.

                                1. 8

                                  I very much doubt Ruby devs are moving to a language as low-level as Rust.

                                  Roughly 1/3rd of the Rust programming language community come from dynamic languages, mostly Ruby and Python.

                                  1. 1

                                    How do they deal with lifetimes? Whenever I use Rust, I tap out at lifetimes because it just gets too confusing for me.

                                    1. 4

                                      The zen of Rust is using ownership in most places. Lifetime problems usually arise when you are trying convoluted structures that are better handled through cloning and copying anyway. Use clone() liberally until you are very sure of what you want to do, then refactor towards borrows and lifetimes.

                                      I wrote a glimpse into this here last year: https://asquera.de/blog/2018-01-29/rust-lifetimes-for-the-uninitialised/

                                      Also, Edition 2018 made lifetimes a lot easier.

                                      1. 2

                                        Thanks for the link! :)

                                        1. 1

                                          You’re welcome!

                                    2. 1

                                      i’m pretty excited about the mruby/rust integration, especially if i can eventually ship a single executable with embedded ruby code.

                                      1. 1

                                        Is that being talked about anywhere? I’d love to follow that conversation as well

                                        1. 1

                                          i know about mrusty but it seems to not be active; i’m just hoping that people are still working on this (i might even join in if i get some free time)

                            2. 8

                              I have never used Ruby in anger, but gosh that Immutable Strings bug getting closed out as “not going to do it, don’t care you all want it, just use a magic comment” would make me think that the Ruby you’ve got is the Ruby you’ll ever get.

                              I don’t think that languages have to keep being developed (how many Lisp dialects are there that don’t change?), but if you think Ruby has deficiencies now, I wouldn’t expect them to change and that would make me worried too.

                              1. 7

                                I am maintaining a ruby codebase that’s >10 years old.

                                I don’t want ruby to make backwards-incompatible changes! The language is established now; it’s far too late for that.

                                It sucks that you need a linter to check that your files start with a magic comment in order to get sensible behavior, but not nearly as much as not being able to upgrade and benefit from runtime improvements and security patches just because they’ve changed the semantics of strings for the first time in 25 years.

                                1. 6

                                  This is an awful sentiment. How would you like being told that for a project you maintain, you can no longer make any big changes, ever? Because some user from 20 years ago doesn’t want to update their scripts, but wants bleeding edge Ruby.

                                  The world doesn’t always work that way, and hopefully Ruby doesn’t listen to people like that.

                                  1. 8

                                    I actually think it’s a pretty reasonable statement. One of my favorite things about Java is that it’s almost 100% backwards compatible. We just dusted off a 20 year old (!) game and it pretty much worked on the latest JDK. That’s awesome.

                                    1. 1

                                      So are C and C++ and other natively compiled languages. The advantages of a standardized lower layer!

                                    2. 5

                                      If you want to maintain a project where you can break things to make other things better, find one where the things you break don’t affect people. There’s no shortage of them, and it’s even easy to start your own!

                                      If you want to be the trusted steward of a large community, you have to consider how your choices affect the people who have placed their trust in you. There’s nothing wrong with not wanting that! It’s a lot of work. I don’t want it either. Thankfully, some people do, and that’s why communities have some people at the center and others at the periphery. The ones at the center are the ones doing the hard work of making it possible.

                                      1. 2

                                        Hopefully they do. It’s great to have new language features and to advance the state of the art, but it’s also great to be able to run my code from a few years ago without having to rewrite it.

                                        There are ways to have both, of course, which involve making compromises. For example, in the area of scientific computing I’m currently working in, there are a lot of Python 2 hold-outs who don’t want to migrate to Python 3, even though the changes are few* and Python 2 support is due to end. But many Python programmers are happy with Python 3 and have ditched 2 altogether already.

                                        *few, but important in context: changing how division works is a big deal for numerical simulations.

                                        1. 1

                                          maintainers who don’t want to be told that should not maintain languages

                                          1. 1

                                            This kind of thinking is how you get things like Python 3 being out for over a decade while some people still do everything in 2. If you intend for your language to be widely used, you have to come to terms with the fact that even minor changes that are highly beneficial will be massively painful, and might even destroy the language entirely, if they break old code.

                                          2. 3

                                            Python 3 actually introduced breaking changes, which in hindsight were all really good. I had to convert dozens of projects over a couple of years, and it was not that bad once I understood how things worked in Python 3. The biggest change was that strings are now Unicode instead of bytes, and it was very confusing at first.

                                            1. 1

                                              IMO python 3 is a great example of why I’m glad I don’t maintain any python codebases, despite loving the language.

                                              In a maintainer-friendly world, developers would still have to write a bunch of from __future__ import X at the top of every file today, which sucks differently but IMO not nearly as much. If you were somewhat forward-looking about it, files that don’t have those lines could emit deprecation warnings when loaded, noting that those defaults will be enabled in a few more years’ time.

                                        2. 1

                                          I’m sure a lot of decisions in Ruby’s past were questionable; I just didn’t know about them before I started learning Ruby. Now that I keep an eye on programming languages in general, I feel like it’s made me a bit of a snob. I tend to agree with the author of the blog post that the way the language is changing (both the changes themselves and the process by which they happen) leaves a bad taste in my mouth, but I’m not sure these things would have bothered me if I were coming to it as a new programmer, like I did with Ruby 1.9.

                                          I made a comment somewhere lamenting that Ruby was adding Enumerable#filter, because it’s ambiguous whether it is equivalent to #select or #reject. The response I got was that it was a good change because that’s the way every other language does it. Ruby’s just kind of weird sometimes, and I think I have accepted a lot of the legacy weirdness. So in that respect, what’s one more feature I won’t use?

                                          In the end, I don’t have much stake in the game - if Ruby’s new path really starts to bother me, there are plenty of other languages to pick up. But until then, it will be the first language I turn to for quickly translating thought into code, weird language design cruft aside.

                                          1. 5

                                            Ruby 1.9, in hindsight, was extremely well managed. It was an opt-in to breakage for getting fundamental problems out. They handled that switch in a very good way, making Ruby 1.9 the clearly better version while releasing 1.8.7, which closed the gap in between both versions, making it feasible to write codebases that run on both with relative ease. Sure, there were issues and not every aspect was perfect, but comparing e.g. the Python 2.7/3.0 story, I’m sad that the Python community hasn’t been watching and learning from that.

                                            1. 1

                                              Agreed, and I find Python’s rise in popularity comes in spite of the poor developer experience - compatibility and dependency management - so I wish Ruby had made more headway in non-Rails contexts.

                                            2. 2

                                              I made a comment somewhere that lamented that Ruby was adding Enumerable#filter because it was ambiguous whether it was equivalent to #select or #reject.

                                              Agreed. select and reject are a naming choice I have decided to steal; I wish filter just stopped existing (or returned (selectedElements, rejectedElements)).

                                          1. 8

                                            So, an interesting side effect of the GPL is you’re effectively banning your software from being run in large enterprise environments with legal departments that are concerned that having any GPL code will be infectious and that Stallman will come and steal all their monies :)

                                            Our socialist free software utopia is ripe for exploitation by capitalists, and they’ll be rewarded for doing so. Capitalism is about enriching yourself - not enriching your users and certainly not enriching society.

                                            IMO this boils down to whether or not you think capitalism is inherently exploitative at its base or whether it can also be a force for good.

                                            I’m on the fence on this one. I would love to live in a post materialism utopia, and in that world, I would be utterly in favor of the GPL and the total freedoms it guarantees.

                                            But in this world, the world where my choices are profit or die (quite literally in my case), I’m less convinced that profiting from other people’s work, when it’s a gift ostensibly freely given, is inherently exploitative.

                                            I give people free software because I want them to reciprocate with the same. That’s really all the GPL does.

                                            This right here? This is the best articulation of all the hurt and anger I see around companies like the one I work for building commercial products based on OSS code bases. This actually makes sense to me, and is perfectly reasonable.

                                            Permissive licenses were designed to allow commercial use of the licensed work, so having expectations to the contrary seems like a recipe for disappointment to me. Rather than being outraged, software authors should choose licenses that actually express what they want and mean, and save their energy for creating more awesome software :)

                                            As others have said it’s a great article - super thoughtful and well written. Thanks for posting it!

                                            1. 8

                                              I’m on the fence on this one. I would love to live in a post materialism utopia, and in that world, I would be utterly in favor of the GPL and the total freedoms it guarantees.

                                              There’s great irony here; as the article points out, in such a world the GPL wouldn’t exist, because it would be pointless.

                                              1. 6

                                                More correctly, it would be unnecessary. Like in the scientific environment, where people don’t feel the need for reciprocity and anti-troll clauses when publishing a paper.

                                                1. 2

                                                  In the scientific environment, most of it gets put behind paywalls even though that isn’t strictly necessary. I think the authors also assign away their copyrights in many cases. There have been more papers on open sites recently, though. So we might look at the scientific environment like software when it was mostly proprietary, with a strong upswing of F/OSS.

                                                  1. 1

                                                    most of it gets put behind paywalls

                                                    Researchers are not being paid from that, and it’s beside the point anyway. There are no restrictions on the concepts in the paper, e.g. a theorem. Readers can teach the theorem to other people or use it without legal restrictions (such as being required to provide citation, or not to sue the author of the theorem).

                                                2. 5

                                                  There’s great irony here; as the article points out, in such a world the GPL wouldn’t exist, because it would be pointless.

                                                  My understanding is that GPL is exactly that: a copyright way of fighting copyright. From what I remember Stallman basically created it to restore the world to the state it was before people started copyrighting software: hardware came with the full source code and you could modify whatever you wanted. Kinda like the freedom @SirCmpwn is describing in the article.

                                                  1. 1

                                                    You’re absolutely right. In a sense, the GPL exists to protect software authors’ intent FROM capitalism and the legal mechanisms around it.

                                                      1. 1

                                                        He’d know. Look forward to reading that interview when I have more time.

                                                  2. 9

                                                    you’re effectively banning your software from being run in large enterprise environments

                                                    This is generally false. Only some large companies avoid the GPL, only some versions of it (mostly v3), and only in some use-cases. Also, they can change their decision without asking you.

                                                    IMO this boils down to whether or not you think capitalism is inherently exploitative at its base or whether it can also be a force for good.

                                                    How can you leap to this conclusion from reading a license? Plenty of companies release software under conditions that are way more restrictive than GPL (closed source, partnership agreements, contribution agreements…).

                                                    1. 3

                                                      This is generally false. Only some large companies are avoiding GPL. Only some versions of it (3). And only in some use-cases. Also, they can change their decision without asking you.

                                                      I will absolutely cop to my statement being too general, but you’re going too far the other way. I can speak to at least several environments where this is in fact the case.

                                                      1. 3

                                                        What, exactly, is the case? I’m aware of the internal policies of some FAANGS and other large companies.

                                                        1. 8

                                                          I can only speak for Google (having worked at the open source office), and our docs on GPL are here: https://opensource.google.com/docs/thirdparty/licenses/#restricted

                                                          Google’s monorepo and strong launch tooling means that we have very high confidence that GPL code doesn’t sneak into code that it shouldn’t, and we take great pains to ensure that all OSS is separated into a separate directory tree to make sure that people don’t accidentally patch the library and trigger a reciprocal license. We can do this because we have the money to have an OSS office, because we have the money to build the tooling, and because we have the institutional support to be a good OSS neighbor.

                                                          If I was CTO of a smaller company or one where all the code was federated into small repositories that I can’t track, I personally would ban GPL-style licenses. License forgiveness is certainly helpful, but once you’ve violated the license you are in a sticky situation where you have to either excise the library, or find employees who never looked at the code to clean-room implement it. Depending on how big that library is you might be very screwed. I would just see GPL as too dangerous.

                                                          1. 6

                                                            I would just see GPL as too dangerous.

                                                            That’s basically the point. If you plan to restrict users, stay away from the code that was written to provide them with freedom :)

                                                          2. 3

                                                            The (small) company I work for (based in Sweden, sells software for telecoms) bans the use of GPL libraries.

                                                            1. 5

                                                              The company I work for (based in Finland, sells software for telecoms) also bans the use of GPL libraries. ;)

                                                              1. 0

                                                                There’s a pattern emerging. It’s… that we need to sell GPL license exemptions to telecoms. Oh yeah!

                                                      2. 3

                                                        Except GPL would actually allow you to make money as a creator by selling a dual license. If you released it as MIT, then well, too bad.

                                                        1. 0

                                                          Care to explain this a bit? MIT is a permissive license, so you can sell your work, as can others. What’s “too bad” about this?

                                                          1. 2

                                                            Let’s say you have a library you wrote with MIT license that a company wants to use. You can’t sell them a license but you can sell them support. Most companies will simply not pay you.

                                                            However, with GPL, companies are afraid to use your library for free because GPL would force them to open source. You can say “look, I can sell you a license and you won’t be forced to open source”. This is a dual license scheme where companies pay you for the right not to have to open their own code.

                                                        2. 3

                                                          you’re effectively banning your software from being run in large enterprise environments

                                                          Are you? AFAIK, you are not allowed to modify it privately or use it as an integral part of another solution. If you just use the tool as an end user on your own, I am 99% sure you can’t be approached by the lawyers. If I am wrong, I would also like to know :)

                                                          1. 12

                                                            The point is that these companies don’t actually care what the true legal implications are and just run away out of GPL phobia.

                                                            The GPL is used commercially in many places, so if you think you can’t do business with the GPL you’re either mistaken or your business is shady.

                                                            1. 2

                                                              The point is that these companies don’t actually care what the true legal implications are and just run away out of GPL phobia.

                                                              I see it more as paranoia, but in some cases, paranoia grounded in cold hard fact. When you are the biggest target, your legal department needs to figure out how to protect said target from attack. In order to do that, it MUST set incredibly paranoid boundaries to protect the company’s liability.

                                                              1. 3

                                                                Sort of. Apple forbids all GPLv3 but Google doesn’t. Both of these have comparable legal departments, are equally attractive targets, and ship about equally important software. They shouldn’t come to different conclusions on GPLv3.

                                                              2. 1

                                                                You and @feoh are talking about a different thing, which I don’t care for. I’m talking about the legal aspects only, not about human psychology. Please stay on topic.

                                                                1. 3

                                                                  No, this is the topic. You’re “effectively banning” your software because those companies have internal rules to ban any GPL software. They don’t care what the actual rules are, because effectively they have decided to interpret them their own way.

                                                                  Legal stuff is human psychology anyway, you have to convince a judge and a jury, who are fallible, biased, manipulatable humans.

                                                              3. 1

                                                                You are. Speaking from my personal experience at one such large corporate enterprise, use of GPL licensed software is straight up banned.

                                                                1. 6

                                                                  I don’t know where you work, so I can’t comment on specifics, but I have found at other places I worked many coworkers thought “using any GPL’d software was banned” but all ran Bash on their MacOS laptops… now maybe you’re all Windows all the time and really have a ban where you work, but in my experience such bans are not quite so total as is sometimes perceived.

                                                                  1. 3

                                                                    So you’re banning stupid people from using your software (stupid, because apparently they can’t read a license and estimate its effect). I’d call that a net win because it reduces customer support requests: stupid as they are, they’re probably of the “you must fix the issue I have, now, for free” kind, too.

                                                                    1. 2

                                                                      FSVO “stupid people” which includes “smart people who’ve chosen to work for people who make stupid decisions”, sure. But it’s not an invalid point.

                                                                      1. 4

                                                                        Respectfully, you’re both being a bit elitist here. There are limits on what we can conceive of based around our previous personal experiences.

                                                                        I have been through the process of thinking something was stupid, only to learn that no, really, it’s NOT stupid and there were honest-to-god good reasons behind this or that restriction which I just wasn’t aware of at the time.

                                                                        Are they decisions you’d make? Possibly not. Are they decisions I’d make under the circumstances? Maybe and maybe not. I know I don’t have all the answers, and I’m arguably in a better position to have a wider view than some.

                                                                    2. 2

                                                                      One person already pointed out FAANG are known to do this. What they do doesn’t generalize to most enterprises, though. Heck, their success has a lot to do with being opposite of most enterprises. You should probably just say the specific companies, esp if it’s SaaS like Amazon.

                                                                      1. 5

                                                                        FAANG are known to do some of that.

                                                                        I work at Google and editing GPL code (not just using, actually changing and distributing an external project, coreboot) is what they hired me for, so the GPL is certainly not “banned”. There are bans though and the list is public: https://opensource.google.com/docs/thirdparty/licenses/#banned

                                                                        1. 0

                                                                          No, but as someone else explained in more detail, it’s walled off from the monorepo to protect the main code base from the viral nature of the GPL.

                                                                          Google has wisely chosen to put enough resources into play that it can play with fire safely.

                                                                  2. 3

                                                                    Rather than being outraged, software authors should choose licenses that will do what they want and mean, and save their energy for creating more awesome software :)

                                                                    You’re missing the important case where one’s ethics does not necessarily align with what one thinks should be enforced by law. For example, you might think that cheating on your SO is wrong, but it is generally not illegal to do.

                                                                    Just because I share similar goals as the FSF, does not mean I agree with their desired means to accomplish those goals.

                                                                    Effectively, you’re espousing a form of “the ends justify the means.”

                                                                    (You don’t need to ask me why I disagree with using copyleft as a means. Just go look up arguments against the use of intellectual property.)

                                                                    1. 2

                                                                      “Support Intellectual Prosperity, Not Property!”

                                                                      1. 1

                                                                        You’re missing the important case where one’s ethics does not necessarily align with what one thinks should be enforced by law. For example, you might think that cheating on your SO is wrong, but it is generally not illegal to do.

                                                                        So then get involved with activism efforts to change said law to more fully align with your desires?

                                                                        My point is simple - we live in a society awash with outrage, and honestly I think it’s becoming a canned response to way too much, so I’m suggesting the channeling of that energy into something more useful. That’s all.

                                                                        1. 2

                                                                          All I’m saying is that your outlook on how to choose licenses is extremely short sighted. And you aren’t the only one falling into this trap. Lots of people, for example, think it’s entirely unreasonable to be upset with someone plagiarizing your work if you put it into the public domain. And you’re effectively making the same argument, and it’s ridiculous.

                                                                          1. 1

                                                                            I don’t agree. You’re making analogies that don’t work, at least in my world view. I’m sure you have information or background that I don’t, but can you please help me understand how writing some code and then putting it under a license which is explicitly designed to allow it to be copied, sold or otherwise used in a particular way is equivalent to plagiarizing someone’s written work which was explicitly designed NOT to be copied etc?

                                                                            1. 3

                                                                              Sorry, but I don’t see what you’re missing. My last comment had zero analogies. The first analogy in my initial comment (cheating on an SO) was merely used to demonstrate that laws and ethics are not the same thing. That is, just because I don’t want to use the full weight of the law to force you to do something (e.g., the GPL) doesn’t mean I don’t agree with the motivation for the GPL in the first place (reducing the amount of proprietary code).

                                                                              In other words, saying you should choose a license based on its effect neglects the fact that one may disagree with the means by which the license achieves said effect.

                                                                              For example, I might choose to publish my source code in the public domain. In the eyes of the law, it would be legal for anyone to do anything with that work without restriction, including plagiarizing it. If you argue that one should choose a license only by its effect, then you’d think this was completely reasonable since I chose to put it into the public domain and knew this could happen. But what I’m saying is that this is a fairly shallow way to interpret license usage, and that it would be completely reasonable for the publisher to be upset at someone plagiarizing their public domain work. Because laws and ethics are not equivalent.

                                                                              1. 1

                                                                                For example, I might choose to publish my source code in the public domain. In the eyes of the law, it would be legal for anyone to do anything with that work without restriction, including plagiarizing it. If you argue that one should choose a license only by its effect, then you’d think this was completely reasonable since I chose to put it into the public domain and knew this could happen. But what I’m saying is that this is a fairly shallow way to interpret license usage, and that it would be completely reasonable for the publisher to be upset at someone plagiarizing their public domain work. Because laws and ethics are not equivalent.

                                                                                I see where you’re coming from now, and you’re right. I am a citizen of the US. In the US, putting something into the public domain says that you can do whatever the hell you want with that code. If you copy the code and claim it’s yours, then I would think that is morally bankrupt of you to do, but you wouldn’t be violating the law.

                                                                                The law is what it is, and we have to live by it, or break it and face the consequences. When I have discussions with people, my assumption is that generally speaking “we will act within the boundaries of the law” goes without saying.

                                                                                I guess if you think people’s outrage is just and warranted, then that’s fine. I don’t know that I agree with you, but I also suspect that we are coming at this from two very different perspectives and I’m unsure whether it makes sense to try to have a meeting of the minds in this forum.

                                                                                1. 1

                                                                                  I’m not advocating breaking the law. I’m not sure how I could be clearer, unfortunately, and I don’t know why you think I’ve abandoned the assumption that one should generally act within the law. This is about choosing licenses and the reasons for doing so. i.e., It can be about the means as well as the ends.

                                                                      2. 2

                                                                        IMO this boils down to whether or not you think capitalism is inherently exploitative at its base or whether it can also be a force for good.

                                                                        As you mention later a lot of us don’t have a choice whether or not to participate in capitalism, but it is inherently exploitative. For example, you wouldn’t be forced to choose between profit or die unless you were being exploited in the first place.

                                                                        But you raise a really important point, which is that being able to avoid capitalism is a luxury and that’s something to keep in mind whenever we criticize people’s actions.

                                                                        1. 2

                                                                          As you mention later a lot of us don’t have a choice whether or not to participate in capitalism, but it is inherently exploitative. For example, you wouldn’t be forced to choose between profit or die unless you were being exploited in the first place.

                                                                          False dichotomy, every developed country has some form of social welfare for its citizens to fall back on should they absolutely need it. Even in the wacky old free-market capitalist utopia United States.

                                                                          1. 2

                                                                            False dichotomy, every developed country has some form of social welfare for its citizens to fall back on should they absolutely need it. Even in the wacky old free-market capitalist utopia United States.

                                                                            … Have you ever lived on welfare or another state-supported benefit plan? I have, albeit admittedly while I was still under my mother’s roof. I had MassHealth and she lived on survivor’s benefits and SSI to raise me.

                                                                            We got by and I never starved but please don’t put living in such a state forward as a viable alternative.

                                                                            For instance, with the expensive medical care I require, were I living on welfare or something like it, I might not die, but I’d likely wish for death given the hardship such a situation would impose.

                                                                            It’s very easy to make arguments based on theory, but living the reality is something quite different.

                                                                            1. 1

                                                                              I was responding to the grandparent’s statement that I quoted and the fact that he/she painted a false dichotomy under capitalism of “profit or die” and used social welfare systems as a counterpoint. I didn’t say every country’s social welfare systems are perfect, just that by and large, they exist and they keep a lot of people from dying.

                                                                            2. 0

                                                                              I’m not sure what you’re getting at, social welfare is not a capitalist construct.

                                                                              1. 1

                                                                                But lack of social welfare is not a capitalist construct either.

                                                                          2. 2

                                                                            “Permissive licenses were designed to allow for commercial use of the licensed work, so having expectations to the contrary seem like a recipe for disappointment to me. Rather than being outraged, software authors should choose licenses that will do what they want and mean, and save their energy for creating more awesome software :)”

                                                                            That’s what I keep saying.

                                                                          1. 11

                                                                            The Mac might be back, but Mac OS itself is on life support. Literally nothing useful to pros has happened to that OS for years, and their iCloud offering continues to be poor. Remember when Steve Jobs would walk out and show feature this and iPhoto that, with banners which read “Redmond, start your photocopiers”? It seems like Apple would benefit from spending time taking cues from Windows for Mac OS nowadays. For my daily driver at home, I put Windows on my 2012 MacBook Pro and have generally been happier. Windows Subsystem for Linux works as advertised, I can stream my Xbox, I have access to PC games, and web browsers are quick.

                                                                            Disclaimer: I work for Google, but I am a big fan of Chrome OS for Pros. Local Linux apps, very few crashes, automated security updates, ability to just toss it out and get a brand new one set up exactly as it was just by logging in (sans the Linux apps AFAIK). If you need more power/uptime, get yourself a cheap VM from your preferred cloud and SSH to it and/or mount the remote drive. Literally the only limitation I bump into which grinds my gears is the inability to install fonts without switching to developer mode.

                                                                            1. 12

                                                                              What makes MacOS useful to me as a ‘pro’ is it not changing the UI of everything every minor release, not making me think about drivers or updates (that might break stuff), not being prone to malware, respecting my privacy (and that of my company).

                                                                              Linux with KDE on a ThinkPad is similar in these respects.

                                                                              1. 7

                                                                                What makes it useful for me is that all the software I want to use exists on the Mac and not on Linux; and I can’t stand using Windows. I could probably survive on Windows, I guess, but I’m glad I don’t have to. Even among all the options, and even with the neglect and sometimes outright malice that Apple treats the Macintosh with, it remains by far the most amenable system for me to use.

                                                                                1. 2

                                                                                  I mainly talk to people in real life or on Slack, but when I’m on a computer I’m doing email, calendar, wiki, ticket systems, interacting with technical and organisational systems… all this just needs a browser. I hack on stuff mainly in a terminal window, but also pull up various JetBrains tools and VS Code when they are helpful.

                                                                                  OmniFocus is the only Mac only tool I am currently relying on but that’s okay - I can migrate onto something else if I have to switch away from MacOS one day.

                                                                                  1. 2

                                                                                    I use no web applications, save for Slack; I live in platform native software. I use Logic and Lightroom and Garage Band. All of these are replaceable, or available on Windows in some form; but I don’t have to switch, so I’m not going to. Besides, I still like the Mac.

                                                                                  2. 1

                                                                                    I suspect you have a better answer than I did, but I used to think this, and then I asked myself - what software am I actually using that only exists on the Mac?

                                                                                    For me, the answer was - aside from VERY occasionally twiddling with Garageband or Quartz composer, exactly none. However your mileage almost certainly varies.

                                                                                    1. 2

                                                                                      Logic, Garage Band. Reeder, my RSS reader. Things, my to-do/reminder app. Fantastical. Lots of small, quality-of-life apps as well.

                                                                                      1. 1

                                                                                        Logic Pro is indeed a fine example of a Mac only app, as is Things. Both are excellent examples of finely crafted Mac UI as well.

                                                                                        I don’t really use any of that. To be honest the only thing I’d vaguely miss moving to Linux wholesale is Alfred :)

                                                                                        I wish you luck and hope that Apple continues to support the workflows you use for many years to come.

                                                                                        1. 2

                                                                                          I wish you luck and hope that Apple continues to support the workflows you use for many years to come.

                                                                                          Thank you! Me too.

                                                                                2. 5

                                                                                  I totally agree. Microsoft, love them or hate them, are innovating like crazy in the developer space - and the Linux space as well!

                                                                                  Whether you use ChromeOS or an honest to god native Linux I think a lot of developers are starting to reconsider whether or not MacOS X is still giving them any meaningful advantage.

                                                                                  1. 1

                                                                                    The one critical advantage of it is that it can run OS X without issue. If you want to carry a single box and do Windows & Linux & Mac development, you’ve got one real pick.

                                                                                    There are various ways to kludge it – from hackintosh to MacinCloud, but none of them feels that nice.

                                                                                    1. 1

                                                                                      Oh absolutely, if you’re doing OS X or iOS platform development there’s no substitute. The hackintosh route seems more trouble than it’s worth, and for many methods you still need to maintain at least one Mac so you can get bona fide access to Apple’s ecosystem.

                                                                                  2. 2

                                                                                    Do you have any materials on great workflows for development on ChromeOS? I’ve been considering going this route – using ChromeOS as a terminal, SSH into cloud servers that I can suspend when I’m not using them, etc. – but have been too busy being productive to devote time to such a massive workflow change.

                                                                                    1. 2

                                                                                      No, not really. It’s pretty trivial, it’s exactly how you said.

                                                                                      Honestly, I’d get a cheapo Chromebook from Costco, try it for a week or so, then reformat and take it back. Then you get to see what you think and if you like it, get a good one.

                                                                                      1. 1

                                                                                        My Chromebook is a little more than 3 years old! I’ve just never taken it down this path because I’ve used it mostly as a souped-up tablet reading experience.

                                                                                      2. 1

                                                                                        I wasn’t terribly impressed by any Chromebook offerings until I picked up a Lenovo Yoga C630 on a whim. It has a nice big screen, a remarkably good touchpad, and a respectable keyboard. My original intention was to see if I could get Linux running natively without too many problems. I figured worst case I could flip the dev-mode switch and tool around with that, but then I discovered Crostini and was immediately sold. You can basically spin up distro-specific Linux containers within new Chrome tabs - way more convenient than dev-mode or ssh’ing into a cloud server.

Rather than going with a cheap Chromebook, do a little research first and see if the one you’re looking at supports Crostini, and give that a shot. I’m a Windows dev by day, but I otherwise live on an MBP and rely on it for side projects. I can easily see swapping out my MBP for something like the C630 as an alternative.

                                                                                        1. 1

                                                                                          It looks like my trusty but aging ASUS Chromebook Flip C101PA has Crostini support so maybe I’ll give it a shot sometime.

                                                                                    1. 11

                                                                                      I work on a Kubernetes-adjacent project. We consume K8S libraries, but we aren’t building something that is K8S-native (for those that care, we are using the K8S API server).

                                                                                      The monorepo is easily the worst part of K8S for me. The Go code that does a great deal of work erasing types (hi unstructured.Unstructured) is second. The Java-looking Go code is third.

                                                                                      The problem with the monorepo is that it makes dependency management for them really easy, but for consumers of the library incredibly hard. The sheer number of dependencies that the monorepo has means you spend a great deal of time in dependency hell (I did write “if you’re not careful” first, but that isn’t true, there’s nothing you can really do). The monorepo encourages code reuse, so the repo is strongly bound to itself. It’s very hard to tease out the threads of functionality with the packages you actually need without bringing in many others that then themselves bring in many others and so on and so on.

                                                                                      I am in the process of converting a code base to remove dependencies on the monorepo and depend on the new smaller ones. This makes life a great deal easier, but not all the functionality has been pulled out into the smaller ones, so that’s been a bit frustrating. It’ll get there. The only problem now is that each library is versioned together (e.g. tagged “kubernetes-1.12”), and you should really ensure that you are using the same version of each library, so you have to tell dep explicitly for each K8S library you use to use the right one. That’s a pain. I’m not sure how to get around it, but it sure is ugly.

                                                                                      I have very mixed feelings on K8S. I think it is a strong technical achievement which has enabled cloud providers to build some very useful tools for the enterprises that need it. I don’t think it was ever really intended to become this cornerstone of the cloud such that things like the K8S YAML is now some sort of lingua franca, and I think its popularity is driving more people into their Go libraries rather than consuming K8S as a tool. You don’t ever get the feeling that the K8S monorepo is trying to provide an API, it’s just trying to work. The smaller libraries are much easier to work with as an API.

I am a big fan of Go and it’s all I write professionally, but I don’t think K8S should have been written in it, through no real fault of their own. I think they started with Go too early in the Go lifecycle, before maintainable and clean Go was really understood, and they put a lot of what we’d now consider non-idiomatic code in there (huge overuse of interfaces being the worst, which makes navigating the code very hard). I think they would have had a better outcome if they’d stuck with Java, but hindsight is 20/20. Hopefully the refactors help.

                                                                                      1. 3

I am a big fan of Go and it’s all I write professionally, but I don’t think K8S should have been written in it, through no real fault of their own. I think they started with Go too early in the Go lifecycle, before maintainable and clean Go was really understood, and they put a lot of what we’d now consider non-idiomatic code in there.

                                                                                        Idiomatic Go was reasonably well understood when Kubernetes started. Several members of the core Go team even offered, if I recall correctly, to mentor the initial Kubernetes contributors, review PRs, etc. The problem was that the Kubernetes folks simply weren’t interested, and on a few occasions made that known to the Go team quite aggressively. They were primarily Java programmers beforehand, and they simply wrote Java-flavored Go, and all the Go resources in the world weren’t going to change that.

                                                                                        1. 3

                                                                                          The problem with the monorepo is that it makes dependency management for them really easy, but for consumers of the library incredibly hard.

                                                                                          It’s Google. They don’t give a single shit about anyone outside of Google who wants to use their open source code. I’ve been using their C++ WebRTC library a lot, and it’s extremely clear that they built it for Google Chrome, not as a library for other people to use.

They don’t care whether it’s standards-compliant C++ or if it breaks with compilers other than their version of Clang. I encountered a case where they deprecated a feature because it will be replaced by another feature in the future. I was literally recommended to automatically rewrite all of their headers’ includes, because they’re so unfriendly towards being used as a library by anyone other than Google:

                                                                                          I think webrtc is a bit unfriendly to the standard library install conventions. One option you might want to explore is to rewrite all #include “…” lines in the headers as you install them in /usr/include/webrtc/, so that all internal includes use relative names.

                                                                                          1. 1

                                                                                            This was my team’s experience trying to adopt gRPC. Huge, glaring issues that we couldn’t get support on, even when we had team members visiting Google campuses and talking about them! Needed features that we were repeatedly informed existed in the Google-internal gRPC implementation that no-one (ever) got around to putting in the open-source impl. After a lonnnng time we eventually realised Twirp would make everyone’s life easier. And it has.

                                                                                            If we couldn’t make it happen with that level of access, most people really just have to take it or leave it.

                                                                                          2. 1

                                                                                            I watched this talk last night and I found the Go reflection / Java stuff fascinating! I had no idea.

                                                                                            (I’ve never used Kubernetes, but for many years I used Google’s internal cluster system Borg, which is written in C++, and which Kubernetes is loosely based on. The C++ doesn’t have any of this kind of reflection (other than protobufs). I also tried to write a Borg-like cluster manager starting in 2010-2011, concurrent with Kubernetes!)

                                                                                            While I was watching it, I couldn’t shake the feeling that this is an advertisement for dynamic languages, or Greenspun’s tenth rule:

                                                                                            Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

                                                                                            It sounds like they are doing metaprogramming on objects so that they can be serialized to YAML (to store in etcd), and versioned. So they invented a little language within Go.

Now I get that Kubernetes is hundreds of thousands of lines of code, and a static type system is hugely beneficial there (even if it would be 1/5th the line count in a dynamic language).

                                                                                            But now I wonder what would happen if you went the opposite way and used a dynamic language with some optional type checking. There have been a bunch of developments in that area since Kubernetes was released in 2014.

Anyway, these are just idle thoughts. I’m sure the real details are a lot more complicated. But it did not occur to me that Go was a pretty bad fit for Kubernetes. To be fair, I don’t think C++ or Python are great choices either, having had some experience with cluster managers in both.

                                                                                            My guess is that the killer feature of Go is concurrency, and that basically trumps all the disadvantages. As far as I understand, Java does have better concurrency than both C++ and Python, so it indeed might have been a good choice.

                                                                                            https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

                                                                                          1. 4

The bit about how it’s hard to tell what will close a ReadCloser/WriteCloser underneath you is super valid. I’m not sure how you’d manage that without some kind of “reference count” or similar (i.e. only actually close once the count hits zero).

Another awkward use case is how to write HTTP middleware that hash-sums a POST body (e.g. for Slack webhook validation) and then also lets inner actions read from the request body.

                                                                                            1. 6

                                                                                              It’s simple. If you pass a *Closer to something, you should assume that something is going to close it. Otherwise you’d just pass it a Reader or a Writer.

                                                                                              1. 2

Not everyone gets the memo that this is the way it’s supposed to work. Very often, folks will create large interfaces and pass them around, with no intent of using everything defined.

                                                                                                1. 2

Sure, but at a certain point, what can you do about people ignoring the docs and even the sort of high-level guidance of the language? I mean, deeply reading the docs is hard; reading the Go proverbs (https://go-proverbs.github.io/) isn’t – and you only have to get to the fourth one to reach “The bigger the interface, the weaker the abstraction.”

                                                                                                  1. 5

                                                                                                    “The bigger the interface, the weaker the abstraction.” says very little to someone not engaged in deeply understanding it.

                                                                                                    Obviously, we can’t throw our hands up and say, “what can you do? People will be people!!!” What we can do is ensure that code we write and review takes these principles to heart, and shows the value of it.

                                                                                                    1. 2

                                                                                                      Absolutely, and after doing all that – people are still going to write terrible code that directly goes against all the norms and recommendations. I am all for doing our level best to try to guide people – but leading a horse to water and all that.

                                                                                                2. 1

I think this is true, but it basically means you should never pass a *Closer unless you really really have to. The caller should manage the IO lifecycle.

I would even go so far as to say one of the heavily opinionated Go linters should warn about this (maybe they do; I’ve never checked, because I don’t think highly opinionated linters are a good idea for anything but beginners).

                                                                                                  1. 1

This makes sense, but there are two difficulties.

1. It still requires careful analysis of docs. It’s very easy to pass a ReadCloser off to a function taking a Reader.

2. You can’t just pass something like a gzip.Reader to another layer. Even if that layer closes it, it doesn’t close the bottom.

                                                                                                    1. 1

                                                                                                      When I read stuff like this I change my mind about go being a language that can be learned in a weekend.

                                                                                                      1. 1

                                                                                                        You can certainly pick it up and use it effectively in a weekend, but surely you couldn’t learn the ins and outs of anything substantial in just a weekend.

                                                                                                    2. 4

                                                                                                      Between io.TeeReader and io.Pipe I think you can probably wire something up. There’s a decent amount of “plumbing” included, although it took me a few passes through the docs to find it all.

                                                                                                      1. 4

Yeah, it’s quite worth it to read through the whole std library docs; I seem to find a new thing each time I skim it again.

                                                                                                      2. 1

how to write HTTP middleware that hash-sums a POST body (e.g. for Slack webhook validation) and then also lets inner actions read from the request body.

                                                                                                        I’ve had to do something like that and I ended up writing a small TeeReadCloser struct that wraps TeeReader but also has a Close method that closes both the reader and the writer. You can probably get by with a version that takes a WriteCloser like mine and one that just takes a Writer and combine them as needed, though I wonder why they couldn’t just put these in the standard library.

                                                                                                      1. 20

                                                                                                        I do agree with the theme of this post: at scale software is complex. Whether you use a monorepo or polyrepos, you’ll need a lot of complicated tooling to make things manageable for your developers.

                                                                                                        But I want to draw attention to sfink’s excellent rebuttal (full disclosure, he is a colleague of mine).

Additionally, I’d like to address the VCS Scalability downside. The author’s monorepo experience seems to be with Git. Companies like Google, Facebook (who have two of the largest monorepos in the world) and, to a lesser extent, Mozilla all use Mercurial for a reason: it scales much better. While I’m not suggesting the path to get there was easy, the work is largely finished and contributed back upstream. So when the author points to Twitter’s perf issues or Microsoft’s need for a VFS, I think it is more a problem of using the wrong tool for the job than it is something inherently wrong with monorepos.

                                                                                                        1. 5

I was under the impression (possibly mistaken) that Google still used Perforce predominantly (or some Piper wrapper thing), with a few teams using Mercurial or Git for various externally visible codebases (Android, Chrome, etc).

                                                                                                          1. 10

                                                                                                            Perforce has been gone for quite a while. Internal devs predominantly use Piper, though an increasing group is using Mercurial to interact with Piper instead of the native Piper tooling. The Mercurial install is a few minor internal things (eg custom auth), evolve and core Mercurial. We’ve been very wary of using things outside of that set, and are working hard to keep our workflow in line with the OSS Mercurial workflow. An example of something we’ve worked to send upstream is hg fix which helps you use a source code formatter (gofmt or clang-format) as you go, and another is the narrow extension which lets you clone only part of a repo instead of the whole thing.

                                                                                                            Non-internal devs (Chrome, Android, Kubernetes, etc etc) that work outside of Piper are almost exclusively on Git, but in a variety of workflows. Chrome, AIUI is one giant git repo of doom (it’s huge), Android is some number of hundreds (over 700 last I knew?) of git repos, and most other tools are doing more orthodox polyrepo setups, some with Gerrit for review, some with GH Pull Requests, etc.

                                                                                                            1. 3

                                                                                                              Thanks for the clarification, sounds like Piper is (and will continue to be) the source of truth while the “rollout” Greg mentioned is in reference to client side tooling. To my original point, Google still seems to have ended up with the right tool for the job in Piper (given the timeline and alternatives when they needed it).

                                                                                                              1. 2

                                                                                                                But how does Mercurial interact with Piper? Is Mercurial a “layer” above Piper? Do you have a Mercurial extension that integrates with Piper?

                                                                                                                1. 3

                                                                                                                  We have a custom server that speaks hg’s wire protocol. Pushing to piper exports to the code review system (to an approximation), pulling brings down the new changes that are relevant to your client.

                                                                                                                  (Handwaving because I’m assuming you don’t want gory narrow-hg details.)

                                                                                                                  1. 2

                                                                                                                    It’s a layer, yeah. My understanding is that when you send out a change, it makes Piper clients for you. It’s just a UX thing on top of Piper, not a technical thing built into it.

                                                                                                                2. 2

                                                                                                                  I’m fuzzy on the details, but my understanding is that they’re in the middle of some sort of phased Mercurial rollout. So it’s possible only a sample population of their developers are using the Mercurial backend. What I do know is that they are still actively contributing to Mercurial and seem to be moving in that direction for the future.

                                                                                                                  1. 1

I wonder if they are using some custom Mercurial backend to their internal thing (basically a VFS layer as the author outlined)? It would be interesting to get some first- or second-hand information on what is actually being used, as people tend to specifically call out Google and Facebook as paragons of monorepos.

                                                                                                                    My feeling is that google/facebook are both huge organizations with lots of custom tooling and systems. /Most/ companies are not google/facebook nor have google/facebook problems.

                                                                                                                    1. 6

                                                                                                                      This is largely my source (in addition to offline conversations): https://groups.google.com/forum/#!topic/mozilla.dev.version-control/hh8-l0I2b-0

                                                                                                                      The relevant part is:

                                                                                                                      Speaking of Google, their Mercurial rollout on the massive Google monorepo continues. Apparently their users are very pleased with Mercurial - so much so that they initially thought their user sentiment numbers were wrong because they were so high! Google’s contribution approach with Mercurial is to upstream as many of their modifications and custom extensions as possible: they seem to want to run a vanilla Mercurial out-of-the-box as possible. Their feature contributions so far have been very well received upstream and they’ve been contributing a number of performance improvements as well. Their contributions should translate to a better Mercurial experience for all.

                                                                                                                      So at the very least it seems they endeavour to avoid as much custom tooling on top of Mercurial as possible. But like you said, they have Google problems so I imagine they will have at least some.

                                                                                                                      1. 6

                                                                                                                        Whoa. This could be the point where Mercurial comes back after falling behind git for years.

Monorepos sound sexy because Facebook and Google use them. If both use Mercurial and open-source their modifications, then Mercurial suddenly becomes very attractive.

In git, neither submodules nor LFS are well integrated, and both generate pain for lots of developers. If Mercurial promises to fix that, many will consider switching.

                                                                                                                        Sprinkling some Rust into the code base probably helps to seduce some developers as well.

                                                                                                                        1. 10

                                                                                                                          Narrow cloning (authored by Google) has been OSS from the very start, and now ships in the hg tarball. If you’ve got need of it, it’s still maturing (and formats can change etc) but it’s already in use by at least 3 companies. I’d be happy to talk to anyone that might want to deploy hg at their company, and can offer at least some help on narrow functionality if that’s needed.

                                                                                                                        2. 1

                                                                                                                          Thanks for digging!
                                                                                                                          Pretty interesting for sure.

                                                                                                                    2. 0

                                                                                                                      I’m getting verification from someone at Google, but the quick version as I understood it:

                                                                                                                      Google hasn’t actually used Perforce for a long time. What they had was a Perforce workalike that was largely their own thing. They are now using normal Mercurial.

                                                                                                                      1. 12

                                                                                                                        This isn’t true, Google uses Piper (their perforce clone) internally. Devs have the option of using mercurial or git for their personal coding environments, but commits get converted to piper before they land in the canonical monorepo.

                                                                                                                        1. 2

                                                                                                                          I’ll ping @durin42; I don’t think I’m misremembering the discussion, but I may have misunderstood either the current state or implementation details.

                                                                                                                    3. 3

                                                                                                                      What is it about git that makes it a poor choice for very large repos?

                                                                                                                      What does Mercurial and Perforce do differently?

                                                                                                                      1. 2

                                                                                                                        In addition to the article @arp242 linked, this post goes into a bit more technical detail. Tl;dr, it’s largely due to how data is stored in each. Ease of contribution is another reason (scaling Git shouldn’t be impossible, but for one reason or another no one has attempted it yet).

                                                                                                                        1. 1

                                                                                                                          Microsoft has a 300GB git repo. They built a virtual file system to make it work.

                                                                                                                          1. 1

                                                                                                                            True, but in the scalability section of the article the author argues that the need for a VFS is proof that monorepos don’t scale. So I think most of this thread is centered around proving that monorepos can scale without the need for a VFS.

                                                                                                                            I agree that a VFS is a perfectly valid solution if at the end of the day the developers using the system can’t tell the difference.

                                                                                                                        2. 2

                                                                                                                          Facebook wrote about Scaling Mercurial at Facebook back in 2014:

                                                                                                                          After much deliberation, we concluded that Git’s internals would be difficult to work with for an ambitious scaling project. [..] Importantly, it [mercurial] is written mostly in clean, modular Python (with some native code for hot paths), making it deeply extensible.

                                                                                                                          It’s a great example of how applications in a slower language can be made better performing than applications in a faster language, just because it’s so much easier to understand and optimize.

                                                                                                                      1. 18

                                                                                                                        What a curious article. Let’s start with the style, such as calling some of the (perceived) advantages of a monorepo a “lie”. Welp, guess I’m a liar 🤷‍ Good way to have a conversation, buddy. Based on this article I’d say that working at Lyft will be as much fun as working at Uber.

                                                                                                                        Anyway, we take a deep breath and continue, and it seems that everything is just handwaved away.

                                                                                                                        Our organisation has about 25 Go applications, supported by about 20 common dependency packages. For example, we have packages log, database, cache, etc. Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps. It’s doable, but a lot of work. I expect that we’ll have 50 Go applications before the year is out.

                                                                                                                        Monorepos exist exactly to solve problems like this. These problems are real, and can’t just be handwaved away. Yes, I can write (and have written) tools to deal with this to some extent, but it’s hard to get this right, and in the end I’ve still got 25 PRs to juggle. The author is correct that tooling for monorepos also needs to be written, but it seems to me that that tooling will be a lot simpler and easier to maintain (Go already does good caching of builds and tests out of the box, so we just have to deal with deploys). In particular, I find it very difficult to maintain any sense of “overview” of stuff, because everything is scattered over 25 PRs.

                                                                                                                        Note that the total size of our codebase isn’t even that large. It’s just distributed over dozens of repos.

                                                                                                                        It’s still a difficult problem, and there is no “one size fits all” solution. If our organisation still had just one product in Go (as we started out with three years ago) then the current polyrepo approach would continue to suffice. It still worked mostly okay when we expanded to two and three products. But now that we’ve got five products (and probably more on the way) it’s getting harder and harder to manage things. I can write increasingly advanced tooling, but that’s not really something I’m looking forward to.
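For a sense of what that tooling looks like, here is a rough dry-run sketch of the multi-repo update dance. Everything in it is hypothetical: the repo names, the org, the module path, and the branch name; the PR step assumes the GitHub CLI.

```shell
# Dry-run sketch of bumping one shared module across many repos.
# bump_plan prints the commands rather than running them; pipe to sh to apply.
bump_plan() {
    module="example.com/ourorg/log"   # hypothetical shared package
    for repo in "$@"; do
        echo "git clone git@github.com:ourorg/$repo.git"
        echo "cd $repo && git checkout -b bump-log"
        echo "cd $repo && go get $module@latest && go mod tidy"
        echo "cd $repo && git commit -am 'bump $module' && git push -u origin bump-log"
        echo "cd $repo && gh pr create --fill"   # one PR per repo, every time
    done
}

bump_plan billing auth search   # ...and 22 more apps in practice
```

Even fully scripted, this still leaves one PR per repo to review and merge, which is exactly the juggling act described above.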

                                                                                                                        I’m not sure how to solve it yet; for us, I think the best solution will be to consolidate our 20 dependency packages into a single one, and to consolidate all the services of each application into their own repo, so we’ll end up with 6 repos.

                                                                                                                        Either way, the problems are real, and people who look towards monorepos aren’t all stupid or liars.

                                                                                                                        1. 4

                                                                                                                          I would imagine that if all you use is Go, and not much else, then you are in the monorepo “sweet spot” (especially if your repo size isn’t enormous). From what I understand, Go was more or less designed around the Google-internal monorepo workflow, at least until Go 1.10/1.11 or so (some six years after Go 1.0).

                                                                                                                          It makes me wonder…

                                                                                                                          • Are there other languages that seem to make monorepo style repos easier?
                                                                                                                          • Are monorepos harder/worse if you have many apps written in multiple disparate languages?
                                                                                                                          1. 7

                                                                                                                            Main issue with monorepos (imo) is that lots of existing tools assume you are not using them (eg: github webhooks, CI providers, VCS (support for partial worktrees), etc). Not an issue at google scale where such tools are managed (or built) in-house.
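As one concrete example of the tooling gap, many hosted CI providers can at least approximate per-project builds in a monorepo with path filters. A hypothetical GitHub Actions trigger (the directory layout is invented, and only the trigger section is shown):

```yaml
# .github/workflows/billing.yml
# Run this service's pipeline only when its code, or a shared package, changes.
on:
  push:
    paths:
      - "services/billing/**"
      - "pkg/**"
```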

                                                                                                                            1. 3

                                                                                                                              This point isn’t made enough in the monorepo debate. The cost of a monorepo isn’t just the size of the checkout; it’s also all of the tooling you lose by using something non-standard. TFA mentioned some of it, but even things like git log become problematic.

                                                                                                                              1. 2

                                                                                                                                Is there a middleground that scopes the tooling better? What I mean is: keep your web app and related backend services in their own monorepo, assuming they aren’t built on drastically different platforms and you desire standardisation and alignment. Then keep your mobile apps in separate repos, unless you are using some cross-platform framework which permits a mobile monorepo. You get the benefits of the monorepo for what is possibly a growing set of services that need to be refactored together, while not cluttering git log et al with completely unrelated changes.

                                                                                                                                1. 2

                                                                                                                                  Sort of. What really matters is whether you end up with a set of tools that work effectively. For small organizations, that means polyrepos, since you don’t often have to deal with cross-cutting concerns and you don’t want to build / self-host tools.

                                                                                                                                  Once you grow to be a large organization, you start frequently making changes which require release coordination, and you have the budget to set up tools to meet your needs.

                                                                                                                            2. 4

                                                                                                                              Interesting, Go in my experience is one of the places I have seen the most extreme polyrepo/microservice setups. I helped a small shop of 2 devs with 50+ repos. One of the devs was a new hire…

                                                                                                                            3. 0

                                                                                                                              Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps.

                                                                                                                              What exactly is the concern here? Project ownership within an org? I fail to see how monorepo is different from having commit access to all the repos for everyone. PRs to upstream externally? Doesn’t make a difference either.

                                                                                                                              1. 3

                                                                                                                                The concern is that it’s time-consuming and clumsy to push updates. If I update e.g. the database package I will need to update that for 25 individual apps, and then create and merge 25 individual PRs.

                                                                                                                                1. 3

                                                                                                                                  The monorepo helps with this issue, but it can also be a bit insidious. The dependency is a real one, and any update to it needs to be tested. It’s easier to push the update to all 25 apps in a monorepo, but it can also tend to let developers make updates without making sure the changes are safe everywhere.

                                                                                                                                  Explicit dependencies with a single line update to each module file can be a forcing function for testing.

                                                                                                                                  1. 2

                                                                                                                                    but it also can tend to allow developers to make updates without making sure the changes are safe everywhere

                                                                                                                                    The Google solution is by pushing the checking of the safety of a change onto the team consuming it, not the one creating it.

                                                                                                                                    Changes are created using Rosie, and small commits are created with a review from a best guess as to who owns the code. Some Rosie changes wait for all people to accept; some don’t, and in general I’ve been seeing more of the latter. Rosie changes generally assume that if your tests pass, the change is safe. If a change is made and something breaks in your product, your unit tests needed to be better. If that breakage made it to staging, your integration tests needed to be better. If something got to production, you really have bigger problems.

                                                                                                                                    I generally like this solution. I have a very strong belief that during a refactor, it is not the responsibility of the refactor author to prove to you that it works for you. It’s up to you to prove that it doesn’t via your own testing. I think this applies equally to tiny changes in your own team up to gigantic monorepo changes.

                                                                                                                                  2. 1

                                                                                                                                    Assuming the update doesn’t contain breaking changes, shouldn’t this just happen in your CI/CD pipeline? And if it does introduce breaking changes, aren’t you going to need to update 25 individual apps anyway?

                                                                                                                                    1. 4

                                                                                                                                      aren’t you going to need to update 25 individual apps anyway?

                                                                                                                                      The breaking change could be a rename, or the addition of a parameter, or something small that doesn’t require careful modifications to 25 different applications. It might even be scriptable. Compare the effort of making said changes in one repo vs 25 repos and making a PR for each such change.
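For a rename like that, “scriptable” can be as blunt as a grep-and-sed pass over one checkout; a sketch (the `log.Warning`/`log.Warn` identifiers are invented, and `sed -i`/`xargs -r` assume GNU tools):

```shell
# Sketch of a mechanical, scriptable rename across a checkout.
# rename_all rewrites every .go file under the current directory.
rename_all() {
    # $1 = sed-escaped old pattern, $2 = replacement
    grep -rl "$1" --include='*.go' . | xargs -r sed -i "s/$1/$2/g"
}

# Demo in a scratch directory: rename log.Warning(...) calls to log.Warn(...).
demo=$(mktemp -d)
printf 'package main\n\nfunc main() { log.Warning("hi") }\n' > "$demo/main.go"
(cd "$demo" && rename_all 'log\.Warning(' 'log.Warn(')
grep 'log.Warn' "$demo/main.go"
```

In a monorepo that’s one command, one commit, one PR; across 25 repos it’s the same command plus 25 clones, 25 branches, and 25 PRs.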

                                                                                                                                      Now, maybe this just changes the threshold at which you make breaking changes, since the cost of fixing downstream is high. But there are trade offs there too.

                                                                                                                                      I truthfully don’t understand why we’re trying to wave away the difference in the effort required to make 25 PRs vs 1 PR. Frankly, in the way I conceptualize it, you’d be lucky if you even knew that 25 PRs were all you needed. Unless you have good tooling to tell you who all your downstream consumers are, that might not be the case at all!

                                                                                                                                      1. 1

                                                                                                                                        Here’s the thing: I shouldn’t need to know that there are 25 PRs that have to be sent, or even 25 apps that need to be updated. That’s a dependency management problem, and that lives in my CI/CD pipeline. Each dependent should know which version(s) it can accept. If I make any breaking changes, I should make sure I alter the versioning in such a way that older dependents don’t try to use the new version. If I need them to use my new version, then I have to explicitly deprecate the old one.

                                                                                                                                        I’ve worked in monorepos with multiple dependents all linking back to a single dependency, and marshalling the requirements of each of those dependents with the lifecycle of the dependency was just hell on Earth. If I’m working on the dependency, I don’t want to be responsible for the dependents at the same time. I should be able to mutate each on totally independent cycles. Changes in one shouldn’t ever require changes in the other, unless I’m explicitly deprecating the version of the dependency one dependent needs.

                                                                                                                                        I don’t think VCS is the right place to do dependency management.
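Concretely, with Go modules this is the kind of constraint each dependent would carry: a hypothetical go.mod (module paths and versions are invented) pinning shared packages by semver, so the library can move on without touching consumers still on an older release:

```
module example.com/ourorg/api

go 1.12

require (
    example.com/ourorg/database v1.4.0 // stays on 1.x until this service migrates
    example.com/ourorg/log v1.9.2
)
```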

                                                                                                                                        1. 3

                                                                                                                                          Round and round we go. You’ve just traded one problem for another. Instead of 25 repos needing to be updated, you now might have 25 repos using completely different versions of your internal libraries.

                                                                                                                                          I don’t want to be responsible for the dependents at the same time.

                                                                                                                                          I mean, this is exactly the benefit of monorepos. If that doesn’t help your workflow, then monorepos ain’t gunna fly. One example where I know this doesn’t work is in a very decentralized ecosystem, like FOSS.

                                                                                                                                          If you aren’t responsible for your dependents, then someone else will be. Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency. I mean sure, if that’s what your organization requires (like FOSS does), then you have to suck it up and do it. Otherwise, no, I don’t actually want to apply dependency management to every little thing.

                                                                                                                                          Your complaints about conflating VCS and dependency management ring hollow to me.

                                                                                                                                          1. 1

                                                                                                                                            I mean, again, this arises from personal experience: I’ve worked on a codebase where a dependency was linked via source control. It was an absolute nightmare, and based on that experience, I reached this conclusion: dependencies are their own product.

                                                                                                                                            I don’t think this is adding “dependency management to every little thing”, because dependency management is like CI: it’s a thing you should be doing all the time! It’s not part of the individual products, it’s part of the process. Running a self-hosted dependency resolver is like running a self-hosted build server.

                                                                                                                                            And yes, different products might be using different versions of your libraries. Ideally, nobody pinned to a specific minor release. That’s an anti-pattern. Ideally, you carefully version known breaking changes. Ideally, your CI suite is robust enough that regressions never make it into production. I just don’t see how different versions of your library being in use is a problem. Why on Earth would I want to go to every product that uses the library and update it, excepting show-stopping, production-critical bugs? If it’s just features and performance, there’s no point. Let them use the old version.

                                                                                                                                            1. 2

                                                                                                                                              You didn’t really respond to this point:

                                                                                                                                              Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency.

                                                                                                                                              You ask why it’s a problem to have a bunch of different copies of your internal libraries everywhere? Because it’s legacy code. At some point, someone will have to migrate its dependents when you add a new feature. But the point at which that happens can be delayed indefinitely until the very moment at which it is required to happen. But at that point, the library may have already gone through 3 refactorings and several breaking changes. Instead of front-loading the migration of dependents as that happens by the person making the changes, you now effectively have dependents using legacy code. Subsequent updates to those dependents now potentially fall on the shoulders of someone else, and it introduces surprise yak shaves. That someone else then needs to go through and apply a migration to their code if they want to use an updated version of the library that has seen several breaking changes. That person then needs to understand the breaking changes and apply them to their dependent. If all goes well, maybe this is a painless process. But what if the migration in the library resulted in reduced functionality? Or if the API made something impossible that you were relying on? It’s a classic example of someone not understanding all of the use cases of their library and accidentally removing functionality from users of their library. Happens all the time. Now that person who is trying to use your new code needs to go and talk to you to figure out whether the library can be modified to support original functionality. You stare at them blankly for several seconds as you try to recall what it is you did 6 months ago and what motivated it. But all of that would have been avoided if you were forced to go fix the dependent in the first place.

                                                                                                                                              Like I said, your situation might require one to do this. As I said above, which you seem to have completely ignored, FOSS is one such example of this. It’s decentralized, so you can’t realistically fix all dependents. It’s not feasible. But in a closed ecosystem inside a monorepo, your build doesn’t pass unless all dependents are fixed. Everything moves forward, code migrations are front loaded and nobody needs to spend any time being surprised by a necessary code migration.

                                                                                                                                              I experience both of these approaches to development. With a monorepo at work and lots of participation in FOSS. In the FOSS world, the above happens all the time exactly because we have a decentralized system of libraries that are each individually versioned, all supported by semver. It’s a great thing, but it’s super costly, yet necessary.

                                                                                                                                              Dependency management with explicit versioning is a wonderful tool, but it is costly to assign versions to things. Sometimes it’s required. If so, then great, do it. But it is most certainly not something that you “just do” like you do CI. Versioning requires some judgment about the proper granularity at which you apply it. Do you apply it to every single module? Every package? Just third party dependencies? You must have varying answers to these and there must be some process you follow that says when something should be independently versioned. All I’m saying is that if you can get away with it, it’s cheaper to make that granularity as coarse as possible.

                                                                                                                              1. 9

                                                                                                                                I switched to Visual Studio Code with Neovim backend yesterday. Neovim provides all the Ext functionality so you can :s/foo/bar to your heart’s content. It’s finally Good Enough to operate as a Vim without having to spend months tuning your .vimrc. I have been using Vim for 5+ years and wrote all my Go code in it.

                                                                                                                                I think this is what the future of Vim actually is for the majority of people: Neovim providing a backend for their preferred IDE. Interacting in a terminal is incredibly antiquated, even if it’s the sort of thing you are super used to. You can spend your time actually understanding and learning Vim, not trying to make Vim do what you think is reasonable/behaves like your previous editor of choice.

                                                                                                                                1. 5

                                                                                                                                  Despite being somewhat of a diehard vim fan, 99% of my ‘vim’ usage these days is via emulators - either in VS, VSCode or Firefox.

                                                                                                                                  For me the true value of vim is modal editing (and the associated muscle memory); the plugin ecosystem etc is fine (and at one point I spent a lot of time honing my plugin config) but there’s very little I miss.

                                                                                                                                  1. 2

                                                                                                                                    My experience is the same. I don’t even have gVIm installed on my workstation anymore, but I love love working with the vim extensions in VS, Code, and Firefox.

                                                                                                                                  2. 3

                                                                                                                                     Maybe some day an interface to Neovim will appear for Emacs; that would be a nice thing to happen. Perhaps I could start writing it, if I get a chance to learn elisp: Emacs as the extensible front end, with a proper modal backend. In fact the front end could be something better than Emacs; a Scheme implementation would be amazing, in order to preserve separation of concerns and provide users with a lightweight but infinitely extensible editing environment. If someone with adequate skills for this (I don’t have them at the moment, so I would have to invest some time learning) is willing to start such a project with me, I would be more than honored. If no interest is shown, I’ll eventually do it on my own.

                                                                                                                                    1. 3

                                                                                                                                      Check out Oni!

                                                                                                                                      1. 1

                                                                                                                                         Thanks for the recommendation, but I’m not interested in bringing the web to the desktop with the Electron framework; as exciting as it may be for many programmers, I think it is still a bad idea. Personally, I don’t think we need tens of MB in a program’s binary in order to do text editing, and JavaScript isn’t a coherent or well-defined enough language to justify its expansion onto servers and home computers; I think there are better alternatives. Nevertheless, if you like it and it solves your problems, then that’s all that matters in the end.

                                                                                                                                        1. 2

                                                                                                                                          I don’t actually use it - I use plain neovim in my terminal. I agree with you on the criticisms of electron - it’s just the only program of its kind that I’ve found.

                                                                                                                                          1. 2

                                                                                                                                             Sorry if I assumed something incorrect. Some of the ideas in Oni do seem interesting, and it would be worthwhile to have a look at the source code.

                                                                                                                                    2. 3

                                                                                                                                      Little off-topic, but what do you use to do that integration?

                                                                                                                                      1. 4

                                                                                                                                        The VSCode Vim plugin will do it out of the box, just check “Enable NeoVim”
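For reference, these appear to be the two settings involved (the nvim path is machine-specific and just an example; point it at your own binary):

```json
{
  "vim.enableNeovim": true,
  "vim.neovimPath": "/usr/local/bin/nvim"
}
```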

                                                                                                                                    1. 3

                                                                                                                                      It’d be good in general for commands that have an effect on the state of your filesystem to have a way to declare their inputs and their outputs for a given set of options. This way you’d be able to analyze e.g. install scripts to review which files would be read and written before even running it, and check if you’re missing anything the script depends on.

                                                                                                                                      1. 4

                                                                                                                                       They are unfortunately very chatty, though. Alias rm to rm -i and it’s just an obnoxious amount of y y y for any non-trivial removal. I wish someone would fix this with something more like a table output showing a reasonable summary of what’s going on, and let me confirm once.

                                                                                                                                        1. 4

                                                                                                                                          This is a powerful paradigm.

                                                                                                                                          Incremental confirmation can result in a state where the system has a half-performed operation when the operator decides to abort.

                                                                                                                                          Displaying a plan – and requiring confirmation of the entire plan – ensures that the operator intends to do everything, or intends to do nothing.
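A minimal sketch of that paradigm as an rm wrapper (the function name and prompt wording are invented): print the full plan, then ask exactly one question.

```shell
# rmplan: show everything that would be deleted, then confirm once.
rmplan() {
    printf '%s\n' "$@"                          # the plan: one path per line
    printf 'Remove all %d paths? [y/N] ' "$#"
    read -r answer
    [ "$answer" = "y" ] && rm -rf -- "$@"
}
```

Unlike an incremental rm -i session, aborting here leaves the system untouched rather than half-deleted.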

                                                                                                                                          1. 3

                                                                                                                                           The book Unix Power Tools, originally published in 1993, includes a recipe for the rm behavior you’re describing. It is surprising this feature hasn’t made it into coreutils sometime in the intervening 2 1/2 decades.

                                                                                                                                            1. 2

                                                                                                                                              I’ve worked on systems that enforce -i and it just made me develop a very bad -f habit.

                                                                                                                                          1. 6

                                                                                                                                            In the general case, I have developed a deep and long-lasting skepticism of DSLs. I was a very heavy proponent of them during my grad studies, and investigated pretty thoroughly using a rules engine for Space Invaders Enterprise Edition and a runtime monitor for Super Mario World.

                                                                                                                                            I went a little further down this path before I abandoned it for reasons unrelated to the DSL skepticism. That happened later. I just wanted to give context that I was actually naturally predisposed to liking them.

                                                                                                                                            What has happened in my time on this earth as a software engineer is the realization that it seems axiomatic that all DSLs eventually tend towards something Turing complete. New requirements appear, features are added, and the DSL heads further towards Turing completeness. Except the DSL does not have the fundamental mechanics to express Turing completeness; by design, it is supposed not to. What you end up with is something very complex, where users perform all sorts of crazy contortions to get the behavior they want, and you can never roll that back. I feel like DSLs are essentially doomed from the outset.

                                                                                                                                            I am much, much more optimistic about opinionated libraries as the means to solve the problems DSLs do (Ruby on Rails being the most obvious one). That way any of the contortions can be performed in a familiar language that the developer is happy to use and that won’t create crazy syntax, and the library is then called to do whatever limited subset of things it wants to support. Basic users will interact with the library only and won’t see the programming language. As things progress, the base language can be brought in to handle more complex cases as pre/post-processing by the caller, without infringing on the design of the library.
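To make the contrast concrete, here is a minimal sketch (not from the comment; `Query`, `where`, and `select` are made-up names) of the "opinionated library" style in Python: filtering is expressed with an ordinary Python callable, so when users need more power they reach for the host language rather than contorting a fixed DSL grammar.

```python
# Hypothetical "opinionated library" sketch: a tiny chainable query
# helper. The library stays small; anything it doesn't cover is
# handled in plain Python by the caller.
class Query:
    def __init__(self, rows):
        self._rows = list(rows)

    def where(self, predicate):
        # The filter condition is just a Python callable, so no
        # custom expression language is needed.
        return Query(r for r in self._rows if predicate(r))

    def select(self, *fields):
        # Project each row down to the requested fields.
        return [{f: r[f] for f in fields} for r in self._rows]

rows = [{"name": "a", "n": 1}, {"name": "b", "n": 2}]
result = Query(rows).where(lambda r: r["n"] > 1).select("name")
# result == [{"name": "b"}]
```

The point of the design is that the `lambda` could just as well be a loop, a regex, or a call into other libraries; the "DSL" never has to grow those features itself.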

                                                                                                                                            At Google, we have a number of DSLs to perform many different tasks which I won’t go into here. Each one requires a certain learning curve and a certain topping-out where you can’t express what you want. I was much happier with an opinionated library approach in Python, where I could do a great deal of what I wanted without peering behind the curtain of what was going to be performed.

                                                                                                                                            1. 6

                                                                                                                                              sklogic on Hacker News had a different view: you start with a powerful, Turing-complete language that supports DSLs, with the DSLs taking the place of libraries. He said he’d use DSLs for stuff like XML querying, Prolog where a logic approach makes more sense, Standard ML when he wants it type-safe in simple form, and, if all else fails or is too kludgy, he drops back into the LISP that hosts it all. He uses that approach to build really complicated tools like his mbase framework.

                                                                                                                                              I saw no problem with the approach. The 4GLs and DSLs got messy because they had to be extended toward more power. Starting with something powerful that you constrain where possible eliminates those concerns. Racket Scheme and REBOL/Red are probably the best examples. The Ivory language is an example of low-level programming done with Haskell DSLs. I have less knowledge of what Haskell’s DSLs can do, though.

                                                                                                                                              1. 3

                                                                                                                                                I think it’s a good approach, but it’s still hard to make sure that the main language hosting all the DSLs can accommodate all of their quirks. Lisp does seem to be an obvious host language, but if it were that simple then this approach would have taken off years ago.

                                                                                                                                                Why didn’t it? Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.

                                                                                                                                                And I agree that syntax isn’t just a detail; it’s an essential quality of the language. I think there are fundamental “information theory” reasons why certain syntaxes are better than others.

                                                                                                                                                Anything involving s-expressions falls down – although I know that sklogic’s system does try to break free of s-expression by adding syntax.

                                                                                                                                                Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs! DSLs have to be stable over time to be made “real” in people’s heads. If you just have a pile of Lisp code, there’s no real incentive for stability or documentation.

                                                                                                                                                1. 4

                                                                                                                                                  “but if it were that simple then this approach would have taken off years ago.”

                                                                                                                                                  It did. The results were LISP machines, Common LISP, and Scheme. Their users do little DSL’s all the time to quickly solve their problems. LISP was largely killed off by AI Winter in a form of guilt by association. It was also really weird vs things like Python. At least two companies, Franz and LispWorks, are still in Common LISP business with plenty of success stories on complex problems. Clojure brought it to Java land. Racket is heavy on DSL’s backed by How to Design Programs and Beautiful Racket.

                                                                                                                                                  There was also a niche community around REBOL, making a comeback via Red, transformation languages like Rascal, META II follow-ups like Ometa, and Kay et al’s work in STEPS reports using “IS” as foundational language. Now, we have Haskell, Rust, Nim, and Julia programmers doing DSL-like stuff. Even some people in formal verification are doing metaprogramming in Coq etc.

                                                                                                                                                  I’d say the idea took off repeatedly with commercial success at one point.

                                                                                                                                                  “Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.”

                                                                                                                                                  This is a good point. People also pointed out in other discussions with sklogic that each parsing method had its pros and cons. He countered that you can just use more than one. I think a lot of people don’t realize that today’s computers are so fast, and we have so many libraries, that this is a decent option. Especially if we use or build tools that autogenerate parsers from grammars.

                                                                                                                                                  So, IIRC, he would use one for raw efficiency first. If it failed on something, that something would get run through a parser designed for making error detection and messages. That’s now my default recommendation to people looking at parsers.
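That two-parser strategy can be sketched in a few lines. This is an illustrative toy (all names here are invented, and the "fast" and "diagnostic" parsers are stand-ins for real ones): the fast parser handles the common case, and only on failure is the input re-run through a slower parser built for good error messages.

```python
# Toy sketch of the two-pass parsing strategy: a speed-optimized
# parser first, with a fallback parser that produces precise
# diagnostics when the fast one fails.
def fast_parse(text):
    # Stand-in for a speed-optimized parser: parses a
    # comma-separated list of integers, or gives up entirely.
    try:
        return [int(tok) for tok in text.split(",")]
    except ValueError:
        return None

def diagnostic_parse(text):
    # Stand-in for a slower parser that pinpoints the bad token.
    for i, tok in enumerate(text.split(",")):
        try:
            int(tok)
        except ValueError:
            raise SyntaxError(f"token {i}: {tok!r} is not an integer")

def parse(text):
    result = fast_parse(text)
    if result is None:
        # Fast path failed: rerun only to produce a good message.
        diagnostic_parse(text)
    return result

parse("1,2,3")  # fast path succeeds: [1, 2, 3]
```

In a real system the two parsers would share a grammar so they cannot disagree about what is valid; the toy above glosses over that.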

                                                                                                                                                  “Anything involving s-expressions falls down – although I know that sklogic’s system does try to break free of s-expression by adding syntax.”

                                                                                                                                                  Things like Dylan, Nim, and Julia improve on that. There’s also just treating it like a tree with a tree-oriented language to manipulate it. A DSL for easily describing DSL operations.

                                                                                                                                                  “Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs!”

                                                                                                                                                  The fact that people can screw it up probably shouldn’t be an argument against it since they can screw anything up. The real risk of gibberish, though, led (per online commenters) a lot of teams using Common LISP to mandate just using a specific coding style with libraries and no macros for most of the apps. Then, they use macros just handling what makes sense like portability, knocking out boilerplate, and so on. And the experienced people wrote and/or reviewed them. :)

                                                                                                                                                  1. 2

                                                                                                                                                    Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.

                                                                                                                                                    Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs! DSLs have to be stable over time to be made “real” in people’s heads. If you just have a pile of Lisp code, there’s no real incentive for stability or documentation.

                                                                                                                                                    I’m so glad to see this put into words. Although for me, I find it frustrating that this seem to be universally true. I was pretty surprised the first time around when I felt my debugger was telling me almost nothing because my syntax was so uniform, I couldn’t really tell where I was in the source anymore!

                                                                                                                                                    Some possibilities I’m hoping for that could make this not universally true: maybe it’s like goto statements, and if we restrict ourselves to making DSLs in a certain way, they won’t become bad (or at least won’t become bad too quickly). By restricting the kinds of gotos we use (and presenting them differently), we still managed to keep the “alter control flow” aspect of goto.

                                                                                                                                                    Maybe there’s also something to be done for errors. Ideally, there’d be a way to spend time proportional to the size of the language to create meaningful error messages. Maybe by adding some extra information somewhere that is currently implicit in the language design.

                                                                                                                                                    I don’t know what to do about stability though. I mean you could always “freeze” part of the language I guess.

                                                                                                                                                    For this particular project, I’m more afraid that they’ll go the SQL route where you need to know so much about how the internals work that it mostly defeats the purpose of having a declarative language in the first place. I’d rather see declarative languages with well-defined succinct transformations to some version of the code that correspond to the actual execution.

                                                                                                                                                    1. 1

                                                                                                                                                      (late reply) Someone shared this 2011 essay with me, which has apparently been discussed to death, but I hadn’t read it until now. It says pretty much exactly what I was getting at!

                                                                                                                                                      http://winestockwebdesign.com/Essays/Lisp_Curse.html

                                                                                                                                                      In this essay, I argue that Lisp’s expressive power is actually a cause of its lack of momentum.

                                                                                                                                                      I said:

                                                                                                                                                      Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs!

                                                                                                                                                      So that is the “curse of Lisp”. Although he clarifies that they’re not just “bad” – there are too many of them.

                                                                                                                                                      He mentions documentation several times too.

                                                                                                                                                      Thus, they will have eighty percent of the features that most people need (a different eighty percent in each case). They will be poorly documented. They will not be portable across Lisp systems.

                                                                                                                                                      Domain knowledge is VERY hard to acquire, and the way you share that is by developing a stable and documented DSL. Like Awk. I wouldn’t have developed Awk on my own! It’s a nice little abstraction someone shared with me, and now I get it.

                                                                                                                                                      The “bipolar lisp programmer” essay that he quotes also says the same things… I had not really read that one either but now I get more what they’re saying.

                                                                                                                                                      1. 1

                                                                                                                                                        Thanks for sharing that link again! I don’t think I’ve seen it before, or at least have forgotten. (Some of the links from it seem to be broken unfortunately.)

                                                                                                                                                        One remark I have is that I think you could transmit information instead of code and programs to work around this curse. Implicit throughout the article is that collaboration is only possible if everyone uses the same language or a dialect of it; indeed, this is how version-controlled open-source projects are typically structured: around the source.

                                                                                                                                                        Instead, people could collaboratively share ideas and findings so everyone is able to (re)implemented it in their own DSL. I say a bit more on this in my comment here.

                                                                                                                                                        In my case, on top of documentation (or even instead of it), I’d like to have enough instructions for rebuilding the whole thing from scratch.

                                                                                                                                                        To answer your comment more directly

                                                                                                                                                        Domain knowledge is VERY hard to acquire, and the way you share that is by developing a stable and documented DSL

                                                                                                                                                        I totally agree that domain knowledge is hard to acquire, but I’m saying that this is only one way of sharing that knowledge once found. The other way is through written documents.

                                                                                                                                                2. 4

                                                                                                                                                  Since I like giving things names, I think of this as the internal DSL vs external DSL argument [1]. This applies to your post and the reply by @nickpsecurity about sklogic’s system with Lisp at the foundation. If there is a better or more common name for it, I’d like to know.

                                                                                                                                                  I agree that internal DSLs (ones embedded in a full programming language) are preferable because of the problems you mention.

                                                                                                                                                  The external DSLs always evolve into crappy programming languages. It’s “failure by success” – they become popular (success) and the failure mode is that certain applications require more power, so they become a programming language.

                                                                                                                                                  Here are my examples with shell, awk, and make, which all started out non-Turing-complete (even Awk) and then turned into programming languages.

                                                                                                                                                  http://www.oilshell.org/blog/2016/11/14.html

                                                                                                                                                  Ilya Sher points out the same problems with newer cloud configuration languages.

                                                                                                                                                  https://ilya-sher.org/2018/09/15/aws-cloudformation-became-a-programming-language/

                                                                                                                                                  I also worked at Google, and around the time I started, there were lots of Python-based internal DSLs (e.g. the build system that became Blaze/Bazel was literally a Python script, not a Java interpreter for a subset of Python).

                                                                                                                                                  This worked OK, but these systems eventually got rewritten because Python isn’t a great language for internal DSLs. The import system seems to be a pretty significant barrier. Another thing that is missing is Ruby-style blocks, which are used in configs like Vagrantfile and I think Puppet. Ruby is better, but not ideal either. (Off the top of my head: it’s large, starts up slowly, and has version stability issues.)
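For readers unfamiliar with the Ruby-blocks point: a Vagrantfile-style config is roughly `Vagrant.configure do |config| ... end`, and the closest idiomatic Python analogue is a context manager. Here is a hedged sketch (the `Config`/`configure` names are invented for illustration) of what that looks like on the Python side:

```python
# Sketch: approximating a Ruby-block-style config API with a Python
# context manager. The `with` body plays the role of the Ruby block.
from contextlib import contextmanager

class Config:
    def __init__(self):
        self.settings = {}

    def set(self, key, value):
        self.settings[key] = value

@contextmanager
def configure():
    cfg = Config()
    yield cfg
    # Validation or post-processing could run here, after the
    # user's block has finished populating cfg.

with configure() as config:
    config.set("box", "ubuntu/bionic64")
```

It works, but compared to Ruby the block is not a first-class value you can store and replay later, which is part of why Python feels less natural as an internal-DSL host.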

                                                                                                                                                  I’m trying to address some of this with Oil, although that’s still a bit far in the future :-/ Basically the goal is to design a language that’s a better host for internal DSLs than Python or Ruby.

                                                                                                                                                  [1] https://martinfowler.com/bliki/InternalDslStyle.html

                                                                                                                                                  1. 3

                                                                                                                                                    If a programming language is flexible enough, the difference between DSL and library practically disappears.

                                                                                                                                                    1. 1

                                                                                                                                                      DSL’s work great when the domain is small and stays small and is backed by corporal punishment. Business Software is an astronomically large domain.

                                                                                                                                                    1. 1

                                                                                                                                                      I use Google Books for a one-by-one search. Google Books is a digital search inside an analog book. I search for the phrase I think I want in the book I own, use the snippet to figure out which page I want, then find it in the book. Works very well.

                                                                                                                                                      I usually don’t have multiple books on the same topic, I don’t have that much space :)

                                                                                                                                                      1. 1

                                                                                                                                                        “the phrase I think I want in the book I own” -> this is interesting, why would you like something like that before you buy a book? I’m more like, curious to explore what the book gives me.

                                                                                                                                                        1. 1

                                                                                                                                                          I meant I already own the book. So I have a book, and I think “I think there’s a word or phrase in here that will get me what I want, but the index in this book is garbage”. So I use Google Books.

                                                                                                                                                      1. 1

                                                                                                                                                        I wrote my thesis in LaTeX, which got converted into a book. The publisher, who is technical (not O’Reilly) and you can probably find the book if you doxx me hard enough, made me convert it into Word so the editor could leave comments. The editor would never actually do the edits, just leave comment after comment, which drove me nuts.

                                                                                                                                                        It was not a good experience. I wish I could have just gotten a LaTeX template or Pandoc or something.

                                                                                                                                                        1. 1

                                                                                                                                                          I use paper a lot for kinda ephemeral stuff but by far the number one problem I have is that it prevents being able to always capture. If I’m away from my desk I now need a secondary capture system to get it onto that list.

                                                                                                                                                          This is a bit inevitable of course, I think you can’t avoid having more than one and doing some cleanup work. The trick I have at the moment when I’m away from my primary todo-capture systems is to use my phone to set a reminder to remind myself to write it down later.

                                                                                                                                                          For the “don’t keep a long time” thing, I tend to make daily to-do’s and review the previous day’s to-do’s to actively build the following one. This requires a lot of honesty about not doing something though (otherwise you end up aggregating dead to-do’s in your list). For some things that have been in the list too long I set a calendar event to come back to it in a week or two (time for me to accept failure/have a different perspective on the task).

                                                                                                                                                          1. 2

                                                                                                                                                            Yeah, for me it’s a bit different, I’m a kind of sitting-all-day guy, I do have a second list at home as well, the two lists for two kinds of tasks. So I guess, if you gonna need a list when you’re away from your desk, that list should have a different type of tasks from the one you have at your desk :D

                                                                                                                                                            1. 2

                                                                                                                                                              The Bullet Journal app on iOS is pretty good for this. It lets you add tasks… but it deletes them after 2 days. You have to put them in your book or they’re gone. So when you’re away from your desk you can enter stuff, but you have to jot it down if you care about it.

                                                                                                                                                            1. 5

                                                                                                                                                              I greatly enjoyed using Bullet Journaling. I tried Project Evo and bought 4 from the Kickstarter, and it’s kinda good, but I kinda regret it. I thought “Oh, I would use this monthly layout! Oh, I would use this weekly todo list” and I never do. I do like the gratitude/wellness prompts, but I could have come up with those myself. I get no particular value out of the app aspect.

                                                                                                                                              The forcing function of rewriting tasks is the clutch bit of Bullet Journaling. I used to put all my tasks in Inbox and snooze them, snooze them, snooze them, until I got anxious and deluged. I think I will go back to Bullet Journaling once I fill up my current Evo.

                                                                                                                                                              1. 3

                                                                                                                                                                Does anyone know any more about this? I’ve never heard of it and it seems very new, but there is already a BallerinaCon in July? Looks like it’s owned by WSO2 who I’ve never heard of before either.

                                                                                                                                                                1. 3

                                                                                                                                                  It has been about 3 years in development, but we really started talking about it earlier this year. The origins indeed lie in WSO2’s efforts in the integration space (WSO2 is an open-source integration company and had a research project on a code-first approach to integration). Ballerina is an open-source project - at this moment it has 224 contributors.

                                                                                                                                                  It is getting a lot of interest in the microservices and cloud-native (CNCF) space because it supports all the modern data formats and protocols (HTTP, WebSockets, gRPC, etc.), has native Docker and Kubernetes integration (it builds directly into a Docker image and K8s YAMLs), is type-safe, compiled, has parallel programming and distributed constructs baked in, etc.

                                                                                                                                                                  You can see lots of language examples in Ballerina by Example and Ballerina Guides.

                                                                                                                                                                  1. 2

                                                                                                                                                                    I actually posted this hoping someone would have more info. The language looks interesting and surprisingly far along to be so under the radar.

                                                                                                                                                                    1. 1

                                                                                                                                                                      The company seems to be based in Sri Lanka. It is nice to see cool tech coming from countries like that.

                                                                                                                                                                      1. 1

                                                                                                                                                                        The company seems to be based in Sri Lanka. It is nice to see cool tech coming from countries like that.

                                                                                                                                                                        The project has non-WSO2 contributors as well, and WSO2 also has offices in Mountain View, New York, São Paulo, London, and Sydney, but indeed Colombo (Sri Lanka) is the biggest office, so at the moment my guess would be that Ballerina is 90% from Sri Lanka - which indeed is a fantastic place! :)

                                                                                                                                                                    1. 1

                                                                                                                                                                      I think standups are all doomed to devolve into misery without an incredibly strong and dedicated hand leading and cutting people off. That would be step 1 for me: find someone who is willing to say “No, you’re standing up so you are as uncomfortable as we are… No your time is up…”

                                                                                                                                                                      I only find the first two or three sentences of what anyone is doing to be useful. The other thing that is useful is to know what problem anyone is wrestling with at the time in case someone knows how to help.

                                                                                                                                                                      If I were running my Iron Fist standups, I think I could get it down to 30s per person ;)

                                                                                                                                                                      1. 5

                                                                                                                                                                        Examples of major changes:

                                                                                                                                                                        generics?

                                                                                                                                                                        simplified, improved error handling?

                                                                                                                                                                        I am glad to see they are considering generics for Go2.

                                                                                                                                                                        1. 5

                                                                                                                                                                          Russ has more background on this from his Gophercon talk: https://blog.golang.org/toward-go2

                                                                                                                                                                          The TL;DR for generics is that Go 2 is either going to have generics or is going to make a strong case for why it doesn’t.

                                                                                                                                                                          1. 1

                                                                                                                                                                            As it should be…

                                                                                                                                                                            1. 1

                                                                                                                                                                              Glad to hear that generics are very likely on the way from someone on the Go team.

                                                                                                                                                                              The impression I got was that generics were not likely to be added without a lot of community push in terms of “Experience Reports”, as mentioned in that article.

                                                                                                                                                                              1. 1

                                                                                                                                                                                They got those :)

                                                                                                                                                                            2. 1

                                                                                                                                                                              Wouldn’t generic types change Go’s error handling too? I mean that when you can build a function that returns a Result<Something, Error> type, won’t you use that instead of returning Go1 “tuples” ?

                                                                                                                                                                              1. 5

                                                                                                                                                                                For a Result type, you either need boxing, or a sum type (or a union, with which you can emulate a sum type), or you pay the memory cost of carrying both the value and the error. It’s not automatic with generics.
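To make the tradeoff concrete, here is a sketch of what a generic Result might look like in Go, using the type-parameter syntax Go eventually shipped in 1.18. The names `Result`, `Ok`, `Err`, and `Unwrap` are illustrative, not any real library. Because Go has no sum types, the struct carries space for both the value and the error regardless of which one is actually set; boxing both behind an interface would instead trade that space for an allocation and a dynamic type check.

```go
package main

import (
	"errors"
	"fmt"
)

// Result is a hypothetical generic result type. Without sum types,
// there is no way to store "either a T or an error" in overlapping
// memory: this struct pays for both fields on every value.
type Result[T any] struct {
	value T
	err   error
}

// Ok wraps a successful value; the err field stays nil.
func Ok[T any](v T) Result[T] { return Result[T]{value: v} }

// Err wraps a failure; the value field stays at its zero value.
func Err[T any](e error) Result[T] { return Result[T]{err: e} }

// Unwrap converts back to Go's conventional (value, error) pair.
func (r Result[T]) Unwrap() (T, error) { return r.value, r.err }

func divide(a, b int) Result[int] {
	if b == 0 {
		return Err[int](errors.New("division by zero"))
	}
	return Ok(a / b)
}

func main() {
	if v, err := divide(10, 2).Unwrap(); err == nil {
		fmt.Println(v) // prints 5
	}
	if _, err := divide(1, 0).Unwrap(); err != nil {
		fmt.Println(err) // prints "division by zero"
	}
}
```

Note that the caller still ends up at a `(value, error)` pair in the end, which is part of why generics alone don't obviously replace Go's existing error-handling convention.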

                                                                                                                                                                                1. 1

                                                                                                                                                                                  I see, thanks for clarifying! :)

                                                                                                                                                                                2. 1

                                                                                                                                                                                  As I understand it, Go has multiple return values but does not have a tuple type, so I’m not sure how your example would work. There are some tickets open looking at improving the error handling, though.

                                                                                                                                                                              1. 5

                                                                                                                                                                                Google contributes surprisingly little back in terms of open source compared to the size of the company and the number of developers they have. (They do reciprocate a bit, but not nearly as much as they could.)

                                                                                                                                                                                For example, this is really visible in areas where they do research and/or set a standard, like compression algorithms (zopfli, brotli) and network protocols (HTTP/2, QUIC): the code and glue they release is minimal.

                                                                                                                                                                                It’s my feeling that Google “consumes”/relies on a lot more open source code than they contribute back to.

                                                                                                                                                                                1. 10

                                                                                                                                                                                  Go? Kubernetes? Android? Chromium? Those four right there are gargantuan open source projects.

                                                                                                                                                                                  Or are you specifically restricting your horizon to projects that aren’t predominantly run by Google? If so, why?

                                                                                                                                                                                  1. 11

                                                                                                                                                                                    I’m restricting my horizon for projects that aren’t run by Google because it better showcases the difference between running and contributing to a project. Discussing how Google runs open source projects is another interesting topic though.

                                                                                                                                                                                    Edit: running a large open source project for a major company is in large part about control. Contributing to a project where the contributor is not the main player running the project is more about cooperation and being a nice player. It just seems to me that Google is much better at the former than the latter.

                                                                                                                                                                                    1. 2

                                                                                                                                                                                      It would be interesting to attempt to measure how much Google employees contribute back to open source projects. I would bet that it is more than you think. When you get PRs from people, they don’t start off with, “Hey, so I’m an engineer at Google, here’s this change that we think you might like.” You’d need to go and check out their GitHub profile and rely on them listing their employer there. In other words, contributions from Google may not look like Contributions From Google, but might just look like contributions from some random person on the Internet.

                                                                                                                                                                                      1. 3

                                                                                                                                                                                        I don’t have the hat, but for the next two weeks (I’m moving teams) I am in Google’s Open Source office that released these docs.

                                                                                                                                                                                        We do keep a list of all Googlers who are on GitHub, and we used to have an email notification for patches that Googlers sent out, before our new policy of “If it’s a license we approve, you don’t need to tell us.” We also gave blanket approval once the first three patches to a given repo had been approved. It was ballpark 5 commits a day to non-Google code when we were monitoring, which would exclude the repos that had been given the 3+ approval. Obviously I can share these numbers because they’re all public anyway ;)

                                                                                                                                                                                        For reasons I can’t remember, we haven’t used the BigQuery datasets to track commits back to Googlers and get a good idea of where we are with upstream patches now. I know I tried myself, and it might be different now, but there was some blocker that prevented me from doing it.

                                                                                                                                                                                        I do know that our policies about contributing upstream are less restrictive than other companies, and Googlers seem to be happy with what they have (particularly since the approved licenses change). So I disagree with the idea that Google the company doesn’t do enough to upstream. It’s on Googlers to upstream if they want to, and that’s no different to any other person/group/company.

                                                                                                                                                                                        1. 2

                                                                                                                                                                                          So I disagree with the idea that Google the company doesn’t do enough to upstream.

                                                                                                                                                                                          Yeah, I do too. I’ve worked with plenty of wonderful people out of Google on open source projects.

                                                                                                                                                                                          More accurately, I don’t even agree with the framing of the discussion in the first place. I’m not a big fan of making assumptions about moral imperatives and trying to “judge” whether something is actually pulling its weight. (Mostly because I believe it’s unknowable.)

                                                                                                                                                                                          But anyway, thanks for sharing those cool tidbits of info. Very interesting! :)

                                                                                                                                                                                          1. 3

                                                                                                                                                                                            Yeah, sorry I think I made it sound like I wasn’t agreeing with you! I was agreeing with you and trying to challenge the OP a bit :)

                                                                                                                                                                                            Let me know if there’s any other tidbits you are interested in. As you can tell from the docs, we try to be as open as we can, so if there’s anything else that you can think of, just ping me on this thread or cflewis@google.com and I’ll try to help :D

                                                                                                                                                                                            1. 1

                                                                                                                                                                                              FWIW I appreciate the effort to shed some light on Google’s open source contributions. Do you think that contributions could be more systemic/coordinated within Google, though, as opposed to left to individual devs?

                                                                                                                                                                                              1. 1

                                                                                                                                                                                                Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?

                                                                                                                                                                                                It really depends on whether a patch needs to be upstreamed or not, I suppose. My gut feeling (and I have no data for this) and entirely personal and not representative of my employer opinion, is that teams as a whole aren’t going to worry about it if they can avoid it… often the effort to convince the upstream maintainers to accept the patch can suck up a lot of time, and if the patch isn’t accepted then that time was wasted. It’s also wasted time if the project is going in a direction that’s different to yours, and no-one really ever wants to make a competitive fork. It’s far simpler and a 100% guarantee of things going your way if you just keep a copy of the upstream project and link that in as a library with whatever patches you want to do.

                                                                                                                                                                                                The bureaucracy of upstreaming, of course, is working as intended. There does have to be guidance and care to accepting patches. Open source != cowboy programming. That’s no problem if you are, say, a hobbyist who is doing it in the evenings here and there, where timeframes and so forth are less pressing. But when you are a team with directives to get your product out as soon as you can, it generally isn’t something a team will do.

                                                                                                                                                                                                I don’t think this is a solved problem by any company that really does want to commit back to open source like Google does. And I don’t think the issue changes whether you’re a giant enterprise or a small mature startup.

                                                                                                                                                                                                This issue is also why you see so many more open source projects released by companies rather than companies working with existing software: you know your patches will be accepted (eventually) and you know it’ll go in your direction. It’s a big deal to move a project to community governance, as you then lose that guarantee.

                                                                                                                                                                                    2. 0

                                                                                                                                                                                      Chromium?

                                                                                                                                                                                      Have you ever tried to compile it?

                                                                                                                                                                                      1. 2

                                                                                                                                                                                        Yeah, and?

                                                                                                                                                                                        1. 0

                                                                                                                                                                                          How much time did it take? On what hardware?

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            90 minutes, on a mid-grade desktop from 2016.

                                                                                                                                                                                            1. 1

                                                                                                                                                                                              Cool! You should really explain to Google your build process!

                                                                                                                                                                                              And to everybody else, actually.

                                                                                                                                                                                              Because a long and convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                                                                                                                                              1. 1

                                                                                                                                                                                                Cool! You should really explain to Google your build process!

                                                                                                                                                                                                Google explained it to me actually. https://chromium.googlesource.com/chromium/src/+/lkcr/docs/linux_build_instructions.md#faster-builds

                                                                                                                                                                                                Because a long and convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                                                                                                                                                Is the implication that Google intentionally makes the build for Chromium slow? Chromium is a massive project and uses the best tools for the job and has made massive strides in recent years to improve the speed, simplicity, and documentation around their builds. Their mailing lists are also some of the most helpful I’ve ever encountered in open source. I really don’t think this argument holds any water.

                                                                                                                                                                                    3. 5

                                                                                                                                                                                      The amount Google invests in securing open source software basically dwarfs everyone else’s investment; it’s vaguely frightening. For example:

                                                                                                                                                                                      • OSS-Fuzz
                                                                                                                                                                                      • Patch Rewards for OSS projects
                                                                                                                                                                                      • Their work on Clang’s Sanitizers and libFuzzer
                                                                                                                                                                                      • Work on the kernel’s self-protection project and syzkaller
                                                                                                                                                                                      • Improvements to linux kernel sandboxing technologies, e.g. seccomp-bpf

                                                                                                                                                                                      I don’t think anyone else is close, either by number (and severity) of vulnerabilities reported or in proactive work to prevent and mitigate them.

                                                                                                                                                                                      1. 2

                                                                                                                                                                                        Google does care a lot about security and I know of plenty of positive contributions that they’ve made. We probably could spend days listing them all, but in addition to what you’ve mentioned project zero, pushing the PKI towards sanity, google summer of code (of which I was one recipient about a decade ago), etc all had a genuinely good impact.

                                                                                                                                                                                        OTOH Alphabet is the world’s second largest company by market capitalization, so there should be some expectation of activity based on that :)

                                                                                                                                                                                        Stepping out of the developer bubble, it is an interesting thought experiment to consider whether it would be worth trading every open source contribution Google ever made for changing the YouTube recommendation algorithm to stop promoting extremism. (Currently I’m leaning towards yes.)

                                                                                                                                                                                    1. 5

                                                                                                                                                                                      It is kind of funny how companies refuse to use software with any freedom restrictions, it is almost as if they know it is a bad thing to have done to you.

                                                                                                                                                                                      1. 4

                                                                                                                                                                                        FWIW Google does not refuse to use the GPL; it’s right there in the docs. Using the GPL and other restrictive licenses at Google does have some legal overhead involved, and most teams understandably don’t want to have to jump through those hoops. The non-restrictive licenses like MIT/BSD/Apache (Apache 2 being the license the vast majority of our projects use, because of the patent grant) just make staying compliant easier.

                                                                                                                                                                                        The Open Source team deeply cares about being compliant, not just because of the legal issues, but because it’s the right thing to do. It’s important to us as engineers who joined the team because we <3 open source in the first place that we do right by authors. I think it’s easy to think about these things as being created by faceless entities, and not realize that the people that staff these teams all had previous lives, and with our team, all of them released open source projects one way or another.

                                                                                                                                                                                        1. 4

                                                                                                                                                                                          The GPL is not restrictive, given that it starts from copyright and grants freedoms on top of that.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            The only time I ever wanted to use GPL was to restrict competition to software I was working on.

                                                                                                                                                                                        2. 1

                                                                                                                                                                                          Or they default to the opposite of the terms under which they acquire 3rd-party software when it comes to the software they build or license for their users, with careful exceptions for open source:

                                                                                                                                                                                          “Google gives you a personal, worldwide, royalty-free, non-assignable and non-exclusive license to use the software provided to you by Google as part of the Services. This license is for the sole purpose of enabling you to use and enjoy the benefit of the Services as provided by Google, in the manner permitted by these terms. You may not copy, modify, distribute, sell, or lease any part of our Services or included software, nor may you reverse engineer or attempt to extract the source code of that software, unless laws prohibit those restrictions or you have our written permission.

                                                                                                                                                                                          Open source software is important to us. Some software used in our Services may be offered under an open source license that we will make available to you. There may be provisions in the open source license that expressly override some of these terms.”

                                                                                                                                                                                          Right there Google tells you what kind of software is most valuable to them and what funds all the FOSS they support. Maybe that should be more developers’ default, too, if they can find a business model to pull it off. ;)