1. 7

    Go however is solely maintained within Google, and we all know what happens to most Google projects after a few years.

    This is not true. It’s mostly maintained by Google, but it’s an open source project with many contributors. Furthermore, it’s much more widely used than Rust.

    That may change in 5 years; in fact, I think it probably will, even though I really like Go.

    So the thing I don’t get about this: JavaScript and Rust are very different programming languages. I get why a C or C++ developer would want to get into Rust, or someone who is really interested in performance, maybe even a Java or C# developer, but it’s such a leap to go from JavaScript to Rust… Why in the world did you choose JavaScript in the first place?

    There was an idea here about frontend developers being able to get into backend development, as opposed to Java, which was so forbidding. Have we all collectively forgotten why Node came about in the first place? Because Rust isn’t an easy programming language to pick up.

    Don’t get me wrong. It’s really powerful, and one of these days I want to sit down and really figure it out.

    But it’s not easy.

    1. 21

      The title of the article is a heavy “editorialization” (more like misrepresentation) of the actual contents, a.k.a. bait and switch. The tl;dr quote from the actual text:

      Q: So when will these [generics, errors, etc.] actually land?

      A: When we get it right. [… Maybe, let’s] say two years from now […]

      Not the slightest mention of “coming to Go[lang] in 2019” for those.

      1. 3

        I agree. I clicked on it to see the specifics of how they did generics and modules. Especially generics since it was a contested issue.

        1. 4

          Modules are usable today: https://github.com/golang/go/wiki/Modules

          The additional work is turning it on by default and adding things like the central server.

          For generics the proposal is here: https://go.googlesource.com/proposal/+/master/design/go2draft-generics-overview.md

          And apparently there’s a branch you can test it with:

          https://github.com/golang/go/issues/15292#issuecomment-438880159

          1. 2

            Go modules are more “unusable” than “usable” in their current state. It looks to me like they didn’t solve any of the real problems with Go dependencies. They still don’t vendor local to the project by default, there is little or no CLI support for basic dependency management tasks (like adding new deps, removing old ones, updating existing ones), and there’s no support for environment-specific dependencies.

            At this point they just need to scrap it completely and rewrite it using Cargo and Yarn as examples of how to solve this problem. It’s frustrating that this is a solved problem for many languages, but not for Go.

            On the plus side, I think it speaks to the strength of the language that it has become so prolific despite such poor dependency management tooling.

            1. 3

              Completely disagree. I’ve converted several projects from dep and found modules a real pleasure. It’s fast and well thought out.

              Vendoring has major problems for large projects. It takes me 30 seconds to run any command in Docker on macOS because there are so many files in the repo I’m working on. Sure, that’s a Docker problem, but it’s a pain regardless.

              With modules you can get offline proxy-server capabilities without needing to vendor the dependencies in the repo itself, and if you really want vendoring, it’s still available. This is also something they are actively working on: a centralized, signed cache of dependencies.
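
              For instance (proxy URL hypothetical), you can point builds at an in-house module proxy instead of vendoring the dependencies:

                  GOPROXY=https://goproxy.internal go build ./...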

              Also, the go mod and go get commands can add and update dependencies. You can also change import paths this way. It’s under-documented but available (do go get some/path@someversion).
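
              Roughly, and with a hypothetical module path, that looks like:

                  go get github.com/some/dep@v1.2.3   # add a dependency, or move an existing one to that version
                  go get github.com/some/dep@master   # track a branch (recorded as a pseudo-version)
                  go mod tidy                         # drop requirements that are no longer imported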

              Not sure about env-specific dependencies… That’s not a situation I’ve heard of before.

              There are a lot of things Go got right about dependencies: a compiler with module capabilities built into the language, no centralized server, import paths as URLs so they were discoverable, and code as documentation, which made godoc.org possible.

              And FWIW, this isn’t a “solved” problem. Every other solution has problems too. Software dependency management is a hard problem, and any solution has trade-offs.

              1. 1

                I’m glad it’s working well for you! This gives me hope. I’m basing my feedback on the last time I messed around with Go modules, which was a few months ago, so it sounds like things have improved. Nevertheless, I think it’s a long way off from what it should be.

                By environment-specific dependencies, I’m referring to things like test and development libraries that aren’t needed in production.

      1. 18

        What a curious article. Let’s start with the style, such as calling some of the (perceived) advantages of a monorepo a “lie”. Welp, guess I’m a liar 🤷‍ Good way to have a conversation, buddy. Based on this article I’d say that working at Lyft will be as much fun as working at Uber.

        Anyway, we take a deep breath and continue, and it seems that everything is just handwaved away.

        Our organisation has about 25 Go applications, supported by about 20 common dependency packages. For example, we have packages log, database, cache, etc. Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps. It’s doable, but a lot of work. I expect that we’ll have 50 Go applications before the year is out.

        Monorepos exist exactly to solve problems like this. These problems are real, and can’t just be handwaved away. Yes, I can write (and have written) tools to deal with this to some extent, but it’s hard to get right, and in the end I’ve still got 25 PRs to juggle. The author is correct that tooling for monorepos also needs to be written, but it seems to me that that tooling will be a lot simpler and easier to maintain (Go already does good caching of builds and tests out of the box, so we just have to deal with deploys). In particular, I find it very difficult to maintain any sense of “overview” because everything is scattered over 25 PRs.

        Note that the total size of our codebase isn’t even that large. It’s just distributed over dozens of repos.

        It’s still a difficult problem, and there is no “one size fits all” solution. If our organisation still had just one product in Go (as we did when we started out three years ago), then the current polyrepo approach would continue to suffice. It still worked mostly okay when we expanded to two and three products. But now that we’ve got five products (and probably more on the way), it’s getting harder and harder to manage things. I can write increasingly advanced tooling, but that’s not really something I’m looking forward to.

        I’m not sure how to solve it yet; for us, I think the best solution will be to consolidate our 20 dependency packages into a single one and to consolidate each application’s services into their own repo, so we’ll end up with 6 repos.

        Either way, the problems are real, and people who look towards monorepos aren’t all stupid or liars.

        1. 4

          I would imagine that if all you use is Go, and nothing much else, then you are in the monorepo “sweet spot” (especially if your repo size isn’t enormous). From what I understand, Go was more or less designed around the Google-internal monorepo workflow, at least until Go 1.10/1.11 or so (six years after Go 1.0?).

          It makes me wonder…

          • Are there other languages that seem to make monorepo style repos easier?
          • Are monorepos harder/worse if you have many apps written in multiple disparate languages?
          1. 7

            The main issue with monorepos (IMO) is that lots of existing tools assume you are not using one (e.g. GitHub webhooks, CI providers, VCS support for partial worktrees, etc.). Not an issue at Google scale, where such tools are managed (or built) in-house.

            1. 3

              This point isn’t made enough in the monorepo debate. The cost of a monorepo isn’t just the size of the checkout; it’s also all of the tooling you lose by using something non-standard. TFA mentioned some of it, but even things like git log become problematic.

              1. 2

                Is there a middle ground that scopes the tooling better? What I mean is: keep your web app and related backend services in their own monorepo, assuming they aren’t built on drastically different platforms and you desire standardisation and alignment. Then keep your mobile apps in separate repos, unless you are using some cross-platform framework which permits a mobile monorepo. You get the benefits of the monorepo for what is possibly a growing set of services that need to be refactored together, while not cluttering git log et al. with completely unrelated changes.

                1. 2

                  Sort of. What really matters is whether you end up with a set of tools that work effectively. For small organizations, that means polyrepos, since you don’t often have to deal with cross-cutting concerns and you don’t want to build / self-host tools.

                  Once you grow to be a large organization, you start frequently making changes which require release coordination, and you have the budget to set up tools to meet your needs.

            2. 4

              Interesting; Go, in my experience, is one of the places where I have seen the most extreme polyrepo/microservice setups. I helped a small shop of 2 devs with 50+ repos. One of the devs was a new hire…

            3. 0

              Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps.

              What exactly is the concern here? Project ownership within an org? I fail to see how a monorepo is different from having commit access to all the repos for everyone. PRs to upstream externally? Doesn’t make a difference either.

              1. 3

                The concern is that it’s time-consuming and clumsy to push updates. If I update e.g. the database package, I will need to update that for 25 individual apps, and then create and merge 25 individual PRs.

                1. 3

                  The monorepo helps with this issue, but it can also be a bit insidious. The dependency is a real one, and any updates to it need to be tested. It’s easier to push the update to all 25 apps in a monorepo, but it can also tempt developers to make updates without making sure the changes are safe everywhere.

                  Explicit dependencies, with a single-line update to each module file, can be a forcing function for testing.
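
                  Concretely (module path and versions hypothetical), that forcing function is the bump each consuming app makes in its own go.mod:

                      require example.com/platform/database v1.5.0 // was v1.4.0; bumped deliberately, after running this app’s tests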

                  1. 2

                    but it also can tend to allow developers to make updates without making sure the changes are safe everywhere

                    The Google solution is to push the checking of a change’s safety onto the team consuming it, not the one creating it.

                    Changes are created using Rosie, and small commits are created with a review from a best guess as to who owns the code. Some Rosie changes wait for all people to accept; some don’t, and in general I’ve been seeing more of that. Rosie changes generally assume that if your tests pass, the change is safe. If a change is made and something broke in your product, your unit tests needed to be better. If that break made it to staging, your integration tests needed to be better. If something got to production, you really have bigger problems.

                    I generally like this solution. I have a very strong belief that during a refactor, it is not the responsibility of the refactor author to prove to you that it works for you. It’s up to you to prove that it doesn’t via your own testing. I think this applies equally to tiny changes in your own team up to gigantic monorepo changes.

                  2. 1

                    Assuming the update doesn’t contain breaking changes, shouldn’t this just happen in your CI/CD pipeline? And if it does introduce breaking changes, aren’t you going to need to update 25 individual apps anyway?

                    1. 4

                      aren’t you going to need to update 25 individual apps anyway?

                      The breaking change could be a rename, or the addition of a parameter, or something small that doesn’t require careful modifications to 25 different applications. It might even be scriptable. Compare the effort of making said changes in one repo vs 25 repos and making a PR for each such change.
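
                      For a mechanical rename, something like gofmt’s rewrite flag can do most of the work across a single checkout (names hypothetical):

                          gofmt -r 'OldDial(x) -> NewDial(x)' -w .

                      In a monorepo you run that once; in a polyrepo setup it’s once per repo, plus a PR each time.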

                      Now, maybe this just changes the threshold at which you make breaking changes, since the cost of fixing downstream is high. But there are trade offs there too.

                      I truthfully don’t understand why we’re trying to wave away the difference in the effort required to make 25 PRs vs 1 PR. Frankly, in the way I conceptualize it, you’d be lucky if you even knew that 25 PRs were all you needed. Unless you have good tooling to tell you who all your downstream consumers are, that might not be the case at all!

                      1. 1

                        Here’s the thing: I shouldn’t need to know that there are 25 PRs that have to be sent, or even 25 apps that need to be updated. That’s a dependency management problem, and that lives in my CI/CD pipeline. Each dependent should know which version(s) it can accept. If I make any breaking changes, I should make sure I alter the versioning in such a way that older dependents don’t try to use the new version. If I need them to use my new version, then I have to explicitly deprecate the old one.

                        I’ve worked in monorepos with multiple dependents all linking back to a single dependency, and marshalling the requirements of each of those dependents with the lifecycle of the dependency was just hell on Earth. If I’m working on the dependency, I don’t want to be responsible for the dependents at the same time. I should be able to mutate each on totally independent cycles. Changes in one shouldn’t ever require changes in the other, unless I’m explicitly deprecating the version of the dependency one dependent needs.

                        I don’t think VCS is the right place to do dependency management.

                        1. 3

                          Round and round we go. You’ve just traded one problem for another. Instead of 25 repos needing to be updated, you now might have 25 repos using completely different versions of your internal libraries.

                          I don’t want to be responsible for the dependents at the same time.

                          I mean, this is exactly the benefit of monorepos. If that doesn’t help your workflow, then monorepos ain’t gunna fly. One example where I know this doesn’t work is in a very decentralized ecosystem, like FOSS.

                          If you aren’t responsible for your dependents, then someone else will be. Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency. I mean sure, if that’s what your organization requires (like FOSS does), then you have to suck it up and do it. Otherwise, no, I don’t actually want to apply dependency management to every little thing.

                          Your complaints about conflating VCS and dependency management ring hollow to me.

                          1. 1

                            I mean, again, this arises from personal experience: I’ve worked on a codebase where a dependency was linked via source control. It was an absolute nightmare, and based on that experience, I reached this conclusion: dependencies are their own product.

                            I don’t think this is adding “dependency management to every little thing”, because dependency management is like CI: it’s a thing you should be doing all the time! It’s not part of the individual products, it’s part of the process. Running a self-hosted dependency resolver is like running a self-hosted build server.

                            And yes, different products might be using different versions of your libraries. Ideally, nobody pins to a specific minor release; that’s an anti-pattern. Ideally, you carefully version known breaking changes. Ideally, your CI suite is robust enough that regressions never make it into production. I just don’t see how different versions of your library being in use is a problem. Why on Earth would I want to go to every product that uses the library and update it, excepting show-stopping, production-critical bugs? If it’s just features and performance, there’s no point. Let them use the old version.

                            1. 2

                              You didn’t really respond to this point:

                              Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency.

                              You ask why it’s a problem to have a bunch of different copies of your internal libraries everywhere? Because it’s legacy code. At some point, someone will have to migrate its dependents when you add a new feature. But the point at which that happens can be delayed indefinitely, until the very moment at which it is required to happen. By that point, the library may have already gone through 3 refactorings and several breaking changes.

                              Instead of front-loading the migration of dependents, done by the person making the changes as they happen, you now effectively have dependents using legacy code. Subsequent updates to those dependents now potentially fall on the shoulders of someone else, and it introduces surprise yak shaves. That someone else then needs to go through and apply a migration to their code if they want to use an updated version of the library that has seen several breaking changes. That person then needs to understand the breaking changes and apply them to their dependent. If all goes well, maybe this is a painless process.

                              But what if the migration in the library resulted in reduced functionality? Or if the API made something impossible that you were relying on? It’s a classic example of someone not understanding all of the use cases of their library and accidentally removing functionality from users of their library. Happens all the time. Now that person who is trying to use your new code needs to go and talk to you to figure out whether the library can be modified to support the original functionality. You stare at them blankly for several seconds as you try to recall what it is you did 6 months ago and what motivated it. But all of that would have been avoided if you were forced to go fix the dependent in the first place.

                              Like I said, your situation might require one to do this. As I said above, which you seem to have completely ignored, FOSS is one such example of this. It’s decentralized, so you can’t realistically fix all dependents. It’s not feasible. But in a closed ecosystem inside a monorepo, your build doesn’t pass unless all dependents are fixed. Everything moves forward, code migrations are front loaded and nobody needs to spend any time being surprised by a necessary code migration.

                              I experience both of these approaches to development: a monorepo at work, and lots of participation in FOSS. In the FOSS world, the above happens all the time, exactly because we have a decentralized system of libraries that are each individually versioned, all supported by semver. It’s a great thing, but it’s super costly, yet necessary.

                              Dependency management with explicit versioning is a wonderful tool, but it is costly to assign versions to things. Sometimes it’s required. If so, then great, do it. But it is most certainly not something that you “just do” like you do CI. Versioning requires some judgment about the proper granularity at which you apply it. Do you apply it to every single module? Every package? Just third party dependencies? You must have varying answers to these and there must be some process you follow that says when something should be independently versioned. All I’m saying is that if you can get away with it, it’s cheaper to make that granularity as coarse as possible.

              1. 2

                I still can’t determine how Go modules would make my life measurably better than plain old GOPATH. 🤷🏼‍♂️

                1. 4

                  One thing I always liked about GOPATH and using VCS for dependency fetching is how easy it is to make changes to your dependencies: just cd over to the easy known path of your dep, make a change there, rebuild your project, then push your change up from your dep. Great! There’s no need to go find the git repo and clone it and then mess with your manifests to use local sources like you’d need to do with Rubygems/Bundler, or participate in symlink hanky-panky like with NPM.

                  But the downside is that when you need to work on two different projects that require different versions of a dependency, you need either two GOPATHs, or some other tacked on dependency management scheme using vendor. Either way means an additional tool for you (and your team) that you need to version and install on workstations and train people to use, which I see as an annoyance.

                  With go modules, it’s still very easy to get a version-controlled copy of a dependency (just clone the module URL), and then to use that copy when you build a specific project (by specifying the path of your local clone in your go.mod file). And you don’t need any tools other than go to do it.
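
                  A minimal sketch of that last part (module paths hypothetical), using the replace directive in go.mod:

                      module example.com/myapp

                      require github.com/some/dep v1.2.3

                      // Build against a local working copy of the dependency while developing a change.
                      replace github.com/some/dep => ../dep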

                  1. 4

                    i was critical about the improvements at first, too, but they work really well for me where i’ve used them. the design is well thought out, and migration from existing systems isn’t hard (fully automatic in my case, though only small code bases). the checksums for modules (go.sum) are also nice to have. it pays off that the go team seems to read many papers and take their time with things.

                    i still manage my sources like in the GOPATH with full paths, which i’ve found very useful (and do so for other languages). i always know the url to the repository :)

                    1. 2

                      Were you a Go user at the time when code.google.com got closed down? Vendoring is a way to protect against events like this, or against packages being taken down or moved on GitHub/… by their authors. Modules and the planned global mirrors take this idea further, making the resulting ecosystem more globally interoperable than vendoring alone. While code.google.com may still exist in your GOPATH, there’s no easy way you could share code that uses it with others.

                      Also, modules introduce a somewhat controlled way of upgrading code & dependencies. Especially dependencies of dependencies, and further down the rabbit hole.

                      1. 1

                        Or an npm user when left-pad got deleted? Vendoring would’ve avoided that problem, too. Plus, you can still work / deploy when the npm registry is down.

                        1. 1

                          Yarn does have an “offline mirror” feature that is now one of the more significant distinguishing features from npm.

                          https://yarnpkg.com/blog/2016/11/24/offline-mirror/

                      2. 2

                        If you ever need a non-master branch of a dependency, modules will help.

                      1. 38

                        Wow. Microsoft engineer complains about “some seemingly anti-competitive practices done by Google’s Chrome team”. Now that is some piquant irony.

                        Also, the page’s YouTube video appears to be blocked. Icing on the cake?

                        1. 37

                          …one of the reasons we decided to end EdgeHTML was because Google kept making changes to its sites that broke other browsers, and we couldn’t keep up…

                          I can appreciate the schadenfreude of Microsoft’s new position, but this is a pretty legitimate concern, especially if Google is/was doing that intentionally. What we need is a good incentive for Google to care about web standards and performance in non-Chrome browsers, but this move by Microsoft only drives things in the opposite direction.

                          1. 12

                            I don’t know if it’s intentional or not, but I am almost never able to complete reCAPTCHAs in Firefox. It just keeps popping up ridiculous ones, like traffic lights that are on the border of three squares, and it keeps showing the same unsolvable ones for 2-3 minutes until I get tired of it or locked out, and just use Chrome to log in, where somehow I always get sane ones and it lets me in on the first try. Anyone else had the same?

                            This video sums it up very well, although not Firefox specific: https://www.youtube.com/watch?v=zGW7TRtcDeQ

                            (Btw I don’t use Tor, or public VPNs or any of the like.)

                            1. 4

                              This is known to happen: Blocking via an unsolvable CAPTCHA.

                              1. 1

                                Ha! Thanks for this, I won’t keep trying anymore :)

                            2. 17

                              Especially if Google is/was doing that intentionally.

                              I disagree that intention has anything to do with it. We have to judge these kinds of situations by the effect they have on the web, not on good feelings.

                              1. 7

                                Spot on. Intent only matters when it is ill. Even if not intended, the outcome is what matters materially.

                                1. 6

                                  One reason intention matters: if the intention is to handicap Edge, then it’s probably not serving some other purpose that’s good for all of us. If handicapping Edge is a side-effect of some real benefit, that’s just a fact about the complexity of the web (it might still be a bad decision, but there are trade-offs involved).

                                2. 7

                                  OK, let’s put aside the schadenfreude as best we can and examine the consequences. I think it’s fair to assume, for the sake of argument, that Alphabet Inc. absolutely will do everything in its power, dirty tricks included, to derive business value from its pseudo-monopolist position. If Microsoft were to dig in their heels and ship a default browser for their desktop OS that didn’t play YouTube videos as well as Chrome does, would that harm Alphabet, or just Microsoft at this point?

                                  I don’t really understand your talk of “a good incentive”. Think of it this way: what incentive did Google, an advertising company, ever have to build and support a web browser in the first place? How did this browser come to its current position of dominance?

                                  1. 15

                                    Google built a web browser because Microsoft won the browser wars and did nothing with IE for 10 years.

                                    Their entire suite of products were web based and their ability to innovate with those products was severely hampered by an inadequate platform.

                                    https://googleblog.blogspot.com/2008/09/fresh-take-on-browser.html

                                    Chrome was revolutionary when it was released and many of the web technologies we take for granted today could never have happened without it.

                                    I’m not entirely thrilled with everything it led to, but whatever their motives now, Google had good reasons to build Chrome in the first place.

                                    1. 23

                                      I’m sure whichever visionaries were at Google at that point had great reasons to build Chrome. But Google isn’t the same company anymore, and Chrome doesn’t mean what it once meant.

                                      “You could not step twice into the same river.” —Heraclitus

                                      1. 11

                                        That’s certainly ONE factor. The other is that Chrome by default makes “address bar” and “search bar” the same thing, and sends everything you type into the search bar to Google.

                                        Same as Google Maps, or Android as a whole. I often navigate with Google Maps while driving. The implication is that Google knows where I live, where I work, where I go for vacation, where I eat, where I shop. This information has a monetary value.

                                        If there is something Google does that is not designed to collect information on its users that can be turned into ad revenue, that something will eventually be shut down.

                                        1. 9

                                          “This information has a monetary value.”

                                          Exactly. They are trying to build accurate profiles of every aspect of people’s and businesses’ existences. Their revenue per user can go up as they collect more information for their profiles. That gives them an incentive to build new products that collect more data, always by default. Facebook does the same thing; revenue per user climbed for them year after year, too. I’m not sure where the numbers currently are for these companies, though.

                                        2. 8

                                          Google built a web browser because Microsoft won the browser wars and did nothing with IE for 10 years.

                                          No, that was Mozilla. They, together with Opera, were fighting IE’s stagnation, and by 2008 had achieved ~30% share, which arguably made Microsoft notice. Chrome entered a world that was already multi-browser at that point.

                                          Also, business-wise, Google needed Chrome as a distribution engine; it had nothing to do with fighting browser wars purely for the sake of users.

                                          1. 1

                                            I’m not entirely sure what you mean by a distribution engine. For ads? Or for software?

                                            I think business motives are extremely hard to discern from actions. I think you could make the argument that Google has been trying for years to diversify their business, mostly unsuccessfully, and around 2008 maybe they envisioned office software (spreadsheets, document processing, etc.) as the next big thing. Gmail was a surprise hit, and maybe they thought they could overthrow Microsoft’s dominance in the field. But they weren’t about to start building desktop software, so they needed a better browser to do it.

                                            Or maybe they built it so that Google would be the default search engine for everyone so they could serve more ads?

                                            Or maybe some engineers at Google really were interested in improving performance and security, built a demo of it, and managed to convince enough people to actually see it through?

                                            I realize the last suggestion may sound helplessly naive, but having worked as an engineer in a company where I had a lot of say in what got worked on, my motives were often pretty far afield of any pure business motive. I got my paycheck regardless, and sometimes I fixed a bug or made something faster because it annoyed me. I imagine there are thousands of employees at Google doing the same thing every day.

                                            Regardless, the fact remains that the technology they built for Chrome has significantly improved the user experience. The reason Chrome is now so dominant is because it was better. Much better when compared to something like IE6.

                                            And even ChromeOS is better than the low-price computing it competes with. Do you remember eMachines? They were riddled with junk software and viruses, rendering them almost completely useless. A $100 Chromebook is such a breath of fresh air compared to that experience.

                                            I realize there’s a cost to this, and I get why there’s a lot of bad press about Google, but I don’t think we need to rewrite history about it. I think we’re all better off with Google having created Chrome (even if I don’t agree with many of the things they’re doing now).

                                            1. 5

                                              The reason Chrome is now so dominant is because it was better.

                                              There are two reasons why Chrome became so dominant:

                                              • Google makes deals with OEMs to ship Chrome by default on new desktops and laptops. Microsoft cannot stop them because of historical antitrust regulations.

                                              • Google advertised Chrome on their search page (which happens to be the most popular web page in the world) whenever someone using another browser visited it. It looks like they’ve stopped, though, since I just tried searching with Google from Firefox and didn’t get a pop-up.

                                        3. 3

                                          The incentive to play fair would come from Google not wanting to lose the potential ad revenue from users of non-Chrome browsers if it deliberately sabotaged its own products in those browsers. Not trying to imply that EdgeHTML was the solution to that problem or that it would somehow be in Microsoft’s best interest to stick with it, just that its loss further cements Google’s grip on the web, and that’s a bad thing.

                                          1. 3

                                            All the user knows is “browser A doesn’t seem to play videos as well as browser B”. In general they can’t even distinguish server from client technologies. All they can do about it, individually, is switch browsers.

                                            Now that Alphabet has cornered the market, their strategy should be obvious. It’s the same as Microsoft’s was during the Browser Wars. The difference is, Alphabet made it to the end-game.

                                            1. 1

                                              The end game being anti-trust action? I’m not following your line of argument. Are you examining that particular consequence?

                                              1. 2

                                                The antitrust case against Microsoft ended up with not much happening, and that was 18 years ago. Do you have much confidence that there is likely to be an effective antitrust action against Google?

                                                1. 1

                                                  I’m not the one making a case here.

                                                  Your interpretation[1][2] of how a single historical case went doesn’t change the fact that antitrust action is bad for a company’s long-term prospects and short-term stock price. The latter should directly matter to current leadership. Companies spend a reasonable amount of time trying to not appear anti-competitive. @minimax is utterly ignoring that consequence of “dirty tricks”.

                                                  [1] https://www.nytimes.com/2018/05/18/opinion/microsoft-antitrust-case.html illustrates the opposite perception. [2] https://arstechnica.com/tech-policy/2010/09/the-eternal-antitrust-case-microsoft-versus-the-world is more balanced, and points out the effect of the lawsuit on Microsoft PR, leadership focus and product quality.

                                              2. 1

                                                What makes Chrome’s position more of an end-game than what IE had in the early 2000s?

                                                1. 4

                                                  You’re looking at it wrong. The question you really need to consider is:

                                                  What makes Google’s position more of an end-game than what Microsoft had in the early 2000s?

                                                  Microsoft was the dominant OS player, but the Internet itself was undergoing incredible growth. What’s more, no one existed solely within what Microsoft provided.

                                                  Today, the Internet is essentially the OS for many (most?). People exist in a fully vertically integrated world built by Google: operating system, data stored in their cloud, documents written in their editor and emails sent through their plumbing… all of it run by the world’s most profitable advertising company, which has built itself mountains of data to mine for better advertisements.

                                                  Microsoft in the 00’s could only dream of having that.

                                                  1. 4

                                                    Your assessment of Google today strikes me as not completely unreasonable, although it does neglect the fact that only a small fraction of Internet users live so completely in Google’s stack; I suspect far more people just use Android and Chrome and YouTube on a daily basis but don’t really use Gmail or GSuite (Docs, etc.) very frequently, instead relying on WhatsApp and Instagram a lot more.

                                                    And back in the 2000s there was definitely a large group of people who just used Windows, IE, Outlook, Hotmail, MSN & MS Office to do the vast majority of their computing. So it’s not as different as you seem to believe. Except now there are viable competitors to Google in the form of Facebook & Apple, in a way that nobody competed with MS back then.

                                                    1. 2

                                                      So it’s not as different as you seem to believe.

                                                      It’s incredibly different.

                                                      When I used IE5 Microsoft’s tactic was to bundle it with Windows and use Microsoft-specific APIs to boost its performance, killing Netscape. If I used Chrome today, I’d find dark UI patterns are used to ensure my data is captured.

                                                      Similarly, Office/Outlook/Windows in 2000 didn’t mine the files I was working on to enrich an advertising profile that would follow me across the internet. If memory serves, while Hotmail did serve advertisements, they were based on banner advertisements / newsletters generated by Microsoft, and not contextually targeted.

                                                      The real risk here, I believe, is in both the scope and ease of understanding what’s happening today versus what Microsoft did. Microsoft’s approach was to make money by being the only software you run, and they’d use any trick they could to achieve that - patently anticompetitive behavior included.

                                                      Google, on the other hand… at this point I wonder if they’d care if 90% of the world ran Firefox as long as the default search engine was Google. I think their actions are far more dangerous than those of Microsoft because they are much wider reaching and far more difficult for regulators to dig into.

                                                      I suspect far more people just use Android and Chrome and YouTube on a daily basis but don’t really use Gmail or GSuite (Docs, etc.) very frequently, instead relying on WhatsApp and Instagram a lot more.

                                                      Even if we take that as a given, this means most people are sending:

                                                      • their location
                                                      • the videos and pictures from their phone’s camera
                                                      • their search history
                                                      • a list of content they watched

                                                      up to Google.

                                                      1. 1

                                                        Your assessment that Chrome is only a means to an end, the end being to have people continue using Google’s web search, seems dead on. But then you follow that up with a claim that doesn’t seem to logically follow at all.

                                                        The reach of Google now relative to Microsoft 15 years ago is lower as a fraction of total users; it only seems higher because the absolute number of total users has grown so much.

                                                        1. 3

                                                          Doesn’t this depend on how you define a “user”, though? Google has a grip on search that would be the envy of IBM back in the day. Android is by far the most popular operating system for mobile phones, if not for computing devices writ large. They pay for Mozilla because they can harvest your data through Firefox almost as easily as via Chrome, and they prop up a competitor, in case the US DOJ ever gets their head out of their ass and starts to examine the state of the various markets they play in.

                                                          1. 2

                                                            Depends on how narrowly you define “search”, too; do you include all the searches people conduct directly on Amazon or Facebook or Siri?

                                                          2. 1

                                                            The reach of Google now relative to Microsoft 15 years ago is lower as a fraction of total users; it only seems higher because the absolute number of total users has grown so much.

                                                            Android’s global smartphone penetration is at 86% in 2017[1]. And while the “relative reach” might be lower, the absolute impact of the data being Hoovered up is significant. In 2000, annual PC sales hit 130 million per the best figures I could find[2] … that’s less than a tenth of smartphone sales in 2017 alone.

                                                            What does it matter that Google’s relative reach is lower when they control nearly 9/10 smartphones globally and proudly boast over two billion monthly active devices?

                                                            1. 1

                                                              The level of control isn’t directly comparable. Microsoft sold Windows licenses for giant piles of money while Google licenses only the Play Store and other apps that run on Android. Android in China is a great example of the difference, although I guess Microsoft probably lost revenue (but not control over the UX) there via piracy.

                                              3. 1

                                                Think of it this way: what incentive did Google, an advertising company, ever have to build and support a web browser in the first place?

                                                Is this a real question asked in good faith? Maybe it’s just been a long day but I really can’t tell.

                                                1. 2

                                                  I was going for Socratic. You’re quite welcome to assume anything you like about the goodness of my faith.

                                                  1. 1

                                                    Got it - always happy to play along with Mr. Socrates ;) I mostly wanted to be sure I wasn’t just biting on an obvious troll.

                                            2. 11

                                              That’s just a picture of a blocked YouTube video to emphasize their point.

                                            1. 2

                                              A multi-process model works just fine if you use the right IPC primitives. I wonder if those were available for the author at the time.

                                              1. 1

                                                What are the right IPC primitives?

                                                1. 1

                                                  I think a named pipe with shared memory would be very fast. (http://anil.recoil.org/papers/drafts/2012-usenix-ipc-draft1.pdf)

                                                  Just a guess though based on the fact that Chrome uses multiple processes for each tab and it doesn’t seem to hinder it.

                                                  Now that I think about it, it’s likely the serialization/deserialization that would be particularly painful. But with shared memory you might be able to avoid that (your message would be a short “render this memory location”).

                                                  This approach would also require careful synchronization… you wouldn’t want the first process to modify an object the second process was rendering.

                                              1. 2

                                                How has Rust solved the callback problem of async/await? To explain what I mean, imagine you have an array of URLs and you want to open each of them in turn, download their contents, then do something with those contents. If you wanted to iterate over an array normally you’d do something like urls.foreach(|url| process(url.get()));.

                                                But what happens when that get is async? Don’t you have to duplicate all your functions to take async functions? You need an async_foreach that takes an async function, etc. etc.

                                                Now I know that in Rust, iterators are used, but essentially the same problem still exists, just a little swapped around. In most ‘async/await’ systems you can’t call async functions from normal functions. I would assume the same is true of Rust.

                                                Compare this to a system like Go’s, where there aren’t special ‘async’ functions and you don’t need to rewrite every bit of code that takes a callback to also have a version that takes an async callback and calls it asynchronously.

                                                1. 3

                                                  Depends on whether you want to fetch each url in parallel or in series.

                                                  Go does it in series ‘by default’ and provides goroutines to do it in parallel.

                                                  From my understanding, you can do something like foreach process(await url.get()) to run in series, or join(foreach(|url| await process_async(url.get()))) to do it in parallel (where process_async accepts a future and returns another).

                                                  This is also how JavaScript does it. You don’t need a special version of foreach since futures are just another type.

                                                  You should also be able to generate a future-ized version of a function automatically with macros (e.g. automatically converting a function of type PageHtml => Result into Future&lt;PageHtml&gt; => Future&lt;Result&gt;).

                                                  1. 2

                                                    But you still need a process and process_async?

                                                    If so, this post illustrates the problem: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ .

                                                    Go avoids it by having a high-level M:N userspace thread scheduler, so there’s no need for separate versions of each function (basically everything is async). It’s not a zero-cost abstraction, though, so it wouldn’t make sense for Rust (it’s a lot like a garbage collector in that regard).
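
                                                    As a rough sketch of what that looks like in Go (URLs and names made up), the same ordinary process function is used both sequentially and concurrently, with no async-flavored duplicate:

                                                        package main

                                                        import (
                                                            "log"
                                                            "net/http"
                                                            "sync"
                                                        )

                                                        // process is an ordinary function; nothing in its signature marks it as async.
                                                        func process(url string) {
                                                            resp, err := http.Get(url) // blocks this goroutine; the runtime schedules others meanwhile
                                                            if err != nil {
                                                                log.Println(err)
                                                                return
                                                            }
                                                            resp.Body.Close()
                                                            log.Println(url, resp.Status)
                                                        }

                                                        func main() {
                                                            urls := []string{"https://example.com/a", "https://example.com/b"}

                                                            // In series: just call it.
                                                            for _, u := range urls {
                                                                process(u)
                                                            }

                                                            // In parallel: wrap the very same function in goroutines.
                                                            var wg sync.WaitGroup
                                                            for _, u := range urls {
                                                                wg.Add(1)
                                                                go func(u string) {
                                                                    defer wg.Done()
                                                                    process(u)
                                                                }(u)
                                                            }
                                                            wg.Wait()
                                                        }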

                                                    I’m not sure what a better solution would be. At least async/await cleans up the code.

                                                    1. 1

                                                      If zero-cost abstraction is the goal, would it be possible (in any reasonable amount of time) to make everything async-by-default but have the compiler remove the asynchrony where it’s not needed?

                                                      Go’s approach seems quite brilliant as an actual real-world, existing & working solution. I think Crystal follows its lead on this.

                                                      1. 2

                                                        I don’t think it can be done without introducing a custom scheduler. Channel, network and time operations can be handled intelligently (described in more detail here: https://dave.cheney.net/2015/08/08/performance-without-the-event-loop)

                                                        So a blocking call might be a normal OS thread wait, or it might be put into a network event loop (kqueue, epoll, etc.); the code looks uniform, but there’s a lot of magic behind the scenes.

                                                        It’s very clever, but there are costs. Occasionally there have been major performance issues from unexpected edge cases. I’m not sure how often people run into those anymore, but it can happen. And there is definitely overhead and less control when it comes to the scheduling. Though in theory the scheduler will probably do a better job scheduling than whatever the vast majority of programmers would come up with.

                                                        I think using futures with async/await allows you to do something very similar if you use an async io library: https://manishearth.github.io/blog/2018/01/10/whats-tokio-and-async-io-all-about/

                                                        Though the part I don’t understand is how you do selection.

                                                        1. 1

                                                          Interesting thought (making things async by default) but I guess one would need to be more specific. If done manually, unnecessary async should be optimized away. Rust with the current proposals, of course, distinguishes between future construction and execution which is sometimes nice.

                                                        2. 1

                                                          You write a process function, and use a macro to future-ize it when you pass it to foreach.

                                                          1. 1

                                                            You need process_async if processing itself needs asynchronous calls. Yes, that also means that if you have to abstract and one of the implementations uses async calls, the interface also needs to be async. That said, it will be incredibly easy to implement an async interface with sync code.

                                                            The only time this is a problem is when you have a sync integration point in a library that you do not control and you need to do async calls. You can still, of course, simply block the calling thread until your processing is done.

                                                            What you get in Rust is really nice guarantees that you do not have in Go, less overhead, and easy cancellation. That said, as with GC, the Go model makes some things more uniform and even more easily composable; and you have stack traces, awesome (I mean it)! Rust manages to avoid a (larger) runtime and to have more safety guarantees nevertheless. Quite a feat.

                                                        3. 2

                                                          You need async_foreach because each variant is genuinely different. Parallel, sequential, fire-and-forget vs. collect-and-wait: these are all different semantics, and all valid. So it’s not accidental complexity that the variant you want must be specified when choosing the foreach, since it evaluates to the result.

                                                          On the other hand, an interesting way around this is to use coroutines and explicit scheduling (like in Lua), so you do schedule(function() process(...) end). You do have to explicitly wait after the foreach if you want waiting semantics.

                                                          1. 1

                                                            When get is async, it returns a future that must be polled to completion (or cancelled, though I’m not sure what the cancellation story is for Rust futures). This can be done one at a time, blocking the current thread, or each can be registered on an Executor, such as an epoll-backed thread pool.

                                                            If using async/await, then you can call your example inside an async function, and either await each URL or select between the ready ones.

                                                            Disclaimer: this is my current understanding, but it may be incomplete or inaccurate, as I’ve spent very little time with Rust futures.

                                                          1. 7

                                                            When I’ve had time over the past week or two, I’ve been working on my blog series on decreasing the cost of running a personal k8s cluster. Just yesterday, I wrapped up a post on utilizing Reserved Instances and Spot Instances to decrease the overall bill by ~40%. As I have time this week, I’ll work on decreasing/stabilizing costs by running a single External Load Balancer via Ingress and optimizing Kops’ default EBS volume allocations.

                                                            I also just finished reading Google’s Site Reliability Workbook, so I’ll hopefully have some time at work to think about applying some of its ideas.

                                                            1. 1

                                                              It’s really funny for me reading some of your comments on reducing k8s costs, because you’re talking about reducing $160/mo by 40%, and my personal nerd-golf is to try to reduce my cloud budget to < $10/mo (free tier doesn’t count)

                                                              (I’m not deriding your hobby, just appreciating how it’s so similar in intent but so different in scope than my own)

                                                              1. 1

                                                                Haha yeah, definitely wouldn’t recommend running a personal Kubernetes cluster if cost-savings is a predominant concern :) I think for me, the experience I’m gaining with Kubernetes/Cloud Native, and the fun I’m having working with it, justifies the extra cost.

                                                              2. 1

                                                                I got one down to $5 a month on GKE: http://www.doxsey.net/blog/kubernetes--the-surprisingly-affordable-platform-for-personal-projects.

                                                                It helps that they run the control plane for free, but I still had to run my own load balancer, since the built-in one is $18/mo.

                                                                Digital Ocean will have a managed k8s soon that might be great for personal clusters, but I haven’t tried it yet.

                                                                1. 1

                                                                  Very cool! Saved your post for later to give it a good look :)

                                                              1. 7

                                                                My usual desktop

                                                                I’m on OS X, and use a tiling window manager called ChunkWM with a hotkey daemon.

                                                                There’s NeoVim with a variety of syntax/editing plugins on top right, a currently active PDB session on bottom right, various IRC channels and servers via Weechat on bottom left, and Mutt top left.

                                                                With the combination of hotkey daemon, window manager, and the variety of CLI-based tools I use, I essentially never use my trackpad/mouse. Not shown is Firefox with a vim-like set of keybindings so that I can navigate with the keyboard.

                                                                I’m also a huge motorsports fan, so the wallpaper usually rotates between various Formula 1 or World Endurance Championship scenes.

                                                                1. 4

                                                                    How do you find ChunkWM? I’ve batted around the idea of installing it on my Mac at home, because I am a fan of tiling, keyboard-driven wms when forced to spend time in X; but I worry that it’d end up being a case of fighting the platform, a neither fish-nor-fowl hybrid that manages to combine the worst of both worlds.

                                                                  1. 4

                                                                      It’s actually not that bad. There are a few configuration-level things that you need to set up to get everything working smoothly, but once it’s going I really don’t have to touch it. I’m actually a bit lost when I need to use a computer that is not my own, due to all the built-up muscle memory from the skhd hotkeys.

                                                                    But, it does have some flaws.

                                                                    1. When switching between single monitor & multiple monitors, sometimes windows don’t reposition themselves correctly and I have to hide all windows and then bring them to the foreground to get the chunkwm daemon to recognize them and resize them. It’s not a big deal, but it can be jarring the first time it happens to you. Also, this seems to have almost disappeared in the most recent versions of ChunkWM.
                                                                      2. When resizing windows, you can sometimes see redraw artifacts (edit: on further thought, this might be an issue with iTerm2 - I don’t ever see redraw artifacts on non-console windows). You can even see that in the screenshot I took (it looks like an extra letter in the self parameter in the top right window, first line). The artifacts disappear when the window in question has to redraw itself again for some reason (e.g. you typed some text), but it is supremely annoying.
                                                                    3. The default keybindings in skhd might be a tad annoying for non-English keyboards. I know when I need to type French accented characters, I have to go a circuitous route due to the use of the option key as the main skhd modifier.
                                                                    4. Some menubar applications will need to be added to the chunkwmrc config file as a “do-not-touch”, since chunkwm tries to tile floating menubar windows that appear, and it really just goes a bit nuts. This seems to have been resolved in the most recent versions of chunkwm, but I’m still a bit wary about it.

                                                                    Overall, though, for software that is at 0.4.x level of completeness, I’m very happy with it, and deal with the warts because the productivity it provides me is worth so much.

                                                                    The author of the software has gone through a few iterations of building these hotkey daemons & window managers for OS X, and seems to have taken a lot of knowledge and experience from past implementations.

                                                                  2. 2

                                                                    Another tiling window manager: https://www.spectacleapp.com/.

                                                                    1. 1

                                                                      +1 for Spectacle

                                                                  1. 6

                                                                      This is actually a solid tutorial. I’ve recently been wondering whether Kubernetes might suit us after all, because what we do in practice never seemed to fit it very well: instances of our app that need to run on separate infra, often even in different regions, currently running on single servers. Translating that to Kubernetes pretty much seemed like we would have to map each of those servers we have now to an individual Kubernetes cluster.

                                                                    But managing a bunch of Kubernetes clusters doesn’t seem any worse than managing a bunch of individual servers? And if we’re already running on single servers, we could turn them into single node Kubernetes clusters for roughly the same price, with GKE masters being free.

                                                                      GKE definitely has an advantage in terms of pricing here. We’re an AWS shop, but EKS is priced at $0.20 per hour for the master, on top of your node costs. That’s instantly ~$150 per month added to your bill.

                                                                    1. 2

                                                                      Translating to Kubernetes pretty much seemed like we would have to map each of those servers we have now to individual Kubernetes clusters.

                                                                      You can assign pods to specific nodes in a single kubernetes cluster quite easily. https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

                                                                      1. 2

                                                                        By default some metadata is associated with each node, for example the region and availability zone. Using that information you can provide an affinity to target only a certain region, or make sure pods are distributed across availability zones.

                                                                        You can also add custom taints to nodes, and then add a toleration to a pod to make sure it runs where you want it to.
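
                                                                         For example, here is a rough sketch using the Go API types (k8s.io/api/core/v1); the zone label and the “dedicated” taint below are illustrative, not anything from the post:

                                                                             // Illustrative only: pin a pod to a zone via node affinity and let it
                                                                             // tolerate a hypothetical "dedicated=batch" taint.
                                                                             package main

                                                                             import (
                                                                                 "fmt"

                                                                                 corev1 "k8s.io/api/core/v1"
                                                                             )

                                                                             func main() {
                                                                                 zoneTerm := corev1.NodeSelectorTerm{
                                                                                     MatchExpressions: []corev1.NodeSelectorRequirement{{
                                                                                         Key:      "failure-domain.beta.kubernetes.io/zone",
                                                                                         Operator: corev1.NodeSelectorOpIn,
                                                                                         Values:   []string{"us-east-1a"},
                                                                                     }},
                                                                                 }

                                                                                 spec := corev1.PodSpec{
                                                                                     Affinity: &corev1.Affinity{
                                                                                         NodeAffinity: &corev1.NodeAffinity{
                                                                                             RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                                                                                                 NodeSelectorTerms: []corev1.NodeSelectorTerm{zoneTerm},
                                                                                             },
                                                                                         },
                                                                                     },
                                                                                     Tolerations: []corev1.Toleration{{
                                                                                         Key:      "dedicated",
                                                                                         Operator: corev1.TolerationOpEqual,
                                                                                         Value:    "batch",
                                                                                         Effect:   corev1.TaintEffectNoSchedule,
                                                                                     }},
                                                                                 }
                                                                                 fmt.Printf("%+v\n", spec)
                                                                             }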

                                                                         At Datadog we built a custom controller (similar to the IP-sync code in the blog post) which, when handed a custom resource definition, would create a node pool with the requested constraints (node types, SSDs, etc.), thus allowing developers to also describe the hardware requirements for their application.

                                                                        Paired with the horizontal pod autoscaler and the cluster autoscaler you can go a long way to automating fairly sophisticated deployments.

                                                                        1. 1

                                                                          But everything I can find about Kubernetes (in the cloud) is that you start it in a single region. Am I missing something? Can you span a Kubernetes cluster across multiple regions, or somehow treat it as one large cluster?

                                                                          1. 2

                                                                            Yeah that’s true. I think the etcd latency wouldn’t play well multi-region.

                                                                            You could still label the nodes and apply the same config in several Kubernetes clusters; in the clusters without matching nodes, the workload just wouldn’t run.

                                                                            Of course, then you’re going to have the issue that services in one cluster need to talk to services in another. Kubernetes has federation support, but I hear it’s a work in progress. Istio might be worth a look though.

                                                                      1. 4

                                                                          I love k8s because I love having a bunch of YAML files I can just apply and have it work, but GKE’s pricing for 4 cores and 8 gigs of RAM was like 2 or 3 billion dollars a month I think, so I went back to crappy scripts and Digital Ocean. Really hope DO’s Kubernetes offering ends up being good, because using Kubernetes is wonderful, but administering it isn’t something I want to do for little side projects.

                                                                        1. 3

                                                                          You could also use Typhoon if you want something better than scripts. It also supports DO.

                                                                          1. 1

                                                                             A 3-node (n1-standard-1) Kubernetes cluster is ~$72/month. You can even get a 1-node k8s cluster, but then you don’t have all the benefits discussed in the OP. Although 3 nodes is still a light cluster, it gives you some benefits that you’d not have with 3 crappy servers managed by configuration management (although those would still be cheaper).

                                                                            1. 1

                                                                               Google has a sustained use discount. I think a 4-core, 15GB machine is $100/mo. So on the low end it’s cheaper than Digital Ocean, but the price ramps up quickly for more computing power. (Also, preemptible nodes are cheaper if you can live with your server disappearing every day.)

                                                                              I suppose it depends on what you’re trying to do. Their burst instances work well for web apps, especially if you can cut down on memory usage.

                                                                              Some competition from digital ocean would be great. I’d probably switch if the price were competitive.

                                                                            1. 3

                                                                              The whole vgo project seems somewhat surreal. It pulled the rug out from under dep, which everyone thought was on its way to becoming the official way to do dependencies.

                                                                              And yet I’m not sure I’ll really miss it. Having worked with it for the last 6 months: it’s god-awful slow, local vendoring doesn’t really work well with git, download proxy servers are a nice idea that never got a plausible implementation, and private repos are challenging. It feels half-finished and forgotten, with no traction in months.

                                                                              sboyer didn’t even have the time to actually write down his objections.

                                                                              And how is it that a single developer manages to supplant an entire open source project on his own? And even more, how does he manage to do that and produce something better?

                                                                              I played with vgo. It’s fast, works well with the existing go tool, is dead simple to use, and has a robust proxy solution to avoid a left-pad incident. Somehow it manages to preserve Go’s distributed dependency model.
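
                                                                              For anyone who hasn’t tried it, this is roughly all a module needs (the module path and the dependency here are made up):

                                                                                  module example.com/myservice

                                                                                  require github.com/pkg/errors v0.8.0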

                                                                              I really don’t know what to make out of it all.

                                                                              1. 4

                                                                                I left a comment on the blog: (his reply was surprisingly hostile…)

                                                                                Check out https://github.com/ungerik/pkgreflect for a way to reflect packages.

                                                                                Go leaves many of these problems to be solved at build time rather than at runtime.

                                                                                Your point about generics is actually about inheritance. You might be the first person I’ve seen defend classical inheritance with virtual methods.

                                                                                Virtual methods turn out to cause a lot of problems, the primary one being that they’re slow, which is why C# decided to make virtual opt-in instead of the default.

                                                                                But from a language-design perspective, inheritance often ends up producing very confusing code: where something is implemented can be hard to track down, and developers tend to make assumptions about how something is implemented, which later ties your hands as the implementor of the base class.

                                                                                In general in OO code you should hide everything by default and use virtual with great care. It’s this realization which drives the reasoning behind discarding the concept entirely. It’s just not all that useful in practice.
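
                                                                                A small sketch of the Go-style alternative, composition plus a narrow interface instead of a base class with virtual methods (the names are made up):

                                                                                    // Made-up example: the "overridable" behaviour is an interface, and the
                                                                                    // concrete implementation is supplied by embedding, not inheritance.
                                                                                    package main

                                                                                    import "fmt"

                                                                                    type Notifier interface {
                                                                                        Notify(msg string)
                                                                                    }

                                                                                    type EmailNotifier struct{ Addr string }

                                                                                    func (e EmailNotifier) Notify(msg string) { fmt.Println("email to", e.Addr+":", msg) }

                                                                                    // Service gains Notify by delegating to the embedded Notifier.
                                                                                    type Service struct {
                                                                                        Notifier
                                                                                    }

                                                                                    func main() {
                                                                                        s := Service{Notifier: EmailNotifier{Addr: "ops@example.com"}}
                                                                                        s.Notify("deploy finished")
                                                                                    }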

                                                                                I also think getting rid of the ternary operator was good: it leads to easier-to-understand code, for a very slight cost in verbosity (surprisingly slight… Rob Pike once said developers think a newline is 100x more expensive than any other character).

                                                                                Also, I like git imports. They make it really easy to find docs and the original source code. Distribution of code is a separate issue from its name, and a centralized packaging system is a liability.

                                                                                1. 3

                                                                                  Kubernetes uses ConfigMaps: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

                                                                                  They are basically YAML/JSON properties which can be sent to containers in various ways:

                                                                                      env:
                                                                                      - name: LOG_LEVEL
                                                                                        valueFrom:
                                                                                          configMapKeyRef:
                                                                                            name: env-config
                                                                                            key: log_level
                                                                                  

                                                                                  Kubernetes handles the rollout of changes, and since a lot of infrastructure tasks are pre-defined (like routing from one service to another, à la Istio), there are a lot fewer one-off config changes that you need to make. ConfigMaps support literals, files and directories. You can also do secrets: https://kubernetes.io/docs/concepts/configuration/secret/
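
                                                                                  On the consuming side it’s just an ordinary environment variable; a minimal sketch, assuming the env wiring above:

                                                                                      // Minimal sketch: the ConfigMap value wired in via `env` above arrives in
                                                                                      // the container as a plain environment variable.
                                                                                      package main

                                                                                      import (
                                                                                          "fmt"
                                                                                          "os"
                                                                                      )

                                                                                      func main() {
                                                                                          level := os.Getenv("LOG_LEVEL")
                                                                                          if level == "" {
                                                                                              level = "info" // fall back when the variable isn't set
                                                                                          }
                                                                                          fmt.Println("log level:", level)
                                                                                      }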

                                                                                  1. 2

                                                                                    Go 1.9 introduced type aliases:

                                                                                      package main

                                                                                      import "fmt"

                                                                                      type callback = func(int) bool
                                                                                    
                                                                                    func filter(xs []int, f callback) []int {
                                                                                    	var filtered []int
                                                                                    	for _, x := range xs {
                                                                                    		if f(x) {
                                                                                    			filtered = append(filtered, x)
                                                                                    		}
                                                                                    	}
                                                                                    	return filtered
                                                                                    }
                                                                                    
                                                                                    func main() {
                                                                                    	fmt.Println(filter([]int{1, 2, 3, 4}, func(x int) bool {
                                                                                    		return x < 3
                                                                                    	}))
                                                                                    }
                                                                                    

                                                                                    C# has delegate types:

                                                                                      using System;
                                                                                      using System.Collections.Generic;

                                                                                      public delegate bool callback(int x);

                                                                                      public static class Program {
                                                                                          public static int[] filter(int[] xs, callback cb) {
                                                                                              var filtered = new List<int>();
                                                                                              foreach (int x in xs) {
                                                                                                  if (cb(x)) {
                                                                                                      filtered.Add(x);
                                                                                                  }
                                                                                              }
                                                                                              return filtered.ToArray();
                                                                                          }

                                                                                          public static void Main() {
                                                                                              int[] xs = {1, 2, 3, 4};
                                                                                              foreach (int x in filter(xs, x => x < 3)) {
                                                                                                  Console.Write(x + " ");
                                                                                              }
                                                                                              Console.WriteLine();
                                                                                          }
                                                                                      }
                                                                                    
                                                                                    1. 2

                                                                                      Excellent point, but the actual parameters still end up being structurally typed. The formal parameters get named as instances of the type, but the actual values when constructed are not declared to be of that type.

                                                                                      That is, in your first example, I could do something like this:

                                                                                      func foo(i int) bool {
                                                                                          ...
                                                                                      }
                                                                                      
                                                                                      filter(int_array, foo)
                                                                                      

                                                                                      The function foo was never explicitly declared to be of type callback, but rather assignment/passing was allowed because foo met the structural requirements of the callback type.

                                                                                      I think the answer to my question may be “no, it’s not possible to reasonably have function types in a purely nominative type system” though that just rubs me the wrong way.
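
                                                                                      For what it’s worth, the same thing happens even with a defined (non-alias) function type; a quick sketch of the assignability rule at work:

                                                                                          // Sketch: even a defined (non-alias) function type accepts any function
                                                                                          // with the identical underlying signature, because Go's assignability is
                                                                                          // structural when one side's type is unnamed.
                                                                                          package main

                                                                                          import "fmt"

                                                                                          type callback func(int) bool // defined type, not an alias

                                                                                          func filter(xs []int, f callback) []int {
                                                                                              var out []int
                                                                                              for _, x := range xs {
                                                                                                  if f(x) {
                                                                                                      out = append(out, x)
                                                                                                  }
                                                                                              }
                                                                                              return out
                                                                                          }

                                                                                          // foo is never declared to be a callback...
                                                                                          func foo(i int) bool { return i < 3 }

                                                                                          func main() {
                                                                                              // ...but it is accepted anyway: foo's type is an unnamed func(int) bool,
                                                                                              // which is assignable to callback.
                                                                                              fmt.Println(filter([]int{1, 2, 3, 4}, foo))
                                                                                          }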

                                                                                    1. 2

                                                                                      I’m trying to understand what you are after with the “single executable” part?

                                                                                      1. 2

                                                                                        Self-contained. For the most part controversy I guess? :-)

                                                                                        1. 5

                                                                                          Right. That controversy you may have. I guess we have rather differing interpretations of self-contained.

                                                                                          $ file start.sh

                                                                                          start.sh: POSIX shell script, ASCII text executable
                                                                                          

                                                                                          $ file target/hprotostuffdb-rjre

                                                                                          target/hprotostuffdb-rjre: ELF 64-bit LSB executable
                                                                                          

                                                                                          $ grep JAR start.sh

                                                                                          JAR=comments-all/target/comments-all-jarjar.jar
                                                                                          $BIN $PORT comments-ts/g/user/UserServices.json $ARGS\
                                                                                            $PUBSUB $ASSETS -Djava.class.path=$JAR comments.all.Main
                                                                                          

                                                                                          $ objdump -p target/hprotostuffdb-rjre |grep RPATH

                                                                                          RPATH                $ORIGIN:$ORIGIN/jre/lib/amd64/server
                                                                                          

                                                                                          $ objdump -p target/hprotostuffdb-rjre |grep NEEDED

                                                                                          NEEDED               libpthread.so.0
                                                                                          NEEDED               libjvm.so
                                                                                          NEEDED               libcrypto.so.1.0.0
                                                                                          NEEDED               libssl.so.1.0.0
                                                                                          NEEDED               libz.so.1
                                                                                          NEEDED               libstdc++.so.6
                                                                                          NEEDED               libgcc_s.so.1
                                                                                          NEEDED               libc.so.6
                                                                                          

                                                                                          $ find . -name '*so'

                                                                                          ./target/jre/lib/amd64/server/libjsig.so
                                                                                          ./target/jre/lib/amd64/server/libjvm.so
                                                                                          ./target/jre/lib/amd64/libzip.so
                                                                                          ./target/jre/lib/amd64/libnet.so
                                                                                          ./target/jre/lib/amd64/libjava.so
                                                                                          ./target/jre/lib/amd64/libnio.so
                                                                                          ./target/jre/lib/amd64/libverify.so
                                                                                          

                                                                                          I’m not even going into the rest of the jre scaffolding. I guess you could argue the stuff under comments-ts is not part of the “comment-engine”, but it’s there, and it (or something equivalent) is needed anyway. Admittedly, only two of the files in the entire package have the 'executable' flag set, so you can have half your cake if that’s the criterion for being self-contained :-)

                                                                                          1. 4

                                                                                            Thanks for the detailed response.
                                                                                            It was my way of showing people that JVM apps can have “golang-style” deployments where you ship a binary, run it, and the whole thing is only 12MB (my production nginx binary is 14MB).

                                                                                            But realistically, if you have the JVM installed, the jar is only 455KB, and that is all that needs to be shipped, along with the 92KB of JS and 7.1KB of CSS. That is how I deploy internally.

                                                                                            With golang, you do not have this choice.

                                                                                            1. 4

                                                                                              Ah, so now I am starting to see the points that you are really trying to make.

                                                                                              1. Bundling of dependencies. I don’t think there’s much novelty to it; proprietary and floss applications alike have distributed binaries with bundled dependencies for a long long time. This includes many applications that bundle a jvm.

                                                                                              2. A jvm bundle can be reasonably small. Admittedly I haven’t paid attention to it, but I’ve had jvm bundles before, and I don’t recall them being outrageously large.

                                                                                              Calling it a “single executable” or self-contained might not be the best terminology to get the point across. Even more so when you consider that the executable also depends on many libraries that are not bundled; see the objdump output above and compare it to the list of bundled shared objects. Any one of these system libraries could go through an ABI change (in the worst case, one where existing symbols are removed or modified to work in an incompatible way, without symbol versioning…), and when that happens, your uhm.. self-contained single executable thingy won’t run right, or at all. It’s not just a theoretical concern; it’s something people using binary distributions of proprietary software (e.g. games) may have to feck with rather often.

                                                                                              I can’t comment on how this compares to golang deployments, which I’ve never used.

                                                                                              1. 1
                                                                                                1. Pretty much agree.
                                                                                                2. A lot of people dismiss the JVM as bloated (in terms of size and runtime memory). I guess it all depends on how one uses it (along with the knobs to tune). I run mine at 128MB max memory, and that could handle 130k req/s. My usage of the JVM is more like a stored-procedure language though; all the heavy lifting is in the C++ libs that I’m depending on.

                                                                                                I understand your points and appreciate your comments. Cheers

                                                                                              2. 2

                                                                                                Recent versions of Go have additional build modes: https://golang.org/cmd/go/#hdr-Description_of_build_modes

                                                                                                Theoretically you could deploy your code as a plug-in.
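
                                                                                                A sketch of what that might look like (plugin mode is Linux-only; the file and symbol names here are hypothetical):

                                                                                                    // Build the plugin first:  go build -buildmode=plugin -o greeter.so greeter.go
                                                                                                    // Then load it from the host program; "greeter.so" and "Greet" are made up.
                                                                                                    package main

                                                                                                    import (
                                                                                                        "fmt"
                                                                                                        "plugin"
                                                                                                    )

                                                                                                    func main() {
                                                                                                        p, err := plugin.Open("greeter.so")
                                                                                                        if err != nil {
                                                                                                            panic(err)
                                                                                                        }
                                                                                                        sym, err := p.Lookup("Greet") // expects: func Greet() string exported by the plugin
                                                                                                        if err != nil {
                                                                                                            panic(err)
                                                                                                        }
                                                                                                        greet := sym.(func() string)
                                                                                                        fmt.Println(greet())
                                                                                                    }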

                                                                                        1. 3

                                                                                          Generally agree, but there are operational costs to vertical scaling. That single DB is also a single point of failure, and achieving high availability is often just as hard as scaling horizontally (master/slave failover and backups may seem mundane, but there are plenty of examples of companies screwing them up).

                                                                                          Something like Cassandra, Elasticsearch or Kafka has redundancy built in, which is a big win. I think Spanner-style SQL databases could hit a real sweet spot.

                                                                                          As for SOA I think it depends on what you’re working on. Sometimes breaking up applications into separate processes with well defined interfaces can make them easier to reason about and work on.

                                                                                          As an application evolves over time, the complexity can grow out of control until any time someone touches the code they break it. How often have new developers thrown up their hands, scrapped the product, started over, and wasted 6 months rebuilding what they had in the first place?

                                                                                          Maybe SOA could help with that by limiting the scope? (Though maybe better code discipline would achieve the same result?)

                                                                                          I guess all I’m saying is that good engineering practices can help smaller software too.

                                                                                          1. 2

                                                                                            Makefiles are great for small Unix projects, not so much for something that needs to be built for Windows too… Windows developers live in a parallel universe of build tools, and that, coupled with the sheer size of Chromium, helps to explain why they felt it necessary to make Ninja.

                                                                                            The complexity of the project is pretty crazy though.

                                                                                            1. 2

                                                                                              I’ve used nmake with some success to build things on Windows.

                                                                                            1. 3

                                                                                              Why do folks have such a hard time looking for 3rd party packages? An immutable, sorted map doesn’t come up all that often (I don’t think I’ve ever “needed” the immutability bit), but there are packages out there that can do it. For example: https://github.com/Workiva/go-datastructures#dtrie. *

                                                                                              It’s not type safe, but it fits the requirements he wanted.

                                                                                              • caution: I’ve never actually used this package
                                                                                              1. 4

                                                                                                One of the chief dangers of excessively verbose and inflexible code is not just that its implementer has to do a lot of typing; it’s that all that typing provides a high surface area for bugs and generally difficult-to-reason-about implementations. This sort of issue affects whoever has to use the library, not just whoever has to do the implementing.

                                                                                                It’s also, by the way, generally true that code which somebody else wrote is going to be more general and, you know, not written by you, and therefore magnify the verbosity and difficult-to-reason-about issues.

                                                                                                1. 5

                                                                                                  Why do folks have such a hard time looking for 3rd party packages?

                                                                                                  Sometimes people don’t want to add external dependencies for things that, in some cases, are (relatively speaking) straightforward to implement.