1. 6

    GNU Autotools: just kill this horrific pile of garbage with fire. Especially terrible when libtool is used. Related: classic PHK rant.

    CMake: slightly weird language (at least a real language which is miles ahead of autocraptools), bad documentation.

    Meson: somewhat inflexible (you can’t even set global options like b_lundef conditionally in the script!) but mostly great.

    GYP: JSON files with conditions as strings?! Are you serious?

    Gradle: rather slow and heavy, and the structure/API seems pretty complex.

    Bazel/Buck/Pants (nearly the same thing): huge mega build systems for multiple languages that take over everything, often with little respect for these languages’ build/package ecosystems. Does anyone outside Googlefacetwitter care about this?

    Grunt, Rake, many others: good task runners, but they’re not build systems. Do not use them to build.

    1. 6

      Related: classic PHK rant.

      This one is even better, since its observations apply to even more FOSS than libtool. It also has some laughable details on that, along with the person who wrote libtool apologizing in the comments, IIRC.

      1. 3

        I recalled that too, but it was David MacKenzie of Autoconf who popped up to apologize.

        1. 1

          Oh OK. Thanks for the correction. At least one owned up to their mess. :)

      2. 3

        FWIW: bazelbuckpants seem to be written for the proprietary software world: a place where people are hesitant to depend on open-source dependencies in general, and people have a real fear (maybe fear is strong, but still) of their dependencies and environment breaking their build. I use them when I’m consulting, because I can be relatively certain that the build will be exactly the same in a year or so and I don’t like having to fix compilation errors in software I wrote a year ago.

        1. 2

          I’m with you on Grunt, but Rake is actually a build tool with Make-style rules and recipes for building and rebuilding files when their dependencies change. There’s a case that Rake is just Make ported to Ruby syntax. It’s just more commonly used as a basic task runner.

          https://ruby.github.io/rake/doc/rakefile_rdoc.html
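
          That Make-style behaviour can be sketched in a few lines (a minimal, hypothetical example; `in.txt`/`out.txt` are made-up file names, and `rake` ships with Ruby as a default gem):

```ruby
require 'rake'
extend Rake::DSL  # make the file/task DSL available at top level

File.write('in.txt', 'hello')

# Like a Make rule: out.txt depends on in.txt and is rebuilt
# only when in.txt is newer than out.txt (or out.txt is missing).
file 'out.txt' => 'in.txt' do |t|
  File.write(t.name, File.read('in.txt').upcase)
end

Rake::Task['out.txt'].invoke
```

          Invoking the task a second time is a no-op until `in.txt` changes, which is exactly the dependency-driven rebuild behaviour of a Make rule.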

          1. 1

            I think Make is also somewhat close to a task runner. It has dependencies, but not much else. You write compiler invocations manually…

            1. 1

              It sort of has default rules for building a number of languages, though these aren’t terribly helpful anymore.

              I also use Make as a task runner. Mostly to execute the actual build system, because everybody knows how to run make, and most relevant systems probably have Make installed in one form or another.
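
              That task-runner usage might look like this (a minimal sketch; the Meson commands are just an example stand-in for “the actual build system”):

```make
# Phony targets are pure task names, not files Make tracks.
.PHONY: build test

# Delegate to the real build system (Meson here, purely illustrative).
build:
	meson compile -C builddir

test: build
	meson test -C builddir
```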

          2. 1

            We use Pants here at Square, in our Java monorepo. It works quite nicely, actually. For our Go monorepo, we just use standard Go tooling, but I’ve volunteered to convert to Pants if anyone can get everyone to move to a single monorepo. They won’t, because every Rails project has its own repo, and the Rails folks like it that way.

          1. 10

            For the same observation but accompanied by investigation instead of ranting, watch George Tankersley’s GopherCon 2017 lightning talk: https://www.youtube.com/watch?v=7y2LhWm04FU&list=PL2ntRZ1ySWBfhRZj3BDOrKdHzoafHsKHU&index=11

            1. 4

              To be fair, the article on lemire.me did not seem like ranting. And I’m a plush-gopher-on-my-desk fan of Go :-)

            1. 2

              Anyone know how this differs from jsonnet? http://jsonnet.org/

              1. 1

                For an organized list of links to videos and slides, see https://github.com/gophercon/2017-talks

                1. 5

                  I posted a long response as a comment on the blog. The tl;dr is that this is a very naive use of a monorepo.

                  1. 6

                    Right, you can’t just check in all your code into a monorepo without any tooling or processes and expect things to turn out well. Just like how teams can’t each check their code into individual microrepos without any tooling or processes and expect things to turn out well.

                    For some reason people always think version control is a silver bullet that can fix complexities of scale.

                    1. 12

                      You wouldn’t believe the kinds of things you get to see as a version control consultant. For instance: 50 branches, one per dev, kept open and unmerged for a year “because people kept breaking the build on trunk and got in each other’s way”. Then it was time for a release. Then they called us in. Not much to salvage…

                      1. 2

                        Oh dear.

                        I had a boss (the shop was using SVN at the time) who was amused at my eagerness to go branch off.

                        “The branching is easy”, he’d said, “Tell me what you think when it’s time to merge.”

                        git, for all of its faults, is pretty good at that.

                        SVN is still nice for certain scenarios that aren’t heavily branched, though.

                        1. 2

                          It’s certainly possible with SVN, I’ve seen it done. Will for quality and discipline can go a long way.

                          1. 2

                            My recollection of my experience with svn merge was that the problem with it was just that it was arcane, buggy (e.g. IIRC if you try creating a branch in which there is a file called ‘foo’ and another in which there is a directory called ‘foo’ then misery happens) and super violent because it immediately does whatever idiot thing I asked for (*) on the shared repo instead of in some kind of safe sandbox like a local working copy or local clone of the repo.

                            (* assuming some non-zero fraction of the time I get the wrong time range or something in the merge command on the first attempt)

                            I have had easy, painless branch merges with SVN. “Will for quality and discipline” had nothing to do with it; “using svn diff and patch to apply and edit patches manually instead” had everything to do with it.

                            git-svn, despite being a bit evil, was actually really useful because it let you try your merges locally and then examine the result before committing them.

                            1. 2

                              (e.g. IIRC if you try creating a branch in which there is a file called ‘foo’ and another in which there is a directory called ‘foo’ then misery happens)

                              Well, what do other systems do? Merging such nonsense is always a pain especially if it happens at scale of dozens of items, even in git/hg.

                              SVN might get better at this, eventually. At the moment, this is still being worked on. Subversion 1.10 will better handle cases like this where the node kind remains the same (file vs file, dir vs dir). Your scenario is still out of scope for that release, unfortunately – pay me good money and eventually I might fix that, too. But I have done more than enough work on this complex problem in my spare time while pretty much everyone else on the planet is just complaining from the peanut gallery.

                              1. 1

                                But I have done more than enough work on this complex problem in my spare time while pretty much everyone else on the planet is just complaining from the peanut gallery.

                                I don’t want to discourage or criticise you. I’m relating things that have happened to me personally, not calling your considerable abilities and hard work into question. My honest appraisal at this point is that:

                                • svn is a very high quality centralised VCS. It is enormously better than the competition that existed at the time it was created (though I’ve never had an opportunity to try p4 for comparison).
                                • I would still strongly recommend svn over any DVCS for doing version control of large binary files like film or game assets such as footage, meshes and textures.
                                • DVCS systems are, for source code, just a better model up until you hit scaling limits (which only mega-rich companies like Google, Facebook and Microsoft realistically ever do).
                                • svn now has way better merging than it used to. svn version 1.6 era merging was so bad that it caused an entire generation of developers to be scared of branching.

                                Well, what do other systems do? Merging such nonsense is always a pain especially if it happens at scale of dozens of items, even in git/hg.

                                They certainly don’t succeed at doing a merge when you do this; as you rightly point out there exists no sensible merge for this.

                                What all the DVCSes I’ve used did was hand me a working tree with both sides written into different files and then ask me to resolve the conflict.

                                IME what svn used to do, upon attempting this merge, was put my checkout into a completely broken state, in which none of the usual operations worked any more, and which the available documentation didn’t explain very well how to get out of. For example, svn revert used to not work any more in this situation.

                                OTOH if I did the same with git or darcs, it:

                                • told me that the merge failed
                                • put the conflicting files and directories side by side in my working copy, keeping one with the name foo and giving the other a name like foo~HEAD or foo.~0~.
                                • asked me to resolve this by: deciding what I want to do, editing files to get the working tree into the shape I want to end up with, then git adding and git committing to complete the merge. Note that git add and git commit are just the ordinary verbs for writing commits in git; this isn’t some kind of unusual weird situation where all the usual tools suddenly stopped working and I have to use completely different verbs from a whole different section of the manual to get out of it.
                                • if I really decided this was a terrible idea and want to back out, I could run git merge --abort to ditch this whole can of worms and put the repo back to the state it was in before attempting the impossible merge. This works reliably. In a rather atypical feat of decent UX, git even tells me about the possibility of doing this at the time the merge failed.
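
                                The behaviour described above can be reproduced end to end in a throwaway repository (a minimal sketch; the repo, branch, and file names are all made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name you

echo base > base.txt
git add . && git commit -qm base
git branch -m trunk            # pin the branch name, whatever the default is

# One branch adds 'foo' as a file...
git checkout -qb file-foo
echo data > foo && git add foo && git commit -qm 'foo as a file'

# ...the other adds 'foo' as a directory.
git checkout -q trunk
mkdir foo && echo data > foo/bar && git add foo && git commit -qm 'foo as a dir'

# No sensible merge exists; git reports a conflict instead of wedging
# the working copy...
git merge file-foo || true

# ...and --abort reliably restores the pre-merge state.
git merge --abort
test -d foo
```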
                                1. 1

                                  For example, svn revert used to not work any more in this situation.

                                  This must have been many many years ago. Such basic problems have long been fixed, mostly with the working copy redesign and rewrite that happened with Subversion 1.7 (which is old by now, first released in 2011).

                                  I suppose many people saw some buggy behaviour from the SVN 1.5 days, then switched to something else, and still believe that what they have is actual working knowledge of SVN. It’s not, unless you have been using 1.8 or 1.9.

                                  Granted, there are still many implementation bugs being found (here is a fun and recent one). But it’s not anywhere as horrible as it used to be. Thankfully :)

                                  1. 1

                                    Subversion 1.7

                                    That didn’t exist when I started using subversion. I mentioned version 1.6 explicitly by name for this reason. ☺ I think, though I’m not sure, that I might even have started out with some version as early as 1.4.

                                    I remember having to svn upgrade all the working copies when 1.7 came out! There were a few messes where behaviour differed, plus maybe a couple cases where the upgrade didn’t go smoothly for some reason.

                                    I do not recall svn merge being noticeably more robust with 1.7 than it was with 1.6.

                                    I think the company I was working for started moving to git in earnest around the time 1.8 was released.

                              2. 1

                                We’re talking organisation-wide level here, not about an assessment of how good svn merge is in detail, or for the individual. It’s not great, that’s known.

                                git-svn can be a tool for solving that, but that still needs adoption in your org.

                                1. 2

                                  git-svn can be a tool for solving that, but that still needs adoption in your org.

                                  I have no idea what you mean by this. One of git-svn’s historical advantages was that it could be used by a single person without anyone else knowing or needing to know that someone was using it.

                                  Are we just talking about completely different things? Like you’re talking about the need for an organisation to avoid having long-running branches without at least merging trunk into them every time trunk is committed to, and I’m just talking about the much narrower problem that doing so with only svn merge is painful because svn merge is very imperfect?

                    1. 1

                      6 years old already? I remember first reading this in 2013. Did we learn anything?

                      1. 2

                        It would be interesting to suggest to Norvig that he write a current update…

                        1. 1

                          Sorry. I forgot to add the year it was published.

                        1. 4

                          I’m bemused – post Snowden – by the commenters that think this is benign or unlikely to be exploited. I’m pretty sure everything theoretically capable of being exploited has a post-it for each manufacturer slowly moving across a Kanban board somewhere inside the NSA.

                            1. 5

                              I was at the first day of this sprint. It was really interesting seeing so many engineers excited about mercurial development. Git may have the mindshare but mercurial is definitely not dead.

                              1. 2

                                I would like to strongly encourage y'all to do everything possible to make Hg as easy to use as possible. In particular, the floundering discussion on that etherpad about “friendly Hg” was discouraging. Right now git is (a) the de facto winner, (b) a tire-fire of a codebase, and (c) widely agreed to have a terrible porcelain, but nobody has the ability to change it.

                                Hg is currently “waiting in the wings”, so to speak, and many of us are following closely as Facebook and Google remove obstacles to us switching over to it as a sane monorepo-scale-capable substitute. Please, please, please keep in mind that the potential future users of Hg vastly outnumber current users. While it’s important not to lose community goodwill by churning things and giving people an expectation of version pain, it’s more important to get things right, and to get good defaults. I’ve watched Emacs users struggle with pain points that 25 years ago weren’t changed “to not hurt existing users”… the future is long! :-)

                                Also, fantastic work - I am waiting for Hg to reach the point where I can just stand it up, and have it work at company-wide-monorepo scale, and then I look forward to subjecting all my coworkers to the Glorious Monorepo :-)

                                1. 1

                                  The hg community is quite friendly. If you’d be interested in working on friendlyhg and know a bit of Python you should try sending in some patches.

                                2. 2

                                  On the contrary, mercurial seems to be the darling on the rise, doesn’t it?

                                  1. 2

                                    In this particular community, yes. Most of us favor simplicity, or our rather suckless-branded version of that, more than we care about being mainstream, and that means Mercurial has an outsized amount of support here. And Google and Facebook, for obvious reasons, are probably seeing Mercurial on the rise. But in general? Nah. Git won. (For now, at least.)

                                    1. 2

                                      I think in business around the world too.

                                      • Academia is one of Mercurial’s big users
                                      • There are companies still migrating from SVN, and they actually prefer to switch to Mercurial because of its CLI UX and its large-file support, which is far superior to Git’s.
                                    2. 1

                                      Git probably has less mindshare among the people who care about the design decisions of the VCS they use. While this is a minority niche, it includes (although it is not limited to) most of the people who contribute to VCS development, and Mercurial does well enough in that niche to grow.

                                    1. 2

                                      When Facebook first changed their license and patents information on all their projects, they were troublesome. But a lot of people worked to point out the problems, and they very quickly updated them to a new version which was acceptable to most companies.

                                      1. 1

                                        I’d be curious to see the graphs if you were actually trying to cool something: like half-fill the cooler with room-temperature bottles of water or cans of soda/beer. Then add (wet/dry) ice, and see what happens.

                                        1. 4

                                          Most of the answers seem to be about personal secret management, rather than server secret management.

                                          I work at Square, so naturally we use KeyWhiz :-) One nice property of KeyWhiz is that the secrets mount with FUSE and look just like files, so in our local dev environments, we can easily set up dev secrets in the same location using real files.

                                          1. 9

                                             Whenever I post elm articles I usually just tag them with haskell.

                                            1. 4

                                              Seems like an argument for a separate tag! :-)

                                              1. 1

                                                could argue either way. :)

                                            1. [Comment removed by author]

                                              1. [Comment removed by author]

                                                1. 4

                                                  But larceny, usually the second fastest, finishes a lot more tests (53 vs. 44). It’s a very interesting Scheme compiler: http://www.larcenists.org/overview.html

                                                  1. 1

                                                    The license for larceny is weird though. That probably hampers it a lot as far as adoption goes.

                                                    1. 2

                                                      It looks like a weird variant of the MIT license. Probably non-lawyers thinking they can improve the wording :-)

                                                      Anyway, the x86 version is also licensed under LGPL.

                                                      1. 1

                                                        Is there anything particularly objectionable in their license? Or is it just that it isn’t one of the common standard ones?

                                                        1. 2

                                                          If it doesn’t explicitly allow me to modify or sell the software, I would have to consider it nonfree.

                                                1. 4

                                                  How can the code “just happen to be owned by Google”?

                                                  1. 8

                                                    Author works at Google and is using his work computer to work on this project?

                                                    1. 2

                                                      He wouldn’t necessarily have to be using his work computer :(

                                                      1. 1

                                                        Google claims ownership of work done on personal time with personal resources?

                                                        That’s incredibly shitty of them, if so.

                                                        1. 10

                                                          It’s being done on 20% time, from what I understand.

                                                          1. 4

                                                            There’s a process to get the company to formally disclaim ownership of things, but then you’re pretty heavily restricted in terms of when you can work on it. If you don’t care about ownership, just getting an OSS license on something is the simpler path by a wide margin.

                                                            1. 1

                                                              If it’s useless enough then the process is easy :-)

                                                            2. 1

                                                              Shitty, perhaps, but also not uncommon.

                                                              1. 2

                                                                Not uncommon, but I normally associate the practice with companies that don’t “get” Open Source, or why devs might pursue side-projects and what their personal IP means for their careers in general.

                                                                I wouldn’t normally associate those attitudes with Google. And since a lot of developers refuse to sign agreements signing personal IP over to their employer, I’m surprised to hear Google requires it, given how popular they have been among developers as a “good” employer.

                                                      1. 4

                                                        This zero-copy approach to serialization is similar to capn proto, although the latter includes a nice rpc framework. A related comparison blog entry is here: https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-sbe.html

                                                        I’d be interested if anyone has tried them as well as protobuf3. Does the new protobuf3 more efficiently transfer large byte arrays? I’d like to use something like gRPC for syncing servers but assume protobuf3 will add too much overhead for my messages (large % is just binary blob data).
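
                                                        The zero-copy idea itself can be sketched with nothing but the Python standard library (a toy illustration only; this is not the FlatBuffers or Cap'n Proto wire format):

```python
import struct

# With fixed, known offsets, a reader can pull one field straight out of
# the received buffer without a decode pass or copying the blob payload.
buf = struct.pack("<ii8s", 7, 42, b"blobdata")  # two ints + 8-byte blob
view = memoryview(buf)                          # no copy of the bytes

second_field = struct.unpack_from("<i", view, 4)[0]  # read field in place
blob = view[8:16]                               # still a view, not a copy

assert second_field == 42
assert blob.tobytes() == b"blobdata"
```

                                                        This is why the approach pays off when a large fraction of the message is binary blob data: the blob is never re-encoded or copied, only referenced at an offset.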

                                                        1. 2

                                                          Agreed. I would love to see a nice up-to-date comparison of flatbuffers, protobuf3, and capnproto.

                                                          1. 1

                                                            proto3 should have roughly the same size as proto2. But you can always compress… although, looking through the grpc site, it’s surprisingly difficult to find a succinct description that includes whether compression is available/on-by-default/not-implemented-yet. Anyone know?

                                                          1. 22

                                                            I was hoping for a more cogent critique of React: I think an informed discussion of its weaknesses would be useful. This article failed to deliver: The main criticism, as already mentioned, falls down: there’s nothing to stop you rolling up chunks of DOM into custom Web Components, and then still using React. Other than that, it mostly says React’s design is “bad” and that the event model is nothing new. Hmmm.

                                                            1. 1

                                                              Behind this boring-sounding title was a surprisingly good article: no new information, but collected details of just how advanced the NSA’s capabilities are.

                                                              1. 3

                                                                I can’t stand articles like this that paraphrase a source without linking to it: http://arxiv.org/abs/1502.03182

                                                                1. 28

                                                                  I found the tone overly sarcastic, and the article seems to impute a degree of arrogance and condescension to the go authors that I see no evidence for.

                                                                  At this point, go’s shortcomings/differences-from-haskell are well-known. I don’t know why people keep writing articles about them.

                                                                  1. 8

                                                                    Expression is proportional to sensation, not novelty. Yes, there is obviously a feedback loop that causes desensitization to some non-novel repeated stimuli, but the mechanism is not so straightforward. Your 50th raspberry may not strike you as being as overwhelmingly delicious as the first handful. Sometimes the mechanism works in reverse, and repeated stimuli result in amplified sensation. Your neighbor’s wall-piercing voice may not induce rage the first time it is encountered when you are relaxing in your home, but over time your sensitivity to it may indeed reach that point.

                                                                    These qualia of the Go experience are quite diverse. They result from real experiences. The expression of them is as necessary as any other. We should try to imagine how such sensations may come about in the first place. If we don’t think that such sensations should be experienced by others, we are obligated to act in a way that influences the triggering factors. If you have a problem with something, it’s YOUR problem to solve.