Threads for noelle

    1. 3

      I stumbled over the Codabar format mentioned there, and now I wonder whether it’s a US thing or what libraries around the world use. Five minutes of research did not give me a good answer.

      1. 7

        according to what i can find it’s mostly a US standard (though it’s also called NW-7 in japan) and was in use with a variety of US applications where simplicity and resilience were important (you can print it with a dot matrix printer, and its self-checking nature plus tolerance for imperfect scans are nice). places where it’s still in use are the applications where they don’t really have the investment (or need) to upgrade to newer formats. nothing I can find seems to indicate it gained popularity across the atlantic.

        places it’s still in use include:

        • fedex uses a modified version for their airbills
        • japan post uses it for delivery recipients
        • JP/US blood banks and blood samples use it bc of its resilience to stretch
        • JP/US libraries
        • JP photo processing (????)

        some EU/UK barcode reader software and system documentation shows its use as “Libraries” but I had trouble finding any other UK/EU sources talking about it specifically.

        its other names across the world mostly come from American standards, so I think it stayed within the influence of 1970s American tech standards: Ames Code, Code 2 of 7, ANSI/AIM BC3-1995.
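        for reference, the whole Codabar alphabet is tiny: sixteen data symbols (the digits plus - $ : / . +) framed by start/stop characters A-D. a toy structural check, sketched in Go (the function name and sample strings are mine, not from any scanner library):

        ```go
        package main

        import (
            "fmt"
            "strings"
        )

        // validCodabar reports whether s is structurally valid Codabar:
        // a start/stop character from A-D at each end, with every character
        // in between drawn from the 16-symbol data set. Codabar has no
        // mandatory check digit; each symbol is self-checking at the
        // bar/space level instead.
        func validCodabar(s string) bool {
            if len(s) < 2 {
                return false
            }
            if !strings.ContainsRune("ABCD", rune(s[0])) ||
                !strings.ContainsRune("ABCD", rune(s[len(s)-1])) {
                return false
            }
            for _, c := range s[1 : len(s)-1] {
                if !strings.ContainsRune("0123456789-$:/.+", c) {
                    return false
                }
            }
            return true
        }

        func main() {
            fmt.Println(validCodabar("A40156B")) // true: typical library-card shape
            fmt.Println(validCodabar("A40E56B")) // false: E is not a Codabar symbol
        }
        ```

        the real self-checking property lives in the bar/space patterns themselves, which (as i understand it) is what makes damaged prints tend to fail outright rather than misread as a different symbol.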

        1. 2

          Comparing with the image on the wiki page tells me that one of my library cards, from 2 different systems in the greater Stockholm area, uses the Codabar system. The other looks like a “normal” barcode.

          1. 2

            I also encountered it in the wild in at least one German hospital.

      2. 2

        The libraries for my county (in the UK) all use a barcode on the card, a QR code on a key fob and/or an app on the phone. I’ve admittedly not looked into the format.

    2. 7

      This is a really interesting look at a casual keyboard user doing some sound dampening to their keyboard with “simple” modifications.

      I’m glad that they talked about switches, but was interested in the fact that they found silent tactile to produce a noticeable amount of noise. I would attribute this to them probably checking out the more common models from the major manufacturers (which is extremely reasonable given this wasn’t an enthusiast endeavor). I have found Kailh Purple Pro and Zealios to be extraordinarily quiet in my real world usage. If there had been a run of Zilents or Zealios I am sure they would have found them as quiet while still providing the tactile response (which helps prevent bottoming out). My personal recommendation (if they are available) is the Tacit. switch from Keebwerk. It provides a noticeable tactile bump while being very quiet, and as far as I am concerned it’s black magic.

      Also it’s interesting that they mention wanting heavy linear switches, then show Gateron Whites (also known as Clears) in their testing. While these are linear, they are among the lightest linear switches widely available, with an actuation force of only 35g.

      For anyone thinking of doing this themselves, I highly recommend lubing your stabilizers. Stabilizers are often overlooked in these kinds of projects but can account for a lot of noise. The ErgoDox-EZ uses costar stabilizers which can be pretty noisy, but with a small amount of lube they get extra quiet. You can also lube the key switches themselves though this is way more effort compared to everything else mentioned.

      1. 4

        “casual” :)

        1. 6

          Agreed. Dude has a split keyboard without labels. I shudder to think what a real “hardcore” keyboard looks like.

          1. 6

            I assume it must look something like this!

            1. 4

              (Crocodile Dundee voice) Now, this is a keyboard!

          2. 2

            I mean, you can start getting into the compact ortholinear if you really want to start getting spooky hahaha, but I was referring to the user, not the keyboard. And for a few examples of that kind of hardcore keyboard builder: , ,

        2. 3

          hahahahah, yeah, I said casual (and forgot the hobbyist after) because they seem to be doing keyboard stuff as secondary (they need their keyboard to be quieter for video) and not for the sake of doing keyboard stuff. rereading that it sounds pretentious as hell >.<

      2. 2

        I agree. Lubing stabilizers will have a huge effect on sound and feel, though maybe less so on a split ergo keyboard since there isn’t going to be a big rattly spacebar. So far this is the best stabilizer lubing/tuning tutorial I’ve seen

    3. 3

      A question for anyone who might have context – from this piece it seems like they have a cluster per restaurant, which doesn’t make much sense in terms of complexity versus payoff to my mind. The thing that would make more sense and be very interesting is if they’re having these nodes join a global or regional k8s cluster. Am I misreading this?

      1. 2

        They seem to be using NUCs as their Kubernetes nodes, so the hardware cost isn’t going to be too great.

        I imagine it’s down to a desire to not be dependent on an internet connection to run their POS and restaurant management applications, I’m sure the costs of a connection with an actual SLA are obscene compared to the average “business cable” connection you can use if it doesn’t need to be super reliable.

        1. 3

          Still, restaurants have been using computers for decades. It looks as if they have a tech team that’s trying very hard to apply trendy tools and concepts (Kubernetes, “edge computing”) to a solved problem. I’d love to be proven wrong, though.

          1. 3

            I’ve never been to one of these restaurants but I can’t imagine anything that needs a literal cluster to run its ordering and payments system.

            Sounds like an over engineered Rube Goldberg machine because of some resume/cv padding.

          2. 2

            While restaurants certainly have been using computers for decades, the kind of per-location ordering integrations needed for today’s market are pretty diverse:

            • Regular orders
            • Delivery services in area (Postmates, dd, caviar, eat24, ubereats)
            • Native app ordering
            • Coupons
            • App coupons

            If you run a franchise like Chick-fil-A, you don’t want downtime in the central infrastructure to prevent internet orders at each location, as that would make your franchisees upset that their business was impacted. You also want your franchisees to have easy access to all the ordering methods available in their market. This hits both, as it allows them to run general compute using the franchisee’s internet, and to easily deploy new integrations, updates, etc. w/o an IT person at the location.

            I have a strong suspicion that this is why I see so many Chick-fil-As on almost every food delivery service.

            Beyond that, it’s also easier and cheaper to deploy applications onto a functional k8s/nomad/mesos stack than onto VMs or other solutions, because of available developer interest and commodity hw cost. Most instability I’ve seen in these setups is a function of how many new jobs or tasks are added; typically, if you have pretty stable apps you will have fewer worries than with other deployment solutions. Not saying there aren’t risks, but this definitely simplifies things.

            As an aside I would say that while restaurants have been using computers for decades they haven’t necessarily been using them well and lots of the systems were proprietary all in one (hw/sw/support) ‘solutions.’ That’s changed a bit but you’ll still see lots of integrated POS systems that are just a tablet+app+accessories in a nice swivel stand. I’ve walked into places where they were tethering their POS system to someone’s cell phone because the internet was down and the POS app needed internet to checkout (even cash).

        2. 1

          Most retail stores like this use a $400/mo T1, which at 1.5 Mbit/s (~185 KB/s) symmetrical is plenty for transaction processing but not much else. Their POS system is probably too chatty to run on such a low-bandwidth link.
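          A quick sanity check on that arithmetic (a full T1 is 1.544 Mbit/s; the ~185 KB/s figure is the same ballpark after rounding to 1.5 Mbit/s):

          ```go
          package main

          import "fmt"

          func main() {
              const t1Bits = 1.544e6 // T1 line rate in bits per second, each direction
              bytesPerSec := t1Bits / 8
              fmt.Printf("%.0f bytes/s (~%.0f KB/s)\n", bytesPerSec, bytesPerSec/1024)
              // 193000 bytes/s (~188 KB/s)
          }
          ```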

      2. 1

        It could just be a basic HA setup or load-balancing cluster on several cheap machines. I recommended these a long time ago as alternatives to AS/400 or VMS clusters, which are highly reliable but pricey. They can also handle extra apps, provide extra copies of data to combat bitrot, support rolling upgrades, and so on. Lots of possibilities.

        People can certainly screw them up. You want the person doing the setup to know what they’re doing. I’m just saying there are benefits.

    4. 1

      Having used mermaid for a decent amount of complex flowcharts (used it for architecture diagrams for an auth proxy) I really like it. I find the layout of subgraphs to be lacking, and it could have extra features like vertical or horizontal alignment, but overall I’ve had a good experience in being able to build complex graphs relatively simply. I’ve been evangelizing it on my team for describing our infrastructure as code modules. I find it quite helpful considering I used to write graphviz and this is sooooo much easier (but less powerful obviously).
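      For anyone who hasn’t seen mermaid, a minimal sketch of the kind of flowchart-with-subgraph markup being discussed (the diagram content here is invented, not from my actual auth proxy diagrams):

      ```mermaid
      graph TD
        subgraph authproxy
          A[request] --> B{token valid?}
          B -->|yes| C[forward upstream]
          B -->|no| D[reply 401]
        end
        C --> E[(backend)]
      ```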

      Addressing @soc’s comment about markdeep: mermaid is much more focused on graphing and its markdown is more concise, but you lose the fine control of markdeep.

    5. 7

      Programming Languages: Application and Interpretation (PLAI) is pretty good, and has the added benefit of being free online.

      Essentials of Programming Languages is another good intro PLT book.

      Programming Language Pragmatics is a good book, and it’s useful. I have a copy. If I lost it, I’d replace it. I refer to it occasionally.

      Whether it is a good choice as the primary text for a PLT class depends on the specific PLT class.

      Programming Language Pragmatics is basically a large collection of small sections about specific programming language features. Each feature is introduced, described, and several code snippets in different languages are provided to illustrate the use of that feature (by the end of the book, dozens of languages have been mentioned). What is conspicuously absent is the theoretical basis for the feature and any real detail about how the feature is actually implemented. (TLDR: There’s a reason the book is called “Programming Language Pragmatics” rather than “Programming Language Theory.”)

      If your PLT course is about “learn about using a bunch of different programming language features,” then Programming Language Pragmatics makes a lot of sense as a primary text.

      Personally, I think that’s a perfectly reasonable subject for a course, but I wouldn’t call that course “Programming Language Theory.”

      If your PLT course is about “learn the theoretical basis of programming languages and use that theory to implement a simple programming language and several variations of it,” or something similar, then I think Programming Language Pragmatics is a poor choice - that just isn’t what the book is about. It might be handy if you’re having trouble understanding what the pieces you’re building do, but it won’t really help you build them.

      As an example, you mention type systems. Programming Language Pragmatics only has a few pages total on type systems, type checking, and type inference. There’s no mathematical description of types, no discussion of how to actually DO type checking, and no discussion of how to actually DO type inference. The entire section basically boils down to “some programming languages have types, and will make sure that the types match up - some languages will even figure out the types for you!”
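      To make that concrete: “actually DOING type checking” for a toy expression language fits in a page. A hedged sketch in Go (the toy language and every name here are invented, not from any of the books mentioned):

      ```go
      package main

      import (
          "errors"
          "fmt"
      )

      // A toy expression language: integer and boolean literals, addition,
      // equality, and if-expressions. Just enough to show the shape of a checker.
      type Expr interface{}

      type IntLit int
      type BoolLit bool
      type Add struct{ L, R Expr }
      type Eq struct{ L, R Expr }
      type If struct{ Cond, Then, Else Expr }

      type Type string

      const (
          TInt  Type = "int"
          TBool Type = "bool"
      )

      // typeOf is the whole type checker: one recursive function that
      // computes a type for an expression or rejects the program.
      func typeOf(e Expr) (Type, error) {
          switch e := e.(type) {
          case IntLit:
              return TInt, nil
          case BoolLit:
              return TBool, nil
          case Add:
              if err := expect(e.L, TInt); err != nil {
                  return "", err
              }
              if err := expect(e.R, TInt); err != nil {
                  return "", err
              }
              return TInt, nil
          case Eq:
              lt, err := typeOf(e.L)
              if err != nil {
                  return "", err
              }
              if err := expect(e.R, lt); err != nil {
                  return "", err
              }
              return TBool, nil
          case If:
              if err := expect(e.Cond, TBool); err != nil {
                  return "", err
              }
              tt, err := typeOf(e.Then)
              if err != nil {
                  return "", err
              }
              if err := expect(e.Else, tt); err != nil {
                  return "", err
              }
              return tt, nil
          }
          return "", errors.New("unknown expression")
      }

      // expect checks a subexpression against the type the context demands.
      func expect(e Expr, want Type) error {
          got, err := typeOf(e)
          if err != nil {
              return err
          }
          if got != want {
              return fmt.Errorf("expected %s, got %s", want, got)
          }
          return nil
      }

      func main() {
          // if 1 == 2 then 3 else 4+5  — well-typed, type int
          ok := If{Eq{IntLit(1), IntLit(2)}, IntLit(3), Add{IntLit(4), IntLit(5)}}
          fmt.Println(typeOf(ok))
          // 1 + true — rejected
          fmt.Println(typeOf(Add{IntLit(1), BoolLit(true)}))
      }
      ```

      What a book like TAPL then adds on top of this is the formal side: typing judgments, soundness proofs, and inference (where types are reconstructed rather than merely checked).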

      1. 10

        Please note that there’s also a second edition of PLAI, which is also available at the same link:

        I think the second edition is much better than the first. (Of course, I’m a bit biased!) It’s the result of teaching the first edition for about a decade, finding much better ways of explaining its concepts, and eventually transcribing those better ways back into the book.

        The language of implementation is also slightly different. This has some advantages and disadvantages.

        Incidentally, as of a week or two ago the second edition has been translated into Chinese, though that may not be of much interest to people on an English-language thread. (-:

        1. 3

          I’ll have to take a look at the second edition, I enjoyed the first.

          Thank you for your generosity in making such a valuable resource available at no cost.

          1. 2

            Thank you kindly! It’s a delight.

        2. 3

          This was what we used in our first level PL class (at Cal Poly), and I just want to say thanks for writing such an easy to approach book!

          While there wasn’t much about types, I found it was perfect for the initial dip to get the context of types while making a basic PL.

          1. 2

            My pleasure — thanks! There isn’t much on types because I didn’t see the value in producing a watered-down version of TAPL. Rather, I show people the notation and what they need to know so that they can read TAPL.

      2. 1

        So I “think” the course is a bit of both. But I’ve only had the intro so far, and I’m doing the first exercise tonight, so I’ve yet to get a full understanding of how the course will be.

        For instance, most of the intro talked about BNF, programming paradigms and a short intro of different languages. The teacher did mention hoping that everyone would, at the very least, understand closures perfectly by the end of the course.

    6. 4

      This is news spam and clickbait. Title even says “allegedly”.

      1. 2

        I don’t really know if it’s true that this is clickbait. The “allegedly” is there because they have been arrested but not convicted, so ostensibly the Feds still have to prove criminal conspiracy. If you read the article, the RCMP and the Feds ran a sting in which the company specifically marketed the phone to them for use in drug activities.

        I think there’s probably a more interesting/in depth article on the company from a privacy perspective, but I still don’t think it’s spam (though that’s much more subjective). If the article went into greater depth about the privacy concerns and what this means for other privacy focused hardware manufacturers I think it would be more solidly worthy.

    7. 15

      Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!

      Oh simpler times when we only had 7 new technologies in the last 12 months. Also after I read that I realized this was published in 2001 and it suddenly made a lot more sense.

      All they’ll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other’s stories: “Peer To Peer: Dead!”

      s/peer-to-peer/blockchain/g, this may have been from 2001 but it’s still so relevant

      1. 2

        What are the 2018 equivalents? Obviously Blockchain: Is there anything else that has that ‘new hotness’ quality which makes it irresistible to neophiles?

        1. 8

          IOT, AI/ML, Serverless and of course: microservices

          1. 3

            Oo yes. Docker et al definitely qualify.

            1. 2

              I forgot the most important one: kubernetes

        2. 1

          Also of interest is the converse: what are the things that have recently lost (or are in the process of losing) this quality?

          1. 2

            I’m hearing less about big data and nosql

            1. 1

              Big data has folded into AI/ML or just analytics

              1. 2

                On top of it, we have a new fad of stronger-consistency DB’s with SQL layers. One of few fads I like, too. I hope they design even more. :)

    8. 4

      If you’re feeling confused by Erlang

      Try Elixir! It’s all the fun of Erlang but in a much better package.

      1. 2

        The author has definitely tried it. First link in the “If you liked this you might also enjoy” section after the text.

    9. 20

      Just like IBM and their clients have benefitted from Lenovo.

      have the customers benefitted? The Thinkpads got worse and worse as time progressed after the sale. The old IBM-built machines were absolutely top-class in build quality and absence of bloatware (minus some IBM-built updaters, which were bad but could be removed).

      As time has progressed, the Thinkpads have become notebooks like all other PC notebooks. Nothing special, some with better build quality, some with worse, but definitely no longer top class.

      Yes. The machines got cheaper, but sometimes, cheaper isn’t the be-all, end-all. For some types of usage, build-quality matters much more than price.

      I really hope that Apple can keep supporting the Mac for a very long time, as I still love their hardware very much, exclusively future-looking ports be damned.

      1. 5

        My first reaction was very similar. While Lenovo kept the quality of Thinkpads high for a few years following the sale, they’ve now become (as you said) very much average. There may be cases in which this type of thing worked, but Lenovo + IBM is not one of those situations.

      2. 4

        Yeah, I used to say that thinkpads were the only laptops worth buying, and now I feel none are worth buying. There probably won’t be a laptop I’m excited about for a very long time.

        Hopefully someone raises tons of money and is able to make laptops targeting developers, without being shy about charging a ton of money for a top-class product for people who use their machines for work all day.

      3. 2

        Not just the quality of the products, but also the ethical standards and practices of the firm that makes them.

        IBM was (and is) a leader in the practice of ethically sourcing minerals, while Lenovo has continually purchased conflict minerals from African kleptocracies and genocidal terrorists, particularly in the Kivu province and the Congo at large.

        If you are buying a new machine this holiday season, please check out

      1. 1

        Yeah, probably should be. I didn’t see that one. @Irene @jcs?

        1. 1

          I think they should actually be unmerged, as the two articles, while sharing a subject, have 2 very different focuses. The Citizen Lab article provides much more information; the Lookout article is a relatively short intro, followed by a link to the actual technical analysis and the Citizen Lab article. When I see merged articles I expect them to be very similar in the amount of information they provide, but the Citizen Lab article provides much, much more information and is far more technical than the Lookout post. See the following sentence from the last section of the Lookout article (links from article).

          Our reports provide in-depth information about the threat actor as well as their software and the vulnerabilities exploited — Citizen Lab has tracked the actor’s political exploits around the world, while Lookout has focused on the technical details of the malware from the beginning of the exploit chain to its use.

          If the merge reason was “article has links to the other article for more information” I would be more inclined to agree, but I think the articles provide significantly different takes on the story. Or, instead of unmerging, I would suggest making the Million Dollar Dissident article the main one as it was published earlier and contains far more information.

    10. 6

      I like how the only thing all this negative press does is convince other companies not to do what Google does: clearly tell you how they use your data and clearly give you a chance to opt out of new data collection. At the bottom of that page it asks if you accept the changes. If you don’t, you can continue with what you’ve already allowed (even if that is nothing). Meanwhile most online sites sell any data they can get their hands on, but because Google comes out and shows clearly what it collects, the community latches onto it.

      I don’t want Google tracking anything I do, so I have all the web activity stuff turned off and it’s remained off. When I saw the “let’s review your privacy” prompt I figured it was a change to data collection, saw that it was completely opt-in from the beginning, and relaxed. It even ends with big bold buttons to choose accept, or more options if you don’t want to. One of the options is to do a full audit of your current account’s privacy and ad settings. I don’t think I’ve ever seen such a clear opt-in page for a privacy policy.

      Go to the page and make an informed decision instead of looking at a screenshot that hides all the opt-in nature of this.

      Let’s not vilify companies that actually try to do data collection with consent. Let’s focus on the other major companies that don’t make it clear, like the fact that Facebook collects pretty much every site you visit because of the ubiquity of FB share buttons (gotta get those privacy add-ons; I don’t see any clear opt-out there).

      1. 1

        We have no idea if they actually used the data they definitely collected before this point (I’m sure they did), and we have no idea if they will actually use your data while ignoring your response from now on.

        It’s like the padlock icon you see when using a secure connection for banking etc.: users don’t realise it means “this connection may be reasonably secure and is to someone who may be trustworthy, so in transit your data is probably safe, but who knows what happens on the backend, or even in transit between the front-facing secure server and the internal or outsourced backend servers”.

        That is the problem with all online services, especially ubiquitous ones like Google and Facebook: who knows what they do with your data after collection, or even in the future. There is no accountability; any company can say one thing and do the complete opposite.

    11. 1

      What is the idiomatic way to short circuit logic in Erlang? Or a pattern to restructure short circuit style code into something more “Erlangy”?

      1. 2

        andalso and orelse will short-circuit; and and or will always eval both sides. If you want to short-circuit your logic, use the former.

        Relevant section of LYSEFGG:
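        A sketch of the same distinction, using Go for illustration since Go comes up later in this thread (Go’s && always short-circuits, so the eager behaviour of Erlang’s and has to be simulated by evaluating both sides first; all names here are mine):

        ```go
        package main

        import "fmt"

        var calls int

        // expensive stands in for a costly or side-effecting right-hand operand.
        func expensive() bool {
            calls++
            return true
        }

        func main() {
            // Short-circuit (like Erlang's andalso): the right side never runs.
            _ = false && expensive()
            fmt.Println("after &&:", calls) // 0

            // Eager (like Erlang's and): both sides are evaluated up front.
            left, right := false, expensive()
            _ = left && right
            fmt.Println("after eager:", calls) // 1
        }
        ```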

      2. 2

        I just wrote a reply to the original blog post with that (i.e. with a functional way to implement the same thing described in the article).

        1. 1

          That’s a nice solution for a smaller scale problem, but what about when your conditions are more sophisticated than what’s allowed in guard clauses? And what about a large chain of conditional checks like the andalso example?

          I end up writing code like this:

          case do_thing() of
              {ok, Result} ->
                  case do_next_thing(Result) of
                      {ok, _NextResult} -> and_so_on_and_so_forth();
                      _ -> {error, "next thing failed"}
                  end;
              _ -> {error, "thing failed"}
          end

          I feel like I must be approaching Erlang with the wrong mentality because I can’t imagine sensible people would write code like this.

          Whereas in Go, this is perfectly reasonable:

          result, err := doThing()
          if err != nil {
              return nil, errors.New("thing failed")
          }
          nextResult, err := doNextThing(result)
          if err != nil {
              return nil, errors.New("next thing failed")
          }
          return andSoOnAndSoForth(nextResult)
          1. 5

            You’re correct. For that code there are many alternatives:

            Use exceptions

            Instead of returning {error, …}, use throw and just write the happy path in your main function:

            Result = do_thing(),
            NextResult = do_next_thing(Result),
            and_so_on_and_so_forth().

            Concatenate Functions

            your_fun() ->
                check_and_and_so_on_and_so_forth(
                    check_and_do_next_thing(do_thing())).

            check_and_do_next_thing({error, X}) -> {error, X};
            check_and_do_next_thing({ok, Result}) -> do_next_thing(Result).

            check_and_and_so_on_and_so_forth({error, X}) -> {error, X};
            check_and_and_so_on_and_so_forth({ok, _NextResult}) -> and_so_on_and_so_forth().

            (I kept your names, don’t blame me :P)

            Use Sophisticated Tools

            You can use something like ktn_recipe to implement your logic.

            1. 2

              Would you say the concatenate functions method or exception method is more idiomatic?

              I feel like the concatenate functions method is more idiomatic than exceptions, but I’d be interested in your opinion.

              1. 3

                I don’t really have a strong opinion on the subject. I’ve found places where each one of them looks and feels better. The only advice I can give is: do not mix both; stick with just one for your whole project. From what I gather, in the erlang community concatenating seems to be the favourite one, because exceptions are expensive, but then again it’s not like nobody uses them either.

    12. 4

      Is there a conclusion to this article? Reads like it cuts off half way through.

      1. 3

        I think the last sentence and the mic drop gif were the ‘conclusion’. Most of his other posts seem very show-not-tell and end without conclusions. A conclusion would feel out of place with his tone.

      2. 2

        Reads fine to me. Ifs are ugly and cumbersome in Erlang.

    13. 4

      The description in the screenshots is so vague that I wonder just what it is that Google are going to do. Are they really going to ship user browsing history across to their data mining services (I have a feeling they wouldn’t dare risk the wrath of privacy advocates by doing that… or would they)? Does it only apply when you’re using Google services (something we kinda expect anyway)?

      Either way, it’s not like I needed any more reasons to wean myself off Chrome as it is - it’s a horribly bloated piece of software that loves killing my machine periodically.

      1. 6

        I’ve been happily using Firefox on Android and on my work computer for ~6 months. Last weekend I popped open an older computer I have and was surprised at myself when I was disappointed Chrome was my primary browser.

        I forgot just how awful Chrome is. It kills battery and constantly causes the discrete GPU to kick in at seemingly random times.

      2. 3

        I’m actually surprised to hear this about Chrome. I don’t actually use Chrome, but I leave Chromium running 24 hours a day on three machines (2x Linux, 1x OSX) and haven’t had any problems with it being bloated or killing the machines.

        TBH, I almost wish it were bloated and killed my machines because it’d be great motivation to quit using it, but to me, Firefox feels more bloated, is less responsive, and the interface is klunky. And the alternative browsers I’ve tried feel like half finished wrappers around webkit (midori, arora) or are just skins over Chromium (opera).

        GMail and Chromium are the only two Google products I use on a regular basis any more, and I’d love to quit Google products all together.

        1. 3

          Interesting, my experience has been that later versions of chrome are more laggy and bloated than firefox. I wonder if it varies from machine to machine.

        2. 3

          Over the past few years I’ve found Chrome to get slower and slower - running Safari makes the performance degradation even more obvious. Unfortunately there are a few Chrome extensions I really like, which is why I still use it (admittedly, those extensions are almost all available for Firefox, so I should probably give it another try).

        3. 2

          Hah, I feel the same way. Chrome has slowly been getting slower, but it does still feel faster than Firefox (at least on my Linux system). I’ve stopped really caring about finding or building an “unbloated” browser since my interest in HTTP has been rapidly declining anyway.

          1. 1

            You’re no longer interested in the Internet?

            1. 2

              You’re no longer interested in the Internet?

              Sheesh, get with it! All the hipsters are using Gopher these days.

            2. 1

              That’s right.

      3. 3

        …they wouldn’t dare risk the wrath of privacy advocates…

        Yeah, god forbid a bunch of nerds who are already constantly yelling at them start yelling at them some more.

      4. 2

        Here’s the page in question:

        The full page goes into more detail and gives you the option to opt in, keep current settings, or keep your current settings and go through a full privacy audit.

    14. 3

      Erlang is awesome for building things like RabbitMQ/messaging/WhatsApp etc. But for a web app it’s a terrible language to use.

      1. 10

        I have to disagree here. Have you seen the various Erlang web frameworks available or tried any of them?

        I’ve spent quite a bit of time recently doing so combined with a modern JS front-end stack and my experience was very positive.

        Building web apps with Erlang is a very reasonable choice. Building on top of its awesome multi-process web servers, combined with WebSockets, is an especially good combination. Additionally, being able to use static typing (via dialyzer) and OTP makes working with concurrency and multiple web services/APIs very clean and manageable.

      2. 8

        I know Elixir isn’t exactly Erlang, but Phoenix (an Elixir web framework) is a delight to work in. IMO, you get the benefits of Erlang without actually needing to always write pure Erlang.

        1. 2

          The reasoning behind Elixir is, in fact, to make Erlang’s power easier to use, especially for web development. Plataformatec started building apps in Erlang, but began designing Elixir to make it easier to do common web stuff while leveraging the IO beast that Erlang is.

        2. 1

          Thanks for sharing! Phoenix looks really slick actually. I’m going to give it a try.

    15. 1

      Not entirely relevant comment: I know it’s a small thing, but naming every function as function/number-of-arguments is one of those little things that kills me. I read one paragraph and realized I could not possibly imagine myself writing this blog post.

      1. 2

        That is the way you describe functions when writing about Erlang in more technical documentation, or when referencing a function in annotations. The actual functions are not named that way, but when referencing them for exports you do have to specify the arity (e.g. -export([foo/2]) exports the two-argument foo). I dislike its usage here because it takes away from the reading, but it’s nice when reading docs.

    16. 5

      This seems like misuse of tools to me, though. A file system is not a VCS, no matter its snapshot capabilities.

      Still really cool.

      1. 8

        The Golang devs really want this, a smarter filesystem. They don’t want a VCS. They begrudgingly used Mercurial and when everyone kept whining they even more begrudgingly moved to git. They have intimated often that they want neither git nor hg, but are stuck with git.

        They are not alone. Many, perhaps most devs hate babysitting a VCS. You want to be writing code, right? Not figuring out how to write commits, and commit messages, and rebase, and branching, and merging, and pulling, and updating, and pickaxing… It is not an unreasonable position that these things should be automated or abstracted away from the actual business of writing code.

        1. 9

          The argument to be made here is probably that the complexity of Git, Mercurial, and the like is (largely) inherent to the problem being solved, and not incidental. That, essentially, it is not reasonable to expect that the difficulties of managing collaboration of multiple people on a single collection of code be abstracted away.

          1. 5

            There is much room for improvement in what we can automate. For example, we could have regular automatic commits, or watch the filesystem and make every save an automatic commit. That would go some way towards what the Golang devs want. Branching could be “automatic”, as it was originally with BitKeeper, git, and Mercurial (i.e. branching by cloning: every time you copy the repo you already have a unique branch). Pushing and pulling could be a single “sync” operation, optionally tied to some sort of CI to make sure the new synced state isn’t broken. Better merge algorithms and tools are possible too, such as Semantic Merge, which understands the syntax of what’s being merged (much better than line-by-line merging).
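            A minimal sketch of what commit-on-save could look like, approximated here with an explicit step in a throwaway repo (a real implementation would hook a filesystem watcher like inotify; `snapshot` is an illustrative helper name, not a real git subcommand):

            ```shell
            set -e
            repo=$(mktemp -d)
            cd "$repo"
            git init -q
            git config user.email auto@example.com
            git config user.name auto

            # Hypothetical "every save is a commit" step: stage everything and
            # commit with a generated message; skip silently if nothing changed.
            snapshot() {
              git add -A
              git commit -q -m "auto: $(date -u +%FT%TZ)" || true
            }

            echo "v1" > file.txt; snapshot
            echo "v2" > file.txt; snapshot

            git log --oneline | wc -l   # two automatic commits, no human involved
            ```

            A sync operation would then just be this plus a push/pull pair behind one command.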

            My point is, people hate VCSes, and rightfully so. There’s much improvement to be had. We shouldn’t take the current state of git or hg as inevitable complexity.

            1. 2

              I always had the feeling that Darcs’ “it’s just patches” approach had a lot less complexity down the line, but it still seemed to run into weird cases.

              And that’s - IMHO - one of the big problems there. All VCSes are fine until 5 developers enter a room and don’t quite work in the pattern you expect.

              1. 2

                It was my understanding that Darcs fixed the exponential-time merge problem. Is this incorrect? Or are you referring to other problems?

                1. 2

                  I’m referring to the “it’s all clean and simple two-way-merges until we have so many development branches open that the whole thing turns into a mess”-problem :). It’s not a very technical problem.

            2. 1

              I agree. My point was simply that full automation of the process is probably impossible.

            3. 1

              Wouldn’t commit-on-save lead to code that doesn’t build being committed?

              1. 1

                Fixing up your commits with squash or rebase would be the norm in that case. I personally have run commit-on-save (and periodic commits) for projects where I had a lot of small performance improvements and wanted to make sure they didn’t introduce other bugs. Then with the magic of bisect you can find the bugs super fast (as long as you have good pre-commit hooks in place).
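                The bisect payoff is that a pile of tiny automatic commits becomes binary-searchable. A self-contained sketch in a throwaway repo, where the “bug” is simply `value.txt` exceeding 3 (file name and commit messages are made up for illustration):

                ```shell
                set -e
                repo=$(mktemp -d)
                cd "$repo"
                git init -q
                git config user.email a@example.com
                git config user.name a

                # Five small commits; the "bug" arrives with value 4.
                for i in 1 2 3 4 5; do
                  echo "$i" > value.txt
                  git add value.txt
                  git commit -qm "set value to $i"
                done

                # HEAD is known bad, HEAD~4 known good; `bisect run` drives the
                # search with a test command (exit 0 = good, nonzero = bad).
                git bisect start HEAD HEAD~4 >/dev/null
                git bisect run sh -c 'test "$(cat value.txt)" -le 3' >/dev/null
                git bisect log | tail -n 1   # reports "set value to 4" as first bad
                ```

                With commits this granular, the culprit is found in O(log n) test runs.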

        2. 3

          ZFS solves snapshotting state, but it does not handle telling a story. Commit histories are almost as important as the code itself – knowing why a hack exists is one git blame away.

          If nothing else, you’d need to pair a snapshotting filesystem with a code review system, then you’re back to needing to specify two revisions of a filesystem to diff.

          Git’s model, essentially a meta-filesystem, is superior IMO. I could imagine marrying Git with something like ZFS, but the portability of the .git directory is pretty excellent compared to being locked to a single filesystem.

          1. 3

            ZFS solves snapshotting state, but it does not handle telling a story.

            I understand, but the kind of person who hates a VCS is the kind of person who does not want to read or write stories. And that’s perfectly fine: if you don’t want the feature, you shouldn’t have to pay for it either. A good chunk of VCS users also really dislike it, writing commit messages like “fixed stuff” or “changes” and never really wanting to take the time to understand branches, merging, and all that.

            It should be an option to be able to code without a VCS but with the ability to roll back to any version and to share your changes. That’s all that some people want: code, not commits.

        3. 2

          Although these systems could use better default options, so that people don’t have to spend time learning a workflow in an industry-standard tool, there’s not much of a way to magically do version control without telling the system what you’re doing.

        4. 2

          You’re insane. Version control systems make collaboration manageable.

      2. 6

        Obviously. The analogy was only meant to better illustrate the features of ZFS. The author even says so explicitly:

        Using ZFS as a replacement for Git is probably not a good idea, but just to give you a sense of what ZFS supports at the file system level, let me go through a few typical git-like operations:

      3. 3

        Counterpoint: this would make for fantastic binary-file version control. Say you had a repository full of 3D models for a video game: the files would be large and text diffs would be useless. ZFS snapshots would be a more space-efficient way to version assets like that than simply making archives or full filesystem backups. Granted, it would most likely require a knowledgeable DevOps person on the team to make and maintain those snapshots unless there was a nice frontend for the artists to use.
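        As a rough sketch of that workflow (the pool/dataset name `tank/assets` and the snapshot label are made up for illustration; these commands need ZFS admin privileges):

        ```shell
        # Snapshot the asset dataset before risky edits; copy-on-write means
        # each snapshot only costs the blocks that later change.
        zfs snapshot tank/assets@before-retopo

        # ... artists overwrite models and textures in place ...

        # List which files changed since the snapshot (OpenZFS `zfs diff`)
        zfs diff tank/assets@before-retopo tank/assets

        # Roll the whole dataset back if the edits went wrong
        zfs rollback tank/assets@before-retopo
        ```

        That gives you checkpoint/restore for multi-gigabyte assets at block granularity, with no per-file archive juggling.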

    17. 15

      It’s almost as if human brains are some kind of neural network or something.

    18. 9

      Software using Semantic Versioning MUST declare a public API. This API could be declared in the code itself or exist strictly in documentation. However it is done, it should be precise and comprehensive.

      I think maybe there’s a typo here? Should this be Monotonic Versioning, not Semantic Versioning?

      1. 11

        Aaaaaaaaaand 1.1 is released.

      2. 6

        It looks like a copy-paste from the Semantic Versioning page for the first few rules. Oops.