1. 10

    If maintaining a popular free and open source software project is producing stress… don’t do it!

    Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem; just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!

    Now I’m gonna grump even more.

    I think this wave of blog posts about how to avoid “open source burnout” and so forth might be more of a Github phenomenon. The barrier to entry has been set too low. Back in the day, if you wanted to file a bug report, you had to jump through hoops, and those hoops required reading contributor guidelines and learning how to submit a bug report. Find the mailing list, see which source control they used (if they used source control), see what kind of bug tracker they used (if they used one), figure out which form to fill in and what to submit where… Very often, in the process of producing a bug report that would even pass the filters, you would solve the problem yourself, or at the very least produce a very good bug report that nearly diagnosed the problem.

    Now all of this “social coding” is producing a bunch of people who are afraid of putting code out there due to having to deal with the beggar masses.

    Just don’t.

    1. 7

      I totally agree that your own needs are the top priority if you are an OSS provider. Nobody has a divine right to your time.

      I do think that having people be able to report bugs easily is really good. For even relatively small projects, this also serves as a bit of a usability forum, with non-maintainers able to chime in and help. This can give the basis for a supportive community so the owner isn’t swamped with things. Many people want to help as well.

      Though if this is your “personal project”, then it could be very annoying (I think you can turn off issues in GH luckily?).

      Ultimately though, the fact that huge projects used by a bazillion tech companies have funding of around $0 is shameful. Things like Celery, used by almost every major Python shop, do not have the resources to package releases because it’s basically a couple people who spend their time getting yelled at. We desperately need more money in the OSS ecosystem so people can actually build things in a sustainable way without having to suffer all this stress.

      Hard to overestimate how much more bearable a stable paycheck makes things.

      1. 5

        “Back in the day, if you wanted to file a bug report, you had to jump through hoops”

        This is where I disagree. Both maintainer and other contributors’ time are valuable. Many folks won’t contribute a bug report or fix if you put time-wasting obstacles in their path. Goes double if they know it was there intentionally. I remember I did one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report on whatever I found.

        I doubt I’m the only one out there who’s more likely to help when it’s easy to do so.

        1. 5

          This is where I disagree. Both maintainer and other contributors’ time are valuable.

          !!!!!

          I remember I did one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report on whatever I found.

          @manishearth, who set up http://starters.servo.org, dropped this very nice sentence about contribution: “People don’t start out serious, they start out curious.”

          1. 4

            The problem is that projects don’t survive on such drive-by fixes alone. Yes, you fixed a bug and that’s a good thing, but the project would probably still run along just fine without that fix. And you never came back. In the long term, what projects have to care about are interested people who keep coming back. The others really don’t matter that much.

            1. 5

              I think this is a bit like a customer acquisition funnel.

              Every contributor first started off by providing a drive-by fix. If they do it enough, they end up contributing a lot. Now you have full-time contributors.

              1. 1

                Sure but the question was about how high the bar for such drive-by contributions can be while still keeping a project healthy, based on the premise that making drive-by contributions too easy can result in toxic community behaviour overwhelming active maintainers.

                1. 3

                  The “height of the contribution bar” as quality control is - in my experience - a myth. Denying low-quality contributions is not.

                  I’ll explain why: the bar to unfounded complaints and trolling is always very low. If you have an open web form somewhere, someone will mistake it for a garbage bin. And that’s what sucks you down. Dealing with those in an assertive manner gets easier when you have a group.

                  The bar to attempting contribution should be as low as possible. You’d want to make people aware that they can contribute and that they can get started very easily. You will always have to train people - projects have workflows, styles, etc. that newcomers can’t all learn in one go. Mentoring also gets somewhat easier as a group.

                  Saying “no” to a contribution is hard. Get used to it; no one takes that off you. But it must be done.

                  Also, there’s a trend of blaming people who voice their frustrations for “not respecting the maintainers”. Pretty often those complaints have some truth in them. Often, a “you’re right, can we help you fix it yourself?” is better than throwing screenshots around on Twitter.

                  1. 1

                    I agree with you but quality control is, again, a separate question. I wasn’t talking about quality control. The question is about how to best attract only those people with an appropriate kind of behaviour that won’t end up burning out maintainers, and whether a bar to contribution can factor into this.

                    I think JordiGH’s point was that if someone has to jump through some hoops to even find the right forum of communication to use (which mailing list and/or bug tracker, etc.), then just by showing up at a place where maintainers will listen, a contributor shows they have spent time and engaged their brains a bit to read the minimum necessary amount of text about how the project and its community works. This can be achieved, for instance, with a landing page that doesn’t directly ask people to submit code by pushing a simple button, but directs them to a document which explains how and where to make contributions.

                    If instead people can click through a social media website they sign up on only once, and then have their proposed changes to various projects appear in every maintainer’s face right away with minimal effort because that’s how the site was designed, it’s no surprise that mentoring new contributors becomes relatively harder for maintainers, is it? I mean, seriously, blog posts about depressed open source maintainers seem to mostly involve people using such sites.

              2. 1

                I’d considered this, but do we really have data proving it? And on projects trying to cast a wide net vs those that don’t? I could imagine that scenario would be fine for OpenBSD, which aims for quality, but a Ruby library or something might be fine with extra little commits over time.

                1. 2

                  I think you’ll always need at least one developer dedicated enough to give the project a home, integrate changes, drive releases, and so on.

                  A pile of drive-by patches and pull requests with nothing holding them together is not a “project”.

                  Edit: BTW you said “extra little commits” and I said “drive-by fixes alone”, so we may be talking past each other a bit… :)

            2. 3

              Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem; just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!

              Totally. In this scenario, you should just quit cold turkey.

              The rest of the post is more advice that I’ve found myself giving multiple times to people who do want to keep maintaining the project, or be active in their larger community, but aren’t super focused on that particular library anymore.

              1. 2

                There’s a lot of poor communication out there, with unstated assumptions on each side, in relationships generally, not just open source, and that drives a lot of frustration and resentment. There are dozens of books on the subject in the self-help aisle of bookstores. The points in the article are all good advice, but I think the best advice is to make clear on what terms you volunteer your work, and not be ashamed to say “I don’t want to do this, but feel free to do it or fork it” if it’s not scratching your itch.

                Personally, I’ve turned away issues resulting from old or bleeding-edge compiler or library releases, and from OSes or equipment I don’t run (doesn’t behave on Windows XP? doesn’t work with a Chinese clone of the hardware? Hell if I know…)

              1. 15

                The motivation seems to be insurance against a trusting-trust attack: https://www.reddit.com/r/rust/comments/718gbh/comment/dn90vo1

                Really awesome project!

                1. 18

                  It’ll also be generally useful for bootstrapping without needing a previous Rust binary blob.

                  1. 4

                    I considered doing that if I got resources. My idea was to just port the Rust compiler code directly to C or some other language, especially one with a lot of compilers. BASICs and toy Schemes are the easiest if you want diversity in implementation and jurisdiction. Alternatively, a Forth, Small C, Tcl, or Oberon if aiming for something one can homebrew a compiler or interpreter for. As far as certifying compilers go, I’d hand-convert it to Clight to use CompCert, or to a low IR of CakeML’s compiler to use that. Then, if the Rust code is correct and the Clight is equivalent, the EXE is likely correct. Aside from the Karger-Thompson attack, CSmith-style testing comparing output of the reference and CompCert’d compilers could detect problems in the reference compiler where its transformations (esp. optimizations) broke it.

                    rain1 and I got a lot more tools for bootstrapping listed here:

                    https://bootstrapping.miraheze.org/wiki/Main_Page

                  1. 14

                    I just finished Greg Egan’s new novel, Dichronauts.

                    The world the book is set in has crazy physics:

                    The four-dimensional universe we inhabit has three dimensions of space and one of time. But what would it be like to live in a universe where the roles were divided up more evenly, so that there were two of each: two dimensions of space, and two of time?

                    Here’s his intro to the physics: http://gregegan.net/DICHRONAUTS/00/DPDM.html

                    He also has a little interactive sandbox simulator for the world: http://gregegan.net/DICHRONAUTS/02/Interactive.html

                    But what I really like about Egan is that despite having some of the wildest “hard SF” ideas in SF, he still has really novel social arrangements and characters. In this case, the main characters are a pair of symbiotic organisms: one that can walk around, and one that is immobile but can echolocate for its partner. Can you imagine what kind of relationship they might have? Egan actually answers this question really well, and in ways I didn’t see coming.

                    Egan definitely isn’t for everyone, but if your interest is piqued, then I would wholeheartedly recommend the book to you (and his earlier novels as well! Diaspora, Permutation City, pretty much all of them).

                    1. 6

                      Permutation City is still one of my favorite novels. Greg, if you’re on here, know that you’ve given me a lot of thoughtful enjoyment over the years. I owe you a beer.

                      1. 4

                        Greg Egan is great. Diaspora remains my favorite sci-fi book of all time. As you say, for the unique social arrangements as much as for the hard-SF. Polises are a very interesting idea.

                      1. 2

                        I’m not that big a fan of the dragon book. Spends way too much time on parsing and compiler frontends. I think Engineering a Compiler is a better choice.

                        1. 1

                          I agree that parsing theory is overdone. Personally, I’d say skip LR (and LALR etc.) completely. For quick prototyping, use a parser generator like ANTLR. For production quality (good error messages etc.), use a hand-written recursive-descent approach with precedence climbing.
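
                          To make that concrete, here is a minimal sketch of precedence climbing in Rust (toy token and expression types invented for illustration, nothing from a real project):

                          #[derive(Clone, Copy, PartialEq, Debug)]
                          enum Tok { Num(i64), Plus, Star }

                          #[derive(Debug)]
                          enum Expr { Num(i64), Bin(Tok, Box<Expr>, Box<Expr>) }

                          // Binding power of an infix operator; higher binds tighter.
                          fn prec(t: Tok) -> Option<u8> {
                              match t {
                                  Tok::Plus => Some(1),
                                  Tok::Star => Some(2),
                                  _ => None,
                              }
                          }

                          // Consume the next token, shrinking the input slice by one.
                          fn bump(toks: &mut &[Tok]) -> Option<Tok> {
                              let ts = *toks; // copy the slice reference out so we can reassign it
                              let (&t, rest) = ts.split_first()?;
                              *toks = rest;
                              Some(t)
                          }

                          // Parse an expression whose operators all bind at least as tightly
                          // as `min_prec`. Call with 0 to parse a whole expression.
                          fn parse_expr(toks: &mut &[Tok], min_prec: u8) -> Expr {
                              let mut lhs = match bump(toks) {
                                  Some(Tok::Num(n)) => Expr::Num(n),
                                  other => panic!("expected a number, got {:?}", other),
                              };
                              while let Some(&op) = toks.first() {
                                  match prec(op) {
                                      Some(p) if p >= min_prec => {
                                          let _ = bump(toks); // consume the operator
                                          // `p + 1` for the right side makes operators
                                          // left-associative; `p` would make them right-associative.
                                          let rhs = parse_expr(toks, p + 1);
                                          lhs = Expr::Bin(op, Box::new(lhs), Box::new(rhs));
                                      }
                                      _ => break,
                                  }
                              }
                              lhs
                          }

                          fn main() {
                              // 1 + 2 * 3 parses as Bin(Plus, 1, Bin(Star, 2, 3)).
                              let mut toks: &[Tok] =
                                  &[Tok::Num(1), Tok::Plus, Tok::Num(2), Tok::Star, Tok::Num(3)];
                              println!("{:?}", parse_expr(&mut toks, 0));
                          }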

                        1. 1

                          Great read, sounds fun, and I’m glad patches are going upstream so we all benefit.

                          Oh, and I write. A lot. But it’s nearly all internal. So, hey, if you want to know where most of my output has been going, it’s in there. If you’re an employee, then there you go.

                          Too bad it’s not public :-/

                          1. 4

                            I’m in the middle of first project with Rust. It’s a small compiler for a tiny functional language. I recently got the parser (handwritten recursive descent) working. Including tests, the project is currently ~650 LOC. I haven’t written anything significant in Rust outside of this.

                            Full algebraic data types + pattern matching add so much to compiler code. They provide a very natural way to build and work with ASTs and IRs. I’ve found the ADT experience in Rust to be roughly on par with the ADT experience in OCaml. I will say that pattern-matching on Boxes is a little annoying. (Though this is probably a product of my inexperience writing Rust.) I like having full ADTs much more than templated classes/structs in C++.
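
                            For anyone who hasn’t written compiler code in an ML-family language, this is the kind of thing meant (a toy expression type and evaluator, invented for illustration):

                            // Recursive cases go behind a Box so the enum has a known size.
                            enum Expr {
                                Lit(i64),
                                Add(Box<Expr>, Box<Expr>),
                                If(Box<Expr>, Box<Expr>, Box<Expr>),
                            }

                            // The match arms read almost like the evaluation rules of the
                            // language, and the compiler checks that no case is forgotten.
                            fn eval(e: &Expr) -> i64 {
                                match *e {
                                    Expr::Lit(n) => n,
                                    Expr::Add(ref l, ref r) => eval(l) + eval(r),
                                    Expr::If(ref c, ref t, ref f) => {
                                        if eval(c) != 0 { eval(t) } else { eval(f) }
                                    }
                                }
                            }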

                            Also, the build system and general environment for Rust have been great. I have a single C++ project that’s nicely set up, and I usually just copy that directory over, rm a bunch of stuff, and start fresh if I need to start something in C++. Getting anything nontrivial working in OCaml is also a huge pain. I believe every time I’ve installed OCaml on a machine, I’ve needed to manually revert to an older version of ocamlfind. Cargo is an incredible tool.

                            I chose to use Rust because I felt like compilers are a good use-case for Rust + I wanted to learn it. It’s really nice to have pattern matching + for loops in the same language. (Yes, OCaml technically has for-loops as well, but it really doesn’t feel the same. It’s nice to be able to write simple imperative code when you need to.)

                            This all being said, I’ve had plenty of fights with the borrow checker. I still don’t have a good grasp on how lifetimes + ownership work. I was a bit stuck on how to approximate global variables for the parser, so I had to make everything object-oriented, which was a bit annoying. I would also love love love to be able to destructure Boxes in pattern matching without having to enable an experimental feature (I understand that this can cause the pattern matching to be expensive, as it’s a dereference, but I almost always wind up dereferencing it later in the code).
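
                            On the Box point, for anyone curious, the difference looks like this (the experimental feature being referred to is, I believe, box_patterns; the toy type is made up for the example):

                            // A toy type again, just to show the shape of the problem.
                            enum Expr { Lit(i64), Add(Box<Expr>, Box<Expr>) }

                            // Nightly, with #![feature(box_patterns)], you can match straight
                            // through the Box:
                            //
                            //     match e {
                            //         Expr::Add(box Expr::Lit(0), rhs) => *rhs,
                            //         ...
                            //     }

                            // Stable: deref and match in two steps, which is the extra
                            // boilerplate being complained about.
                            fn simplify(e: Expr) -> Expr {
                                match e {
                                    Expr::Add(lhs, rhs) => match (*lhs, *rhs) {
                                        (Expr::Lit(0), r) => r, // 0 + r => r
                                        (l, r) => Expr::Add(Box::new(l), Box::new(r)),
                                    },
                                    other => other,
                                }
                            }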

                            1. 5

                              I’ve done a fair bit of parsing with Rust, mostly DWARF but also some other things.

                              I tend to write parsers with signatures like so:

                              pub enum Error {
                                  // Different kinds of errors for this crate...
                                  UnexpectedEof,
                              }
                              
                              pub type Result<T> = ::std::result::Result<T, Error>;
                              
                              pub struct Parseable<'a> {
                                  // Just here to show how a zero-copy approach would
                                  // work with lifetimes on the struct...
                                  subslice: &'a [u8]
                              }
                              
                              impl<'a> Parseable<'a> {
                                  fn parse(input: &'a [u8]) -> Result<(Parseable<'a>, &'a [u8])> {
                                      // ...
                                  }
                              }
                              

                              The &'a [u8] in the tuple is the rest of the input that was not consumed while parsing the Parseable<'a>.

                              Regarding variables that are “global” for the parser (they probably aren’t really “global”, b/c you probably don’t want two threads parsing independent things to stomp on each others' toes…), I would make something like a ParseContext and thread a &mut ParseContext as the first parameter to all the parser functions:

                              pub struct ParseContext {
                                  // Whatever state needs to be shared when parsing goes here...
                              }
                              
                              // ...
                              
                              impl<'a> Parseable<'a> {
                                  fn parse(ctx: &mut ParseContext, input: &'a [u8])
                                      -> Result<(Parseable<'a>, &'a [u8])>
                                  {
                                      // ...
                                  }
                              }
                              

                              If you’re working with UTF-8, you can use &'a str instead of &'a [u8].

                              1. 3

                                Thanks for the tips! I really like the enum of parse-specific errors; I’ll probably implement that in my own project. Interesting how Rust makes the zero-copy approach explicit. That’s pretty slick.

                                Once I have a better understanding of lifetimes, I’ll give this another look.

                                Also, parsing DWARF info sounds really cool :D

                              2. 1

                                Do you have the code hosted somewhere? I would love to read it.

                                1. 1

                                  Just realized this was in response to my comment and not fitzgen’s. I keep it in a private github repo, but don’t want to make it public until it’s “working.”

                                  I’ll probably post it here once the first go is complete. :)

                              1. 4

                                Just yesterday I set up Racket again and started going through the Redex tutorial. All I can say is “wow!” This is perhaps the best introduction I’ve had to any software library. The documentation is absolutely fantastic, and everything is going perfectly smoothly.

                                Thanks to everyone involved in the racket community!

                                1. 8

                                  The way the book is presented, as building a series of small libraries that you then leverage to make a larger, more complex application, is absolutely wonderful. The best-done “practical” book I’ve read.

                                  1. 9

                                    One really cool “observation” paper to come out of the memory management community is A Unified Theory of Garbage Collection by Bacon, Cheng, and Rajan

                                    This is one of my favorite papers of all time, highly recommended for everyone!

                                    1. 1

                                      Michael Bernstein does a nice overview of that very paper here: https://www.youtube.com/watch?v=XtUtfARSIv8

                                    1. 4

                                      Extremely excited about this. Does anyone know more about this part?

                                      Even when Electrolysis is finally released into the wild, though, Mozilla will be exceedingly cautious with the ramp-up. At first, e10s will only be enabled for a small portion of Firefox’s 500 million-odd users, just to make sure that everything is working as intended.

                                      Will there be an about:config setting for the rest of us to use if we want e10s?

                                      1. 14

                                        Yeah, there is a config for it; you can actually turn it on right now if you want. At first, only users who have no extensions installed will have it enabled by default. If you do have extensions installed and want to try e10s anyway, check this page out to see if they are all compatible. If you see “shimmed”, it means the extension should work, but will likely slow things down a lot.

                                        1. 2

                                          When you say “will likely slow things down a lot”, are you comparing against the current baseline single-processor experience, or against the improved performance of e10s?

                                          1. 10

                                            It will be slower than baseline/non-e10s performance. Traditionally, addons in the privileged chrome context could synchronously access JS objects/methods/whatever in content. Those two contexts are now in different processes, so shimming the access patterns some addons used involves blocking on IPC calls.

                                      1. 3

                                        Does anyone have experience with the Rust port of QuickCheck?

                                        https://github.com/BurntSushi/quickcheck

                                        1. 4

                                          A fair number of people are using it: https://crates.io/crates/quickcheck/reverse_dependencies — I’m actually really happy that so many projects are using property-based testing!

                                          It’s a pretty faithful port of Haskell’s QuickCheck, and even shares a similar implementation strategy with Arbitrary and Testable traits. (Traits are similar in many respects to Haskell’s typeclasses.)
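
                                          If anyone reading along hasn’t tried it, a property test looks something like this (the reverse-reverse identity, close to the example in the crate’s README):

                                          extern crate quickcheck;

                                          use quickcheck::quickcheck;

                                          // A deliberately naive reverse to test.
                                          fn reverse<T: Clone>(xs: &[T]) -> Vec<T> {
                                              let mut rev = vec![];
                                              for x in xs {
                                                  rev.insert(0, x.clone());
                                              }
                                              rev
                                          }

                                          fn main() {
                                              // quickcheck generates random Vec<u32> inputs via the
                                              // Arbitrary trait, checks the property, and shrinks any
                                              // failing input to a minimal counterexample.
                                              fn prop(xs: Vec<u32>) -> bool {
                                                  xs == reverse(&reverse(&xs))
                                              }
                                              quickcheck(prop as fn(Vec<u32>) -> bool);
                                          }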

                                          1. 1

                                            Just to make it clear BTW: My general impression of your work is that it is very good. I just haven’t put in enough time and don’t have enough familiarity with Rust (yet!) to properly commit to saying that in the post. I’d like to at some point.

                                            1. 1

                                              TBH now that I’m looking at the reverse dependency list I’m just going to mark it as “Probably very good”

                                            2. 1

                                              Awesome! We’re possibly making a rust API for SpiderMonkey’s Debugger API (the only interface is in JS right now, but Servo doesn’t want to support privileged JS) and the JS fuzzer has been incredibly helpful for catching and fixing bugs for the existing interface. My thinking is that to get the equivalent for the Rust interface, we should be using quickcheck.

                                          1. 4

                                            Working on emulating MESI (the cache coherence protocol) in Rust to get a better understanding of how it works. Have it mostly working, but the miss rates reported in my benchmark/exercising code seem to be off or something. For example, my false sharing test case is way slower than when each cache is operating on a unique block (as expected), but despite that it isn’t reporting the higher miss rates I would expect from getting cache lines invalidated by other caches' writes. Need to dig in more.
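
                                            For reference, the textbook transitions an emulator like that has to implement look roughly like this (a hand-written sketch of standard MESI, not the poster’s code); how the Shared-to-Modified upgrade gets counted is exactly the kind of accounting that decides whether false sharing shows up in the numbers:

                                            #[derive(Clone, Copy, PartialEq, Debug)]
                                            enum Line { Modified, Exclusive, Shared, Invalid }

                                            // Local CPU reads a line: returns (new state, was it a miss?).
                                            fn on_local_read(state: Line, peers_have_it: bool) -> (Line, bool) {
                                                match state {
                                                    // Data already present: hit, no state change.
                                                    Line::Modified | Line::Exclusive | Line::Shared => (state, false),
                                                    // Miss: fetch, landing in Shared if a peer holds the
                                                    // line, Exclusive otherwise.
                                                    Line::Invalid if peers_have_it => (Line::Shared, true),
                                                    Line::Invalid => (Line::Exclusive, true),
                                                }
                                            }

                                            // Local CPU writes a line: returns (new state, did it need the bus?).
                                            // Writing to a Shared line must invalidate the peers' copies first;
                                            // if that upgrade isn't counted somewhere, false sharing won't show
                                            // up in the miss rates even though the bus traffic is happening.
                                            fn on_local_write(state: Line) -> (Line, bool) {
                                                match state {
                                                    Line::Modified | Line::Exclusive => (Line::Modified, false),
                                                    Line::Shared | Line::Invalid => (Line::Modified, true),
                                                }
                                            }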

                                              1. 14

                                                Interesting tidbits I’ve found (or others have found and shared with me) so far:

                                                1. 15

                                                  The dangers come from UB, not from super-smart compilers. I’d personally lay the blame on poor language specifications.

                                                  also: http://developerblog.redhat.com/2014/10/16/gcc-undefined-behavior-sanitizer-ubsan/

                                                  1. 8

                                                    Curating my own collection of rss/atom feeds gives me the highest signal/noise ratio.

                                                      1. 1

                                                        Thanks! This is a good list!

                                                        I see you’re interested in D. What about D appeals to you? I have one friend who’s been a D enthusiast for a while, but I’m curious to get another perspective on why D is an appealing language.

                                                        1. 5

                                                          My “earn my bread & butter” languages are C/C++ and Ruby.

                                                          I work with embedded devices because I figure that’s where the largest future growth for the industry is.

                                                          So I am always squeezed for ram / rom / mips and always will be.

                                                          Yes, I know Moore’s Law, but in the embedded realm that just means they want to make it physically smaller, cheaper unit cost, longer battery life and doing more stuff.

                                                          So I always will be programming “close to the metal”.

                                                          Conversely when a recall costs millions, or worse, a bug can get someone killed….. you get pretty paranoid about bugs and testing.

                                                          D has a largish list of features that address all sides of the problem.

                                                          • Features that make the produced code as efficient as possible,
                                                          • and features that make it as safe as possible,
                                                          • and things that make the programmer as productive as possible.

                                                          All of these things really matter.

                                                          I also use Ruby for “Glue and String”. Build systems, data mining, global code analysis, one liners …..

                                                          Why? The dynamic typing / duck typing allows the code to “just flow” from my fingers, and I build up the code progressively: from something that just copies stdin to stdout, to something that with each tiny change does more and more of what I need, with each run having negligible compile / link / run time.

                                                          Curiously enough, D’s “auto” keyword, “generic all the time” approach, and fast compile times allow me to do the same…. but in a “type safe at compile time” manner. And it is way faster.

                                                          So I’m starting to use D instead of Ruby as well…

                                                          1. 2

                                                            Thanks for the reply! Your reasons are very similar to the reasons that I am such a big fan of Rust. I’m really happy to see the ongoing resurgence of languages interested in close-to-the-metal performance (D, Rust, Nim). Most of my day-to-day programming is in C++, and I would love to see a more modern alternative with stronger safety guarantees gain widespread popularity. That language may be D, or Rust, or Nim (or some mix of all three), but no matter what I think it will be net gain for the world when a variety of basic memory safety problems can be statically eliminated in popular languages without a major performance hit.

                                                            1. 3

                                                              I’m quite excited that there is a mini-“Cambrian Explosion” going on in non-academic, industrial-strength languages.

                                                              I’m betting on D, but watching Rust.

                                                              The next few years are going to be critical in whether we can avoid the “Worse is Better” effects of market forces.

                                                              Half of me is glad to see competition, half of me fears the split forces may lack the strength to unseat the incumbent (C++).

                                                    1. 11

                                                      This has been coming for so long it’s not even funny. I was surprised they claimed to still be maintaining it.

                                                      1. 4

                                                        Its fate has been clear since the day Mozilla decided to stop supporting Xulrunner. They had a vision of a rich portable application platform that was actually pretty compelling (you can build really cool alternate browsers like Conkeror using only JS on top of the Mozilla runtime) but since they’ve also decided to kill the extension mechanism it feels like any general-purpose functionality that isn’t needed to build their specific vision for Firefox is a casualty.

                                                        It’s a shame, because there are loads of people in the community with great ideas that wouldn’t be appropriate for mainline FF but can greatly enhance the browsing for some subset of people. For instance, when I have to use Firefox without the keysnail extension, I feel like I’ve lost twenty-seven IQ points and half my appendages.

                                                        1. 9

                                                          I was working on a xulrunner-based open source product (Songbird) at the time. The cancellation was preceded by the kind of neglect we’ve seen with Thunderbird. It sucked to be abandoned, but even at the time I thought Mozilla was right to be focussing on the web platform rather than native cross-platform apps.

                                                          1. 3

                                                            I used to use Songbird way back when. It was awesome, thanks for reminding me of that time and developing it back then :)

                                                      1. 6

                                                        Given the size of the repository, it’s not clear that Git would be significantly better or different.

                                                        In all the really big repos I’ve used, a limit gets hit and some wacky customizations are applied. The alternative being that you just have to put up with the sluggishness.

                                                        1. 3

                                                          Facebook actually hit git’s limits a while back and contributed patches, etc. to Mercurial to work with it. Really interesting stuff. But, stemming from that observation and other experiences, I am a superfan of breaking up repos in DVCS systems. I maintain a mercurial extension to coordinate many repos in a friendlier fashion than hg subrepos (guestrepo!).

                                                          I’m kind of persuaded that dvcs is a smell at a stereotypical company though, I think there’s room for an excellent central VCS out there.

                                                          1. 2

                                                            I think where we’re heading with Mercurial over the long term is a set of tools that makes doing centralized-model development painless with DVCS tools, while retaining most of the benefits (smaller patches, pushing several in a group, etc) of a DVCS workflow. I don’t think it’s a smell at all.

                                                            As for splitting repositories, there are definitely cases where it makes sense, but there’s also a huge benefit to having everything be in one giant repository.

                                                            (Disclaimer: I work on source control stuff for a big company, with a focus on Mercurial stuff whenever possible.)

                                                          2. 1

                                                            FWIW, I use git with mozilla-central and find it a much more pleasing experience than hg (which I still export to when pushing to shared remote repos). That said, it is also what I am more familiar with, although I did use hg exclusively for a year or so.

                                                            I really enjoy having everything in the same repo for many reasons, such as the lack of syncing overhead, but it does tend to push the performance limits of version control.

                                                          1. 1

                                                            Nice work, Medium, with the unique hashes. I also found three different postings of the same article on HN, with votes and comments split three ways between them. Maximum engagement.

                                                            1. 1

                                                              Medium automatically added it when I went to the article, and I didn’t think too much of it at the time :-/

                                                              I did find it an interesting article though. Sorry for duping.

                                                            1. 1

                                                              Nice read :)

                                                              I didn’t know that there were video lectures and notes available for MIT’s advanced data structures course, that’s pretty neat: http://courses.csail.mit.edu/6.851/spring14/lectures/

                                                              Can anyone comment on what I might get from these lectures that I missed in Okasaki’s Purely Functional Data Structures?

                                                              My gut reaction to partial persistence (which I admit I don’t have any experience with): the reference management sounds like a nightmare. I’d rather be fully persistent and leverage a proper GC, since you probably end up implementing your own, crappy GC to deal with those references to older versions (kind of like Greenspun’s tenth rule, but with Lisp swapped out for “A Unified Theory of Garbage Collection”).
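
                                                              To make the “proper GC” point concrete: full persistence can be as simple as structural sharing behind reference counting. A minimal sketch in Rust, with Rc standing in for the garbage collector:

                                                              use std::rc::Rc;

                                                              struct Node<T> {
                                                                  head: T,
                                                                  tail: Stack<T>,
                                                              }

                                                              // A fully persistent stack: push returns a NEW version and every
                                                              // old version stays valid, sharing structure through Rc. The
                                                              // refcounts do the "is any version still using this node?"
                                                              // bookkeeping that a partially persistent design would hand-roll.
                                                              struct Stack<T>(Option<Rc<Node<T>>>);

                                                              // Manual impl: cloning only copies the Rc pointer, so T itself
                                                              // doesn't need to be Clone (a derive would wrongly require it).
                                                              impl<T> Clone for Stack<T> {
                                                                  fn clone(&self) -> Self { Stack(self.0.clone()) }
                                                              }

                                                              impl<T> Stack<T> {
                                                                  fn new() -> Self { Stack(None) }

                                                                  fn push(&self, head: T) -> Self {
                                                                      Stack(Some(Rc::new(Node { head, tail: self.clone() })))
                                                                  }

                                                                  fn peek(&self) -> Option<&T> {
                                                                      self.0.as_ref().map(|n| &n.head)
                                                                  }
                                                              }

                                                              fn main() {
                                                                  let v1 = Stack::new().push(1).push(2);
                                                                  let v2 = v1.push(3); // v1 is still a valid, unchanged version
                                                                  assert_eq!(v1.peek(), Some(&2));
                                                                  assert_eq!(v2.peek(), Some(&3));
                                                              }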

                                                              I’m curious about what the interaction between non-determinism and the retroactive data structures backing a time traveling debugger would look like.