I’d really appreciate it if you took notes on what made sense and what was unclear as you go through the tutorial. That kind of feedback is super helpful for us! Thanks :)
I followed this tutorial to get started on my falling sand game project*.
It’s a concise introduction to some really useful tools - using this ecosystem has been a total joy and has enabled me to build things in the browser with incredible performance, without sacrificing the web niceties I’m used to like quick feedback cycles and performance tracing via devtools.
The browser support is also really strong: my game (mostly, WIP!) works on mobile and in most browsers (I think).
Highly recommend this book!
Holy crap. That game is awesome. The way the different elements interact so intuitively is just incredible. I was playing around with destroying things with acid, but then I realized half my screen was acid and it was fun trying to get rid of it. I love how gas and lava interact, and also how ice will cause surrounding water to freeze. And also, putting a bit of lava under some wood and then using wind on the wood actually scatters the embers of the wood. Wow.
thank you so much!
falling sand games were a part of my childhood, I love their mode of play through experiment-building.
My eventual goal is to allow the user to program, fork, and share new elements of their own design, and mix them. Defining an element right now has an intentionally simple cellular automata api surface, and I hope to eventually figure out how to compile and link wasm modules in the browser to allow hundreds of elements, so you can try out usernameFoo’s “Alien Plant v4” against usernameBar’s “pink super acid”
I’ll need to understand the wasm toolchain a lot better to make that happen though
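For readers curious what an intentionally simple cellular-automaton element API could look like, here is a rough Rust sketch. All of the names here (Species, Cell, Api, update_sand, World) are hypothetical illustrations, not the game's actual code:

// Hypothetical sketch of a tiny falling-sand element API; none of these
// names come from the actual project.
#[derive(Clone, Copy, PartialEq)]
enum Species {
    Empty,
    Sand,
}

#[derive(Clone, Copy)]
struct Cell {
    species: Species,
}

const EMPTY: Cell = Cell { species: Species::Empty };

// The "API surface" an element sees: read and write neighboring cells
// relative to the cell currently being updated.
trait Api {
    fn get(&self, dx: i32, dy: i32) -> Cell;
    fn set(&mut self, dx: i32, dy: i32, cell: Cell);
}

// One element's behavior is just a function of its local neighborhood.
fn update_sand(cell: Cell, api: &mut dyn Api) {
    if api.get(0, 1).species == Species::Empty {
        // Fall straight down.
        api.set(0, 0, EMPTY);
        api.set(0, 1, cell);
    } else if api.get(-1, 1).species == Species::Empty {
        // Otherwise slide diagonally.
        api.set(0, 0, EMPTY);
        api.set(-1, 1, cell);
    }
}

// A minimal 3x3 world just to exercise the sketch; out-of-bounds reads
// come back as EMPTY and out-of-bounds writes are ignored.
struct World {
    cells: [[Cell; 3]; 3],
    x: usize,
    y: usize,
}

impl Api for World {
    fn get(&self, dx: i32, dy: i32) -> Cell {
        let (x, y) = ((self.x as i32 + dx) as usize, (self.y as i32 + dy) as usize);
        self.cells.get(y).and_then(|row| row.get(x)).copied().unwrap_or(EMPTY)
    }

    fn set(&mut self, dx: i32, dy: i32, cell: Cell) {
        let (x, y) = ((self.x as i32 + dx) as usize, (self.y as i32 + dy) as usize);
        if let Some(slot) = self.cells.get_mut(y).and_then(|row| row.get_mut(x)) {
            *slot = cell;
        }
    }
}

fn main() {
    let mut world = World { cells: [[EMPTY; 3]; 3], x: 1, y: 0 };
    world.cells[0][1] = Cell { species: Species::Sand };
    let cell = world.cells[0][1];
    update_sand(cell, &mut world);
    assert!(world.cells[1][1].species == Species::Sand);
}

The appeal of this shape is that each element only sees a small neighborhood through get/set, which keeps user-defined elements sandboxable and cheap to run per cell.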
Thank you! This is a bit silly considering that I posted this on a public forum, but my one request is to please not share the game more broadly yet, I have a lot of things I still want to implement before I show it to people outside the context of it being a tech demo. Posted it here because I appreciate this learning resource so much!
I particularly liked @burntsushi’s response, which begins like this:
I feel like you’re missing the forest for the trees. In particular, you seem to be touting balance as a counter to absolutist arguments that I don’t think actually exist. e.g.,
ergonomics are good, but not at any cost
ergonomics is not the only thing
[ergonomic is not] the most important one
I think you’d have a really hard time finding someone that would defend these principles. To me, that means you’ve built up a straw man.
Is there any way to specify the current project is using the wasm target so one could just use cargo build instead of relying on npm? I tried rustup override but I keep having an error about the wasm target not found, even though I just installed it on nightly.
So, yes, you can use cargo build to create the .wasm binary, you just have to supply the --target wasm32-unknown-unknown. However, to get the generated JavaScript API glue, you need to also run wasm-bindgen.
The npm run build-* commands just package them both up in one step for convenience.
If maintaining a popular free and open source software project is producing stress… don’t do it!
Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem and just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!
Now I’m gonna grump even more.
I think this wave of blog posts about how to avoid “open source burnout” and so forth might be more of a Github phenomenon. The barrier to entry has been set too low. Back in the day, if you wanted to file a bug report, you had to jump through hoops, and those hoops required reading contributor guidelines and how to submit a bug report. Find the mailing list, see which source control they used (if they used source control), see what kind of bug tracker they used (if they used one), figure out the right form and what to submit where… Very often, in the process of producing a bug report that would even pass the filters, you would solve the problem yourself, or at the very least produce a very good bug report that nearly diagnosed the problem.
Now all of this “social coding” is producing a bunch of people who are afraid of putting code out there due to having to deal with the beggar masses.
I totally agree that your own needs are the top priority if you are an OSS provider. Nobody has a divine right to your time.
I do think that having people be able to report bugs easily is really good. For even relatively small projects, this also serves as a bit of a usability forum, with non-maintainers able to chime in and help. This can give the basis for a supportive community so the owner isn’t swamped with things. Many people want to help as well.
Though if this is your “personal project”, then it could be very annoying (I think you can turn off issues in GH luckily?).
Ultimately though, the fact that huge projects used by a bazillion tech companies have funding of around $0 is shameful. Things like Celery, used by almost every major Python shop, do not have the resources to package releases because it’s basically a couple people who spend their time getting yelled at. We desperately need more money in the OSS ecosystem so people can actually build things in a sustainable way without having to suffer all this stress.
Hard to overestimate how much a stable paycheck makes things more bearable
“Back in the day, if you wanted to file a bug report, you had to jump through hoops”
This is where I disagree. Both maintainer and other contributors’ time are valuable. Many folks won’t contribute a bug report or fix if you put time-wasting obstacles in their path. Goes double if they know it was there intentionally. I remember I did one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report on whatever I found.
I doubt I’m the only one out there that’s more likely to help when it’s easy to do so.
This is where I disagree. Both maintainer and other contributors’ time are valuable.
!!!!!
I remember I did one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report on whatever I found.
@manishearth, who set up http://starters.servo.org, dropped this very nice sentence about contribution: “People don’t start out serious, they start out curious.”
The problem is that projects don’t survive on such drive-by fixes alone. Yes, you fixed a bug and that’s a good thing, but the project would probably still run along just fine without that fix. And you never came back. In the long term, what projects have to care about are interested people who keep coming back. The others really don’t matter that much.
I think this is a bit like a consumer acquisition funnel.
Every contributor first started off by providing a drive-by fix. If they do it enough, now they’re contributing a lot. Now you have full-time contributors.
Sure but the question was about how high the bar for such drive-by contributions can be while still keeping a project healthy, based on the premise that making drive-by contributions too easy can result in toxic community behaviour overwhelming active maintainers.
The “height of the contribution bar” as quality control is - in my experience - a myth. Denying low-quality contributions is not.
I’ll explain why: the bar to unfounded complaints and trolling is always very low. If you have an open web form somewhere, someone will mistake it for a garbage bin. And that’s what sucks you down. Dealing with those in an assertive manner gets easier when you have a group.
The bar to attempting contribution should be as low as possible. You’d want to make people aware that they can contribute and that they can get started very easily. You will always have to train people - projects have workflows, styles, etc. that people can’t all learn in one go. Mentoring also gets somewhat easier as a group.
Saying “no” to a contribution is hard. Get used to it; no one takes that off you. But it must be done.
Also, there’s a trend of blaming people who voice their frustrations for “not respecting the maintainers”. Pretty often there are complaints that have some truth in them. Often, a “you’re right, can we help you with fixing it on your own?” is better than throwing screenshots around on Twitter.
I agree with you but quality control is, again, a separate question. I wasn’t talking about quality control. The question is about how to best attract only those people with an appropriate kind of behaviour that won’t end up burning out maintainers, and whether a bar to contribution can factor into this.
I think JordiGH’s point was that if someone has to jump through some hoops to even find the right forum of communication to use (which mailing list and/or bug tracker, etc.), then just by showing up at a place where maintainers will listen, a contributor shows they have spent time and engaged their brain a bit to read the minimum necessary amount of text about how the project and its community works. This can be achieved, for instance, with a landing page that doesn’t directly ask people to submit code by pushing a simple button, but directs them to a document which explains how and where to make contributions.
If instead people can click through a social media website they sign up on only once and then have their proposed changes to various projects appear in every maintainer’s face right away with minimal effort, because that’s how the site was designed, it’s no surprise that mentoring new contributors becomes relatively harder for maintainers, is it? I mean, seriously, blog posts about depressed open source maintainers seem to mostly involve people using such sites.
I’d considered this, but do we really have data proving it? And on projects trying to cast a wide net vs those that don’t? I could imagine that scenario would be fine for OpenBSD aiming for quality, but a Ruby library or something might be fine with extra little commits over time.
Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem and just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!
Totally. In this scenario, you should just quit cold turkey.
The rest of the post is more advice that I’ve found myself giving multiple times to people who do want to keep maintaining the project, or be active in their larger community, but aren’t super focused on that particular library anymore.
There’s a lot of poor communication out there, with unstated assumptions on each side, in relationships generally, not just open source, and that drives a lot of frustration and resentment. There are dozens of books on the subject in the self-help aisle of bookstores. The points in the article are all good advice, but I think the best advice is to make it clear on what terms you volunteer your work, and to not be ashamed to say “I don’t want to do this but feel free to do it or fork it” if it’s not scratching your itch.
Personally, I’ve turned away issues resulting from old or bleeding-edge compiler or library releases, and from OSes or equipment I don’t run (doesn’t behave on Windows XP? doesn’t work with a Chinese clone of the hardware? Hell if I know…)
I considered doing that if I got resources. My idea was to just port the Rust compiler code directly to C or some other language, especially one with a lot of compilers. BASICs and toy Schemes are the easiest if you want diversity in implementation and jurisdiction. Alternatively, a Forth, Small C, Tcl, or Oberon if aiming for something one can homebrew a compiler or interpreter for. As far as certifying compilers go, I’d hand-convert it to Clight to use CompCert, or to a low IR of CakeML’s compiler to use that. Then, if the Rust code is correct and the Clight is equivalent, the EXE is likely correct. Aside from the Karger-Thompson attack, CSmith-style testing comparing the output of the reference and CompCert’d compilers could detect problems in the reference compiler where its transformations (esp. optimizations) broke it.
rain1 and I got a lot more tools for bootstrapping listed here: https://bootstrapping.miraheze.org/wiki/Main_Page
I just finished Greg Egan’s new novel, Dichronauts.
The world the book is set in has crazy physics:
The four-dimensional universe we inhabit has three dimensions of space and one of time. But what would it be like to live in a universe where the roles were divided up more evenly, so that there were two of each: two dimensions of space, and two of time?
But what I really like about Egan is that despite having some of the wildest “hard SF” ideas in SF, he still has really novel social arrangements and characters. In this case, the main characters are a pair of symbiotic organisms, one that can walk around and the other that is immobile but can echolocate for the other. Can you imagine what kind of relationship they might have? Egan actually answers this question really well, and in ways I didn’t see coming.
Egan definitely isn’t for everyone, but if your interest is piqued, then I would wholeheartedly recommend the book (and his earlier novels as well! Diaspora, Permutation City, pretty much all of them) to you.
Permutation City is still one of my favorite novels. Greg, if you’re on here, know that you’ve given me a lot of thoughtful enjoyment over the years. I owe you a beer.
Greg Egan is great. Diaspora remains my favorite sci-fi book of all time. As you say, for the unique social arrangements as much as for the hard-SF. Polises are a very interesting idea.
I’m not that big a fan of the dragon book. Spends way too much time on parsing and compiler frontends. I think Engineering a Compiler is a better choice.
I agree that parsing theory is overdone. Personally, I’d say skip LR (and LALR etc) completely. For quick prototyping use a parser generator like AntLR. For production quality (good error messages etc), use a hand-written recursive-descent approach with precedence climbing.
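For anyone who hasn’t seen precedence climbing before, here is a minimal, self-contained Rust sketch of the idea. The Token and Expr types and the binding powers are made up for illustration; they don’t come from any particular compiler:

// Minimal precedence-climbing sketch over an already-lexed token stream.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Token {
    Num(i64),
    Plus,
    Star,
    Eof,
}

#[derive(Debug)]
enum Expr {
    Num(i64),
    Binary(Box<Expr>, Token, Box<Expr>),
}

// Binding power of each infix operator; None means "not an operator".
fn precedence(tok: Token) -> Option<u8> {
    match tok {
        Token::Plus => Some(1),
        Token::Star => Some(2),
        _ => None,
    }
}

struct Parser {
    tokens: Vec<Token>,
    pos: usize,
}

impl Parser {
    fn peek(&self) -> Token {
        self.tokens.get(self.pos).copied().unwrap_or(Token::Eof)
    }

    fn bump(&mut self) -> Token {
        let tok = self.peek();
        self.pos += 1;
        tok
    }

    // Primary expressions are just numbers in this toy grammar.
    fn parse_primary(&mut self) -> Expr {
        match self.bump() {
            Token::Num(n) => Expr::Num(n),
            other => panic!("expected a number, got {:?}", other),
        }
    }

    // The precedence-climbing loop: keep consuming operators whose
    // precedence is at least `min_prec`, and recurse with a higher
    // minimum for the right-hand side so operators end up left-associative.
    fn parse_expr(&mut self, min_prec: u8) -> Expr {
        let mut lhs = self.parse_primary();
        while let Some(prec) = precedence(self.peek()) {
            if prec < min_prec {
                break;
            }
            let op = self.bump();
            let rhs = self.parse_expr(prec + 1);
            lhs = Expr::Binary(Box::new(lhs), op, Box::new(rhs));
        }
        lhs
    }
}

fn main() {
    // 1 + 2 * 3 parses as Binary(Num(1), Plus, Binary(Num(2), Star, Num(3))).
    let mut parser = Parser {
        tokens: vec![
            Token::Num(1),
            Token::Plus,
            Token::Num(2),
            Token::Star,
            Token::Num(3),
        ],
        pos: 0,
    };
    println!("{:?}", parser.parse_expr(0));
}

A real parser would return a Result instead of panicking and would handle parentheses and unary operators in parse_primary, but the loop above is the whole trick.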
Great read, sounds fun, and I’m glad patches are going upstream so we all benefit.
Oh, and I write. A lot. But it’s nearly all internal. So, hey, if you want to know where most of my output has been going, it’s in there. If you’re an employee, then there you go.
I’m in the middle of my first project with Rust. It’s a small compiler for a tiny functional language. I recently got the parser (handwritten recursive descent) working. Including tests, the project is currently ~650 LOC. I haven’t written anything significant in Rust outside of this.
Full algebraic data types + pattern matching add so much to compiler code. They provide a very natural way to build and work with ASTs and IRs. I’ve found the ADT experience in Rust to be roughly on par with OCaml’s. I will say that pattern-matching on Boxes is a little annoying (though this is probably a product of my inexperience writing Rust). I like having full ADTs much more than templated classes/structs in C++.
Also, the build system and general environment for Rust has been great. I have a single C++ project that’s nicely set up, and I usually just copy that directory over, rm a bunch of stuff, and start fresh if I need to start something in C++. Getting anything nontrivial working in OCaml is also a huge pain. I believe every time I’ve installed OCaml on a machine, I’ve needed to manually revert to an older version of ocamlfind. Cargo is an incredible tool.
I chose to use Rust because I felt like compilers are a good use-case for Rust + I wanted to learn it. It’s really nice to have pattern matching + for loops in the same language. (Yes, OCaml technically has for loops as well, but it really doesn’t feel the same. It’s nice to be able to write simple imperative code when you need to.)
This all being said, I’ve had plenty of fights with the borrow checker. I still don’t have a good grasp on how lifetimes + ownership work. I was a bit stuck on how to approximate global variables for the parser, so I had to make everything object-oriented, which was a bit annoying. I would also love love love to be able to destructure Boxes in pattern matching without having to enable an experimental feature (I understand that this can cause the pattern matching to be expensive, as it’s a dereference, but I almost always wind up dereferencing it later in the code).
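To make the Box-destructuring complaint above concrete, here’s a small sketch with a made-up AST type. On stable Rust you match through the Box by dereferencing and binding the children by reference; the nightly-only box_patterns feature lets you destructure the Box directly in the pattern:

// Illustrative only; Expr is a made-up AST, not the commenter's code.
// The box_patterns feature below is nightly-only.
#![feature(box_patterns)]

enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Stable workaround: match on the dereferenced value and bind by reference.
fn eval(e: &Expr) -> i64 {
    match *e {
        Expr::Num(n) => n,
        Expr::Add(ref lhs, ref rhs) => eval(lhs) + eval(rhs),
    }
}

// With box_patterns, the Box can be destructured (and moved out of)
// directly in the pattern.
fn eval_owned(e: Expr) -> i64 {
    match e {
        Expr::Num(n) => n,
        Expr::Add(box lhs, box rhs) => eval_owned(lhs) + eval_owned(rhs),
    }
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    assert_eq!(eval(&e), 3);
    assert_eq!(eval_owned(e), 3);
}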
I’ve done a fair bit of parsing with Rust, mostly DWARF but also some other things.
I tend to write parsers with signatures like so:
pub enum Error {
    // Different kinds of errors for this crate...
    UnexpectedEof,
}
pub type Result<T> = ::std::result::Result<T, Error>;
pub struct Parseable<'a> {
    // Just here to show how a zero-copy approach would
    // work with lifetimes on the struct...
    subslice: &'a [u8],
}
impl<'a> Parseable<'a> {
    fn parse(input: &'a [u8]) -> Result<(Parseable<'a>, &'a [u8])> {
        // ...
    }
}
The &'a [u8] in the tuple is the rest of the input that was not consumed while parsing the Parseable<'a>.
Regarding variables that are “global” for the parser (they probably aren’t really “global”, b/c you probably don’t want two threads parsing independent things to stomp on each other’s toes…), I would make something like a ParseContext and thread a &mut ParseContext as the first parameter to all the parser functions:
pub struct ParseContext {
    // Whatever state needs to be shared when parsing goes here...
}
// ...
impl<'a> Parseable<'a> {
    fn parse(ctx: &mut ParseContext, input: &'a [u8])
        -> Result<(Parseable<'a>, &'a [u8])>
    {
        // ...
    }
}
If you’re working with UTF-8, you can use &'a str instead of &'a [u8].
Thanks for the tips! I really like the enum of parse-specific errors; I’ll probably implement that in my own project. Interesting how Rust makes the zero-copy approach explicit. That’s pretty slick.
Once I have a better understanding of lifetimes, I’ll give this another look.
Just realized this was in response to my comment and not fitzgen’s. I keep it in a private github repo, but don’t want to make it public until it’s “working.”
I’ll probably post it here once the first go is complete. :)
Just yesterday I set up Racket again and started going through the Redex tutorial. All I can say is “wow!” This is perhaps the best introduction I’ve had to any software library. The documentation is absolutely fantastic, and everything is going perfectly smoothly.
Thanks to everyone involved in the Racket community!
The way the book is presented, as building a series of small libraries that you then leverage to make a larger, more complex application, is absolutely wonderful. It’s the best-done “practical” book I’ve read.
Extremely excited about this. Does anyone know more about this part?
Even when Electrolysis is finally released into the wild, though, Mozilla will be exceedingly cautious with the ramp-up. At first, e10s will only be enabled for a small portion of Firefox’s 500 million-odd users, just to make sure that everything is working as intended.
Will there be an about:config setting for the rest of us to use if we want e10s?
Yeah there is a config for it, you can actually do it right now if you want. At first, only users who have no extensions installed will have it enabled by default. If you do have extensions installed and want to try e10s anyway, check this page out to see if they are all compatible. If you see “shimmed” it means the extension should work, but will likely slow things down a lot.
When you say “will likely slow things down a lot”, are you comparing against the current baseline single-processor experience, or against the improved performance of e10s?
It will be slower than baseline/non-e10s performance. Traditionally, addons in the privileged chrome context could synchronously access JS objects/methods/whatever in content. Those two contexts are now in different processes, so shimming the access patterns some addons used involves blocking on IPC calls.
It’s a pretty faithful port of Haskell’s QuickCheck, and even shares a similar implementation strategy with Arbitrary and Testable traits. (Traits are similar in many respects to Haskell’s typeclasses.)
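For anyone who hasn’t used it, here is a minimal sketch of what a property test with the crate looks like (the property itself is made up for this note): you write a plain function over types that implement Arbitrary and hand it to the quickcheck function.

// Requires the quickcheck crate as a (dev-)dependency.
extern crate quickcheck;

use quickcheck::quickcheck;

// Property: reversing a vector twice gives back the original.
// Vec<u32> already implements Arbitrary, so quickcheck can generate
// and shrink inputs automatically.
fn prop_double_reverse(xs: Vec<u32>) -> bool {
    let once: Vec<u32> = xs.iter().cloned().rev().collect();
    let twice: Vec<u32> = once.into_iter().rev().collect();
    twice == xs
}

fn main() {
    // In a real project this would usually live in a #[test] function.
    quickcheck(prop_double_reverse as fn(Vec<u32>) -> bool);
}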
Just to make it clear BTW: My general impression of your work is that it is very good. I just haven’t put in enough time and don’t have enough familiarity with Rust (yet!) to properly commit to saying that in the post. I’d like to at some point.
Awesome! We’re possibly making a Rust API for SpiderMonkey’s Debugger API (the only interface is in JS right now, but Servo doesn’t want to support privileged JS), and the JS fuzzer has been incredibly helpful for catching and fixing bugs in the existing interface. My thinking is that to get the equivalent for the Rust interface, we should be using quickcheck.
Working on emulating MESI (the memory cache coherence protocol) in Rust to get a better understanding of how it works. I have it mostly working, but the miss rates reported in my benchmark/exercising code seem to be off or something. For example, my false sharing test case is way slower than when each cache is operating on a unique block (as expected), but despite that it isn’t reporting the higher miss rates I would expect from getting cache lines invalidated by other caches’ writes. Need to dig in more.
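For readers who haven’t looked at MESI before, here is a heavily stripped-down sketch (generic textbook MESI, not the emulator described above) of why another cache’s writes show up as extra misses: a write by another core invalidates your Shared copy, so your next access to that line misses even if you never touched the bytes the other core wrote, which is exactly false sharing.

// Generic sketch of MESI line states; not the emulator described above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum LineState {
    Modified,
    Exclusive,
    Shared,
    Invalid,
}

struct CacheLine {
    state: LineState,
}

impl CacheLine {
    // A local read hits unless the line is Invalid.
    fn read(&mut self) -> bool {
        match self.state {
            LineState::Invalid => {
                // Miss: refetch the line. Other sharers may exist, so we
                // conservatively load it in the Shared state.
                self.state = LineState::Shared;
                false
            }
            _ => true,
        }
    }

    // Snooped write from another cache: our copy must be invalidated.
    fn remote_write(&mut self) {
        self.state = LineState::Invalid;
    }
}

fn main() {
    let mut line = CacheLine { state: LineState::Shared };
    assert!(line.read());  // hit while Shared
    line.remote_write();   // another core writes somewhere in the same line
    assert!(!line.read()); // miss, even though "our" bytes never changed
}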
AFAIK, they are the only folks doing concurrent marking (marking happening concurrently with the mutator) and parallel marking (more than one marking thread). SpiderMonkey does much of sweeping concurrently with the mutator, and compaction is done in parallel but not concurrent with the mutator. We don’t do parallel or concurrent marking; we’re looking into concurrent marking as the next big architectural change for the collector.
Just thought it would be interesting to share it, as I am planning to go through it properly myself :)
it’s a great resource, and I recommend it highly!
I’d really appreciate it if you took notes on what made sense and what was unclear as you go through the tutorial. That kind of feedback is super helpful for us! Thanks :)
I followed this tutorial to get started on my falling sand game project*. It’s a concise introduction to some really useful tools - using this ecosystem has been a total joy and has enabled me to build things in the browser with incredible performance, without sacrificing the web niceties I’m used to like quick feedback cycles and performance tracing via devtools. The browser support is also really strong: my game (mostly, WIP!) works on mobile and in most browsers (I think). Highly recommend this book!
* https://maxbittker.github.io/sandtable/
https://github.com/MaxBittker/sandtable
This is so so so awesome <3
Holy crap. That game is awesome. The way the different elements interact so intuitively is just incredible. I was playing around with destroying things with acid, but then I realized half my screen was acid and it was fun trying to get rid of it. I love how gas and lava interact, and also how ice will cause surrounding water to freeze. And also, putting a bit of lava under some wood and then using wind on the wood actually scatters the embers of the wood. Wow.
That’s a really incredible project!
thank you so much! falling sand games were a part of my childhood, I love their mode of play through experiment-building.
My eventual goal is to allow the user to program, fork, and share new elements of their own design, and mix them. Defining an element right now has an intentionally simple cellular automata api surface, and I hope to eventually figure out how to compile and link wasm modules in the browser to allow hundreds of elements, so you can try out usernameFoo’s “Alien Plant v4” against usernameBar’s “pink super acid”
I’ll need to understand the wasm toolchain a lot better to make that happen though
This game is amazing. thank you. Also, I want the last two hours of my life back 😅
Thank you! This is a bit silly considering that I posted this on a public forum, but my one request is to please not share the game more broadly yet, I have a lot of things I still want to implement before I show it to people outside the context of it being a tech demo. Posted it here because I appreciate this learning resource so much!
I love the smoke effect!
As I sit here posting my girlfriend is whispering in my ear, “what is clone?”
Thanks! I adapted most of the fluid simulation code from here, learned a lot about webgl doing so! https://github.com/PavelDoGreat/WebGL-Fluid-Simulation
holy shit the plant actually catches on fire when it touches lava this is awesome
the dust can explode…..
If any of y’all go through the tutorial, we would super appreciate it if you took notes along the way and shared your feedback with us!
There was hearty discussion of this post in the rust subreddit. https://www.reddit.com/r/rust/comments/8asb4i/dark_side_of_ergonomics/
I particularly liked @burntsushi’s response, which begins like this:
https://www.reddit.com/r/rust/comments/8asb4i/comment/dx1d32o
Is there any way to specify the current project is using the wasm target so one could just use cargo build instead of relying on npm? I tried rustup override but I keep having an error about the wasm target not found, even though I just installed it on nightly.
If you look at what npm run build-debug and npm run build-release are doing, you’ll see that it isn’t very magic:
So, yes, you can use cargo build to create the .wasm binary, you just have to supply the --target wasm32-unknown-unknown. However, to get the generated JavaScript API glue, you need to also run wasm-bindgen.
The npm run build-* commands just package them both up in one step for convenience.
If maintaining a popular free and open source software project is producing stress… don’t do it!
Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem and just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!
Now I’m gonna grump even more.
I think this wave of blog posts about how to avoid “open source burnout” and so forth might be more of a Github phenomenon. The barrier to entry has been set too low. Back in the day, if you wanted to file a bug report, you had to jump through hoops, and those hoops required reading contributor guidelines and how to submit a bug report. Find the mailing list, see which source control they used (if they used source control), see what kind of bug tracker they used (if they used one), figure out the right form and what to submit where… Very often, in the process of producing a bug report that would even pass the filters, you would solve the problem yourself, or at the very least produce a very good bug report that nearly diagnosed the problem.
Now all of this “social coding” is producing a bunch of people who are afraid of putting code out there due to having to deal with the beggar masses.
Just don’t.
I totally agree that your own needs are the top priority if you are an OSS provider. Nobody has a divine right to your time.
I do think that having people be able to report bugs easily is really good. For even relatively small projects, this also serves as a bit of a usability forum, with non-maintainers able to chime in and help. This can give the basis for a supportive community so the owner isn’t swamped with things. Many people want to help as well.
Though if this is your “personal project”, then it could be very annoying (I think you can turn off issues in GH luckily?).
Ultimately though, the fact that huge projects used by a bazillion tech companies have funding of around $0 is shameful. Things like Celery, used by almost every major Python shop, do not have the resources to package releases because it’s basically a couple people who spend their time getting yelled at. We desperately need more money in the OSS ecosystem so people can actually build things in a sustainable way without having to suffer all this stress.
Hard to overestimate how much a stable paycheck makes things more bearable
“Back in the day, if you wanted to file a bug report, you had to jump through hoops”
This is where I disagree. Both maintainer and other contributors’ time are valuable. Many folks won’t contribute a bug report or fix if you put time-wasting obstacles in their path. Goes double if they know it was there intentionally. I remember I did one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report on whatever I found.
I doubt I’m the only one out there that’s more likely to help when it’s easy to do so.
!!!!!
@manishearth, who set up http://starters.servo.org, dropped this very nice sentence about contribution: “People don’t start out serious, they start out curious.”
The problem is that projects don’t survive on such drive-by fixes alone. Yes, you fixed a bug and that’s a good thing, but the project would probably still run along just fine without that fix. And you never came back. In the long term, what projects have to care about are interested people who keep coming back. The others really don’t matter that much.
I think this is a bit like a consumer acquisition funnel.
Every contributor first started off by providing a drive-by fix. If they do it enough, now they’re contributing a lot. Now you have full-time contributors.
Sure but the question was about how high the bar for such drive-by contributions can be while still keeping a project healthy, based on the premise that making drive-by contributions too easy can result in toxic community behaviour overwhelming active maintainers.
The “height of the contribution bar” as quality control is - in my experience - a myth. Denying low-quality contributions is not.
I’ll explain why: the bar to unfounded complaints and trolling is always very low. If you have an open web form somewhere, someone will mistake it for a garbage bin. And that’s what sucks you down. Dealing with those in an assertive manner gets easier when you have a group.
The bar to attempting contribution should be as low as possible. You’d want to make people aware that they can contribute and that they can get started very easily. You will always have to train people - projects have workflows, styles, etc. that people can’t all learn in one go. Mentoring also gets somewhat easier as a group.
Saying “no” to a contribution is hard. Get used to it; no one takes that off you. But it must be done.
Also, there’s a trend of blaming people who voice their frustrations for “not respecting the maintainers”. Pretty often there are complaints that have some truth in them. Often, a “you’re right, can we help you with fixing it on your own?” is better than throwing screenshots around on Twitter.
I agree with you but quality control is, again, a separate question. I wasn’t talking about quality control. The question is about how to best attract only those people with an appropriate kind of behaviour that won’t end up burning out maintainers, and whether a bar to contribution can factor into this.
I think JordiGH’s point was that if someone has to jump through some hoops to even find the right forum of communication to use (which mailing list and/or bug tracker, etc.), then just by showing up at a place where maintainers will listen, a contributor shows they have spent time and engaged their brain a bit to read the minimum necessary amount of text about how the project and its community works. This can be achieved, for instance, with a landing page that doesn’t directly ask people to submit code by pushing a simple button, but directs them to a document which explains how and where to make contributions.
If instead people can click through a social media website they sign up on only once and then have their proposed changes to various projects appear in every maintainer’s face right away with minimal effort, because that’s how the site was designed, it’s no surprise that mentoring new contributors becomes relatively harder for maintainers, is it? I mean, seriously, blog posts about depressed open source maintainers seem to mostly involve people using such sites.
I’d considered this, but do we really have data proving it? And on projects trying to cast a wide net vs those that don’t? I could imagine that scenario would be fine for OpenBSD aiming for quality, but a Ruby library or something might be fine with extra little commits over time.
I think you’ll always need at least one developer dedicated enough to give the project a home, integrate changes, drive releases, and so on.
A pile of drive-by patches and pull requests with nothing holding them together is not a “project”.
Edit: BTW you said “extra little commits” and I said “drive-by fixes alone” so we may be talking past each other a bit… :)
Totally. In this scenario, you should just quit cold turkey.
The rest of the post is more advice that I’ve found myself giving multiple times to people who do want to keep maintaining the project, or be active in their larger community, but aren’t super focused on that particular library anymore.
There’s a lot of poor communication out there, with unstated assumptions on each side, in relationships generally, not just open source, and that drives a lot of frustration and resentment. There are dozens of books on the subject in the self-help aisle of bookstores. The points in the article are all good advice, but I think the best advice is to make it clear on what terms you volunteer your work, and to not be ashamed to say “I don’t want to do this but feel free to do it or fork it” if it’s not scratching your itch.
Personally, I’ve turned away issues resulting from old or bleeding-edge compiler or library releases, and from OSes or equipment I don’t run (doesn’t behave on Windows XP? doesn’t work with a Chinese clone of the hardware? Hell if I know…)
Motivation seems to be insurance against a trusting trust attack: https://www.reddit.com/r/rust/comments/718gbh/comment/dn90vo1
Really awesome project!
It’ll also be generally useful for bootstrapping without needing a previous Rust binary blob.
I considered doing that if I got resources. My idea was to just port the Rust compiler code directly to C or some other language, especially one with a lot of compilers. BASICs and toy Schemes are the easiest if you want diversity in implementation and jurisdiction. Alternatively, a Forth, Small C, Tcl, or Oberon if aiming for something one can homebrew a compiler or interpreter for. As far as certifying compilers go, I’d hand-convert it to Clight to use CompCert, or to a low IR of CakeML’s compiler to use that. Then, if the Rust code is correct and the Clight is equivalent, the EXE is likely correct. Aside from the Karger-Thompson attack, CSmith-style testing comparing the output of the reference and CompCert’d compilers could detect problems in the reference compiler where its transformations (esp. optimizations) broke it.
rain1 and I got a lot more tools for bootstrapping listed here:
https://bootstrapping.miraheze.org/wiki/Main_Page
I just finished Greg Egan’s new novel, Dichronauts.
The world the book is set in has crazy physics:
Here’s his intro to the physics: http://gregegan.net/DICHRONAUTS/00/DPDM.html
He also has a little interactive sandbox simulator for the world: http://gregegan.net/DICHRONAUTS/02/Interactive.html
But what I really like about Egan is that despite having some of the wildest “hard SF” ideas in SF, he still has really novel social arrangements and characters. In this case, the main characters are a pair of symbiotic organisms, one that can walk around and the other that is immobile but can echolocate for the other. Can you imagine what kind of relationship they might have? Egan actually answers this question really well, and in ways I didn’t see coming.
Egan definitely isn’t for everyone, but if your interest is piqued, then I would wholeheartedly recommend the book (and his earlier novels as well! Diaspora, Permutation City, pretty much all of them) to you.
Permutation City is still one of my favorite novels. Greg, if you’re on here, know that you’ve given me a lot of thoughtful enjoyment over the years. I owe you a beer.
Greg Egan is great. Diaspora remains my favorite sci-fi book of all time. As you say, for the unique social arrangements as much as for the hard-SF. Polises are a very interesting idea.
I’m not that big a fan of the dragon book. Spends way too much time on parsing and compiler frontends. I think Engineering a Compiler is a better choice.
I agree that parsing theory is overdone. Personally, I’d say skip LR (and LALR etc) completely. For quick prototyping use a parser generator like AntLR. For production quality (good error messages etc), use a hand-written recursive-descent approach with precedence climbing.
Great read, sounds fun, and I’m glad patches are going upstream so we all benefit.
Too bad it’s not public :-/
I’m in the middle of my first project with Rust. It’s a small compiler for a tiny functional language. I recently got the parser (handwritten recursive descent) working. Including tests, the project is currently ~650 LOC. I haven’t written anything significant in Rust outside of this.
Full algebraic data types + pattern matching add so much to compiler code. They provide a very natural way to build and work with ASTs and IRs. I’ve found the ADT experience in Rust to be roughly on par with OCaml’s. I will say that pattern-matching on Boxes is a little annoying (though this is probably a product of my inexperience writing Rust). I like having full ADTs much more than templated classes/structs in C++.
Also, the build system and general environment for Rust has been great. I have a single C++ project that’s nicely set up, and I usually just copy that directory over, rm a bunch of stuff, and start fresh if I need to start something in C++. Getting anything nontrivial working in OCaml is also a huge pain. I believe every time I’ve installed OCaml on a machine, I’ve needed to manually revert to an older version of ocamlfind. Cargo is an incredible tool.
I chose to use Rust because I felt like compilers are a good use-case for Rust + I wanted to learn it. It’s really nice to have pattern matching + for loops in the same language. (Yes, OCaml technically has for loops as well, but it really doesn’t feel the same. It’s nice to be able to write simple imperative code when you need to.)
This all being said, I’ve had plenty of fights with the borrow checker. I still don’t have a good grasp on how lifetimes + ownership work. I was a bit stuck on how to approximate global variables for the parser, so I had to make everything object-oriented, which was a bit annoying. I would also love love love to be able to destructure Boxes in pattern matching without having to enable an experimental feature (I understand that this can cause the pattern matching to be expensive, as it’s a dereference, but I almost always wind up dereferencing it later in the code).
I’ve done a fair bit of parsing with Rust, mostly DWARF but also some other things.
I tend to write parsers with signatures like the Error / Result / Parseable example shown earlier.
The &'a [u8] in the tuple is the rest of the input that was not consumed while parsing the Parseable<'a>.
Regarding variables that are “global” for the parser (they probably aren’t really “global”, b/c you probably don’t want two threads parsing independent things to stomp on each other’s toes…), I would make something like a ParseContext and thread a &mut ParseContext as the first parameter to all the parser functions, as in the ParseContext example shown earlier.
If you’re working with UTF-8, you can use &'a str instead of &'a [u8].
Thanks for the tips! I really like the enum of parse-specific errors; I’ll probably implement that in my own project. Interesting how Rust makes the zero-copy approach explicit. That’s pretty slick.
Once I have a better understanding of lifetimes, I’ll give this another look.
Also, parsing DWARF info sounds really cool :D
Do you have the code hosted somewhere? I would love to read it.
Just realized this was in response to my comment and not fitzgen’s. I keep it in a private github repo, but don’t want to make it public until it’s “working.”
I’ll probably post it here once the first go is complete. :)
Just yesterday I set up Racket again and started going through the Redex tutorial. All I can say is “wow!” This is perhaps the best introduction I’ve had to any software library. The documentation is absolutely fantastic, and everything is going perfectly smoothly.
Thanks to everyone involved in the Racket community!
The way the book is presented, as building a series of small libraries that you then leverage to make a larger, more complex application, is absolutely wonderful. It’s the best-done “practical” book I’ve read.
This is one of my favorite papers of all time, highly recommended for everyone!
Michael Bernstein does a nice overview of that very paper here: https://www.youtube.com/watch?v=XtUtfARSIv8
Extremely excited about this. Does anyone know more about this part?
Will there be an about:config setting for the rest of us to use if we want e10s?
Yeah there is a config for it, you can actually do it right now if you want. At first, only users who have no extensions installed will have it enabled by default. If you do have extensions installed and want to try e10s anyway, check this page out to see if they are all compatible. If you see “shimmed” it means the extension should work, but will likely slow things down a lot.
When you say “will likely slow things down a lot”, are you comparing against the current baseline single-processor experience, or against the improved performance of e10s?
It will be slower than baseline/non-e10s performance. Traditionally, addons in the privileged chrome context could synchronously access JS objects/methods/whatever in content. Those two contexts are now in different processes, so shimming the access patterns some addons used involves blocking on IPC calls.
Does anyone have experience with the Rust port of QuickCheck?
https://github.com/BurntSushi/quickcheck
A fair number of people are using it: https://crates.io/crates/quickcheck/reverse_dependencies — I’m actually really happy that so many projects are using property based testing!
It’s a pretty faithful port of Haskell’s QuickCheck, and even shares a similar implementation strategy with Arbitrary and Testable traits. (Traits are similar in many respects to Haskell’s typeclasses.)
Just to make it clear BTW: My general impression of your work is that it is very good. I just haven’t put in enough time and don’t have enough familiarity with Rust (yet!) to properly commit to saying that in the post. I’d like to at some point.
TBH now that I’m looking at the reverse dependency list I’m just going to mark it as “Probably very good”
Awesome! We’re possibly making a Rust API for SpiderMonkey’s Debugger API (the only interface is in JS right now, but Servo doesn’t want to support privileged JS), and the JS fuzzer has been incredibly helpful for catching and fixing bugs in the existing interface. My thinking is that to get the equivalent for the Rust interface, we should be using quickcheck.
Working on emulating MESI (the memory cache coherence protocol) in Rust to get a better understanding of how it works. I have it mostly working, but the miss rates reported in my benchmark/exercising code seem to be off or something. For example, my false sharing test case is way slower than when each cache is operating on a unique block (as expected), but despite that it isn’t reporting the higher miss rates I would expect from getting cache lines invalidated by other caches’ writes. Need to dig in more.
I’m reminded of operational transformation, which I suppose is a method of optimistically merging effects.
Interesting tidbits I’ve found (or others have found and shared with me) so far:
They incorporate some of our asm.js code, not sure how much altogether. Looks like their asm.js frontend may be based on ours. https://github.com/Microsoft/ChakraCore/blob/master/lib/Runtime/Language/AsmJSUtils.h#L8
AFAIK, they are the only folks doing concurrent marking (marking happening concurrently with the mutator) and parallel marking (more than one marking thread). SpiderMonkey does much of sweeping concurrently with the mutator, and compaction is done in parallel but not concurrent with the mutator. We don’t do parallel or concurrent marking; we’re looking into concurrent marking as the next big architectural change for the collector.
They use card marking-style write barriers (a generic sketch of the idea follows after these notes). Each card is page-sized, so they are probably using write protection tricks to dirty cards. https://github.com/Microsoft/ChakraCore/blob/9229c3387b695b2e2fb247681b26d6e6514bc6d1/lib/common/Memory/RecyclerWriteBarrierManager.cpp#L31
There is a pull request open to add an initial wasm prototype. https://github.com/Microsoft/ChakraCore/pull/63
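To make the card-marking idea above a bit more concrete, here is a generic sketch of the scheme, shown as a plain software write barrier over a dirty-flag-per-card table. This is the textbook technique, not ChakraCore’s actual implementation; as noted above, they likely dirty page-sized cards via write protection rather than explicit barrier calls.

// Generic card-marking sketch; not ChakraCore's implementation.
// The heap is divided into fixed-size "cards"; whenever the mutator
// stores a pointer into an object, the card containing that slot is
// marked dirty so the collector only has to rescan dirty cards later.
const CARD_SIZE: usize = 4096; // page-sized cards, as described above

struct CardTable {
    heap_base: usize,
    dirty: Vec<bool>, // one dirty flag per card
}

impl CardTable {
    fn new(heap_base: usize, heap_size: usize) -> CardTable {
        CardTable {
            heap_base,
            dirty: vec![false; heap_size / CARD_SIZE + 1],
        }
    }

    // The write barrier: called on every pointer store `*slot = value`.
    fn note_write(&mut self, slot_addr: usize) {
        let card = (slot_addr - self.heap_base) / CARD_SIZE;
        self.dirty[card] = true;
    }

    // The collector walks only the dirty cards and rescans the objects
    // they cover for interesting pointers.
    fn dirty_cards(&self) -> Vec<usize> {
        self.dirty
            .iter()
            .enumerate()
            .filter(|&(_, &d)| d)
            .map(|(i, _)| i)
            .collect()
    }
}

fn main() {
    let mut table = CardTable::new(0x1000_0000, 1 << 20);
    table.note_write(0x1000_0000 + 5 * CARD_SIZE + 16);
    assert_eq!(table.dirty_cards(), vec![5]);
}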
Dangers of UB, not super smart compilers. I’d personally lay blame on poor language specifications.
also: http://developerblog.redhat.com/2014/10/16/gcc-undefined-behavior-sanitizer-ubsan/