1. 4

    Some of this seems factually incorrect, or possibly just very outdated. For example, the PRNG was previously (FreeBSD 10 at least, maybe 9, I'm not sure) Yarrow; it's now Fortuna in FreeBSD 11. Not RC4.

    Some are just personal preference (griping about periodic?).

    Other parts of this are sadly quite factual and indeed problematic (or outright dangerous). :(

    1. 3

      It was pointed out to me by TJ that RC4 is still being used in FreeBSD: https://en.wikipedia.org/wiki/RC4#RC4-based_random_number_generators

      There have been several attempts to convince FreeBSD to update to ChaCha20 as OpenBSD did. This one is from back in 2013: https://lists.freebsd.org/pipermail/freebsd-bugs/2013-October/054018.html

      Here is a diff from tedu in 2014: https://lists.freebsd.org/pipermail/freebsd-hackers/2014-May/045235.html

      1. 3

        Ah, for arc4random. Yeah, that certainly needs to be fixed. I thought the original post was talking about the kernel PRNG, or the supported random devices (/dev/(u)random). I don’t believe FreeBSD uses arc4random for /dev/(u)random like OpenBSD does; instead it pulls directly from the Yarrow/Fortuna entropy pool. I honestly don’t know why FreeBSD doesn’t just have arc4random pull directly from the kernel entropy pool the way /dev/(u)random does, given that they already use Yarrow/Fortuna. I think that would also obviate the weird fork-safety issues with arc4random.
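
        The fork-safety hazard mentioned above is easy to demonstrate with any userspace PRNG whose state lives in process memory: fork() duplicates that state, so parent and child emit the same “random” stream until one of them reseeds. A minimal Python sketch (using `random.Random` as a stand-in for a userspace arc4random-style generator, with `copy.deepcopy` modeling the post-fork state copy):

```python
import copy
import random

# A userspace PRNG keeps its entire state in process memory.
parent_rng = random.Random(12345)

# fork() duplicates that memory; deepcopy stands in here for the post-fork
# copy of the generator state that a child process would inherit.
child_rng = copy.deepcopy(parent_rng)

parent_values = [parent_rng.getrandbits(32) for _ in range(4)]
child_values = [child_rng.getrandbits(32) for _ in range(4)]

# Without reseeding after fork, both processes emit identical "random" streams.
assert parent_values == child_values
```

        Real arc4random implementations guard against this by detecting the fork (or wiping state at fork time) and rekeying from the kernel, which is exactly why pulling from the kernel pool directly sidesteps the problem.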

        Looks like OpenBSD switched to ChaCha20 in May 2014 (version 5.5)? NetBSD appears to have made the change to ChaCha20 as well. Not sure why, but they call arc4random “legacy”, though.

        1. 2

          I think NetBSD is calling just the name “legacy”. NetBSD no longer supports an ARC4-based generator, but the new ChaCha20-based generator implements the old API and name “arc4random” for historical/compatibility reasons. The OpenBSD manpage takes a different strategy of suggesting a backronym, “A Replacement Call for Random”.

          Anyway, it looks like that’s only in the NetBSD source code. The manpage doesn’t call it legacy or suggest that it’s deprecated, although it does gripe about the name:

          The name `arc4random' was chosen for hysterical raisins – it was originally implemented using the RC4 stream cipher, which has been known since shortly after it was published in 1994 to have observable biases in the output, and is now known to be broken badly enough to admit practical attacks in the real world. Unfortunately, the library found widespread adoption and the name stuck before anyone recognized that it was silly.

    1. 7

      I love this idea! A few years ago a friend of mine and I built an application called “Managing the Minutia.” The idea was that concepts were easy to get right, but most of the time was spent in the details. Similarly, most of our time isn’t usually spent on big things; it’s spent on thousands of small things. It made sense to us to build something to manage the thousands of small things. Cheers to you, it looks good. P.S. We ran into too many details to make ours work ;)

      1. 1

        Thanks bmercer!

        I agree with you about being mindful of every new feature and change that could make the product more complicated. I have a clear vision of what I want Firesub to be, and that helps me when deciding on new features.

      1. 6

        “Smaller, Faster Websites” took almost 3 seconds to do 74 requests and download 1.4MB of “small, fast content.” Listen folks, “mobile friendly” CSS does not mean 8 round trips and 1.4MB of JS and rubbish to give you a horrible, useless version of the full site so the developer didn’t have to write a mobile version from scratch. Web developers are piling on more and more rubbish with fonts and slow fade click animations that add absolutely zero to the usability of a site, while putting an enormous burden on our computers. Last night my laptop went from a load of 0.34 to well over 7 just from starting Chrome… Stop lying to people about making a small, fast website that is insanely large and ridiculously bloated and still doesn’t function well.

        BTW, I just put together a web stack that is 65KB for the JavaScript, HTML, and CSS, and it looks and functions reasonably well. Before I took the time to build this stack, the same application was 250 requests, 2.3MB of data, and a long time to load. Ultimately, the two look and behave exactly the same.

        1. 1

          I didn’t read the entire article, but I will say that there is perhaps a bit more to this story. I’ve been a Fi member since July and the experience has been good for me. The registration process has been fantastic, much better than any AT&T, T-Mobile, or Verizon experience I’ve ever had. The hardware support and phone experience is simple and clear… dare I say I love having fewer choices? Travelling to 4 countries and being charged less than $1 for data usage for the week… that’s awesome. So here’s where I think this is different: comparing Google as a carrier isn’t quite apples to apples. The Fi project must integrate multiple carriers, plus WiFi. The hold-up on the 6.0 OTA (it is available and you can install it) is that they are getting certified on Band 12 VoLTE for T-Mobile. There are also some other implications with that because of 911/emergency calling (legal?). Anyhow, anytime you have to certify something it takes time. I’m not trying to justify this, just provide some perspective. They’re new to wireless, it’s not flawless, and they’re doing things no one else is.

          1. 6

            Sadly, the author never bothers to explain how fast “ludicrously fast” would be, except that it would be less than 10 seconds. Unless I missed it? Sometimes I miss things.

            It seems like a good guide to using Chromium’s profiling stuff though.

            1. 10

              Yes, and profiling the loading of his page shows a sprightly 64 requests, for 5.3MB, in about 12.18 seconds.

            1. 6

              Now if only chrome could run without consuming 35% cpu while it isn’t even in focus…

              1. 13

                Browsers: ubiquitous, unavoidable, and atrocious no matter which flavor you choose.

                1. 5

                  Heh. Reminds me of this fix: http://marc.info/?l=openbsd-cvs&m=142831194326812&w=2

                  Every time Firefox got a mouse move event, it would essentially screen grab itself (!) which triggered a buffer grow/shrink loop in the X server, bouncing a pile of memory through mmap/munmap along the way.

                1. 3

                  We ran across a similar situation with IIS. I can’t remember the name of the default config file, but IIS explicitly blocks access to it. What it did not block, however, was the .bak file that UltraEdit left in place when you edited the file in production.

                  1. 6

                    For the love of all that is good, please don’t read the comments.

                    1. 9

                      You probably should avoid reading comments on any YouTube video.

                      1. 3

                        I never read the comments. Yet, because of your note I did. Thanks. Thanks a lot.

                        1. 3

                          The comments are a little harsh, but I’m curious if they’ve compared this method with typical soldering by hand. Are they really gaining anything with all of the effort?

                          I’m also a little surprised they even solder boards by hand. How does their technique compare to wave soldering in Nitrogen atmosphere?

                          1. 3

                            In aerospace in general the work is real engineering (and very conservative). It’s not too hard to find some references in a quick web search:

                            - https://goo.gl/Yn06K1
                            - https://goo.gl/t1LsOv

                            1. 1

                              In aerospace in general the work is real engineering (and very conservative). It’s not too hard to find some references in a quick web search:

                              - https://goo.gl/Yn06K1
                              - https://goo.gl/t1LsOv

                              The first paper claims there was no statistically significant reliability difference between convection reflow soldering, “JPL standard hand soldering”, and “optimized hand soldering”, and suggests a new study with a larger sample size.

                              The second paper seems to be about soldering leaded parts in general, and would presumably apply to any type of soldering.

                              1. 1

                                I only posted those to point out that there’s some evidence people have been looking. My guess is that there’s a body of work in journals or government archives from earlier in the space program; DTIC would probably be a good place to start. The current practice was probably built incrementally from lessons learned in many individual failures. If you were going to find a comparison to commercial techniques, it would be some kind of evaluation of a COTS module vs. its mil-spec equivalent.

                            2. 2

                              In aerospace you follow the rules and know when your life depends on the rules or when you’re just following the guidebook. I was flying around one time on an 86°F day and we still added carb heat on our descent, and kinda smiled at the thought of ice on that hot day. :)

                              1. 1

                                Don’t forget that if you are flying at 30,000ft the temperature could be below freezing. If you were in a prop though you probably weren’t that high. :)

                                1. 2

                                  Yeah, at 30k feet you’re going to be very cold. We were probably right around 3,500 feet up. We’re doing the avionics on a Glasair III soon, so this video was a good refresher.

                                2. 1

                                  I’m not saying they should do a half-assed job, but they seem to be choosing a weird middle ground.

                                  If they really want high quality solders, why not have a robot soldering in an inert atmosphere or clean room or something? There’s a higher up front cost, but in the long run it will give better, more precise, reproducible solders for a lower overall cost.

                                  Their current technique would be a lot more expensive in the long run because of all the extra steps and manual labor. Especially weird if there’s no evidence that their technique works any better than what any electronics tinkerer could do in ¼ the time.

                                  I’m not sure your example works because there’s only one list of steps for landing the plane. In the soldering case, there are multiple ways to do it, each with their own list of steps. I’m curious why they chose A, when B or C could achieve the same results cheaper or with stronger solders or both.

                                  1. 2

                                    “There’s a higher up front cost, but in the long run it will give better, more precise, reproducible solders for a lower overall cost.” You contradicted yourself in the same sentence. The reality is that robots do not do a better job for less cost. In the aviation industry the volume is far too low to tool up something like that, the parts are too varied, and they have already worked out the kinks of soldering by hand. As for the method of soldering, “strength of solder” and “cheap” are not the goals here. Joints that do not corrode matter. Not damaging a sensitive piece of electrical equipment matters. Visual inspection matters. Robots do not offer that same type of result. This is something that is expected to be in service for more than 30 years, not a 5-dollar phone that lasts 3 years.

                            1. 35

                              Thank you so much to all of our community and contributors. It’s been a long time coming, but I’m really proud of what we’ve done together.

                              I’m finally going to get some real sleep tonight, I think.

                              1. 7

                                Congratulations to you and the entire team.

                                1. 6

                                  Congrats! Hah, and Rust 1.1 goes beta today as well!

                                1. 9

                                  I think the author demonstrates the attitude that many Rust users have. The developers have been nothing but helpful on IRC, and everything about the paths taken for the language is transparent. Rust has a bright future ahead and will continue to gather praise as long as the culture of those who use it persists.

                                  1. 5

                                    I propose that MVC is so ambiguous that it effectively means nothing. People always struggle to say what semantics should actually be in each letter but almost always reality strikes and the lines have to be blurred. Or at least, that has been my experience (which is admittedly not much).

                                    1. 1

                                      In all the web applications I have built, the MVC or other pattern applied has had more to do with organizing code to allow the application to be developed quickly, or to grow properly. In some companies that means allowing one person on the team to do the DB layer, and another to implement message handling, validations, business logic, or any other libraries needed to make the application work. Similarly, when the application needs to change to handle future requirements, you don’t want to be ripping out large chunks of it.

                                      You are completely correct that reality strikes for many people, however folks who have built enough applications to know what to watch for seem to have better ways to handle reality. To them, the lines are clear and can be defined even if it doesn’t seem to make sense at first blush.
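
                                      For what it’s worth, the blurriness is easier to talk about with a concrete toy. Here is a minimal sketch of the three roles (all class and method names below are made up for illustration, not from any framework):

```python
class TaskModel:
    """Model: owns the data and the rules for changing it."""
    def __init__(self):
        self.tasks = []

    def add(self, title):
        # Validation lives with the data it protects.
        if not title.strip():
            raise ValueError("task title must not be empty")
        self.tasks.append(title.strip())


class TaskView:
    """View: turns model state into output; knows nothing about storage."""
    @staticmethod
    def render(tasks):
        return "\n".join("- " + t for t in tasks)


class TaskController:
    """Controller: translates user input into model calls, then renders."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, raw_title):
        self.model.add(raw_title)
        return self.view.render(self.model.tasks)


controller = TaskController(TaskModel(), TaskView())
print(controller.handle_add("write tests"))  # prints "- write tests"
```

                                      Where the lines blur in practice is exactly the point above: validation could arguably live in the controller, and rendering concerns creep into models; the split shown is just one defensible choice.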

                                    1. 9

                                      Author of that post. A friend let me know this was linked here.

                                      Any questions anybody wants to ask I’m here. :)

                                      1. 10

                                        Thanks for taking the time to document your experience. There are so many misconceptions on the internet that keep people from trying or doing new things. Glad you’re enjoying OpenBSD for your dev environment. I’ve been building and deploying web applications on OpenBSD for a long number of years, and it’s much more painful to move to another platform than it is to teach someone how to do things in OpenBSD. The startup scripts are simple and let you easily run your apps as a non-privileged user, and there are great healthcheck features and HA tools that don’t require a PhD to set up and use. Cheers.

                                        1. 7

                                          Indeed, so far it’s been a very rewarding experience. I dunno if I’ll be setting up a web server on OpenBSD any time soon, but I expect it to be less painful than with Linux.

                                          1. 2

                                            I am having a bit of trouble with the lid though. Closing it suspends, which is fine but then when I open it it’s all black.

                                            I’ve seen this issue on a few laptops - usually one of the following three fixes it:

                                             1. switching between consoles with `[CTRL]+[ALT]+[Fn]`, where Fn is between 1 and 6 (F5/ttyC5 is reserved for use by X)
                                             2. pressing `[FN]` and the internal/external monitor key
                                             3. pressing `[FN]` and the brightness keys
                                            
                                            1. 1

                                              I just tried them all and none of them worked. :/ I’m fine with not being able to suspend, it’s just an inconvenience.

                                              1. 1

                                                I’ll try it tomorrow when I wake up. Thanks! :)

                                          2. 8

                                            Maintainer of the node.js port here. If you are interested in testing (and you are running -current), here is a diff for 0.12.2 :D

                                            1. 3

                                              I hadn’t realised that Node.js doesn’t provide pre-compiled binaries for OpenBSD.

                                              I now really appreciate your work on this. Thank you very much. :)

                                              1. 3

                                                Cool! I’m not going to run -current until after I upgrade to 5.7, just to be able to write about the upgrade experience.

                                                But yeah, once I’m running -current I’ll probably use nvm, because I need an environment with several versions of Node.

                                                Thank you for your hard work, and for the Node.js bin that is keeping me alive right now. ;)

                                              2. 4

                                                I installed OpenBSD this week because of your post. So far, I love it! ^_^

                                              1. 26

                                                Preventing stale reads requires coupling the read process to the oplog replication state machine in some way, which will probably be subtle–but Consul and etcd managed to do it in a few months, so the problem’s not insurmountable!

                                                Consul and etcd implement Raft which is a proven consensus algorithm. Part of the issue with MongoDB, based on my following it from a distance, is they seem to be building their own distributed systems algorithms and they don’t appear to have the talent to accomplish it.

                                                And remember: always use the Majority write concern, unless your data is structured as a CRDT! If you don’t, you’re looking at lost updates, and that’s way worse than any of the anomalies we’ve discussed here!

                                                Does this actually help you in MongoDB? I am under the impression MongoDB does not support CRDTs, so it will simply drop writes, as shown in the analysis.

                                                On an emotional note, it’s so distressing reading about MongoDB. It would be one thing if they were just reimplementing the last 30 years of database technology because of NIH. But they are reimplementing it wrong. Yet it is massively popular. These are the things that make me depressed about the software industry and want to move to a farm.

                                                1. 9

                                                  +1 for wanting to move to a farm. And not just because some people are doing stupid things, but because well-seasoned developers are being ignored. It’s no longer that people just want to do the right thing, it’s that people want to be doing something so long as there is motion! More lines of code, more bug trackers, more issues fixed, more complexity, more features… fewer problems solved.

                                                  1. 4

                                                    But MongoDB is web scale.

                                                  2. 7

                                                    Have I ever told you of the merits of raising goats?

                                                    1. 2

                                                      Consul and etcd implement Raft which is a proven consensus algorithm.

                                                      There really aren’t any proven consensus algorithms running in the wild – and absent a system which mechanically and correctly translates proofs into code, there probably won’t be. Etcd and Zookeeper both have had consistency bugs, despite their theoretical backgrounds, owing to implementation errors. The often forgotten part of every distributed system is the ability of each and every human implementor of code in the critical path to have fully understood all of the possible failure cases at the time they wrote the code.

                                                      CRDTs can be application-resolved so every database ‘supports’ CRDTs, with smart enough applications. The implementation is often profoundly unpretty, though.

                                                      1. 7

                                                        There really aren’t any proven consensus algorithms running in the wild – and absent a system which mechanically and correctly translates proofs into code, there probably won’t be.

                                                      This is exactly why I said the algorithm is proven, not the implementations. MongoDB is not even running a theoretically proven algorithm; it appears to be a patchwork of attempts to get something working.

                                                        CRDTs can be application-resolved so every database ‘supports’ CRDTs, with smart enough applications

                                                        How is this possible if the database drops your writes?

                                                        1. 2

                                                        Sorry, I thought you were making an argument from authority, and wanted to highlight the difference.

                                                          CRDTs don’t have anything to do with consistency in the face of failed writes; they’re merely a technique for resolving differences between two apparently correct values with data structures.

                                                          1. 2

                                                            CRDTs don’t have anything to do with consistency in the face of failed writes; they’re merely a technique for resolving differences between two apparently correct values with data structures.

                                                            If I’m dropping writes then I’ve lost the other value, which is the problem.

                                                            1. 0

                                                            That’s an orthogonal problem. Every database can drop writes given sufficient partition; that doesn’t stop some from having CRDT implementations.

                                                              1. 0

                                                                Your statement was:

                                                                CRDTs can be application-resolved so every database ‘supports’ CRDTs, with smart enough applications.

                                                                The application cannot resolve anything if it does not have all of the writes because they have been dropped by the database.

                                                                So no, it is not an orthogonal problem.

                                                                Every database can drop writes given sufficient partition

                                                              Dropping writes doesn’t necessarily have anything to do with partitions; it’s about accepting a write and then throwing it away.

                                                                1. 2

                                                                  I feel like you’re wilfully ignoring the causal arrow in my statements, so I’m ending the conversation. Good luck!

                                                                  1. 1

                                                                    I am sorry you feel that way, you could simply explain how a CRDT helps when the database is discarding writes.

                                                            2. 2

                                                              CRDTs don’t have anything to do with consistency in the face of failed writes; they’re merely a technique for resolving differences between two apparently correct values with data structures.

                                                            They are data structures that consistently resolve causally parallel modifications into a single successor. If your DB doesn’t natively support them or expose the conflicts in any way, you cannot apply that technique.

                                                            For example, if you have a value A in the DB and it then accepts two writes with that single ancestor, call them A' and A'', and then internally resolves the conflict to either of them, where do you apply the CRDT merging logic?

                                                          2. 3

                                                            CRDTs can be application-resolved so every database ‘supports’ CRDTs, with smart enough applications. The implementation is often profoundly unpretty, though.

                                                        You can only use them if the DB offers some sort of control over conflict merging, right? If it drops the conflicts on the floor or just resolves to an arbitrary write, CRDTs would not help you in any way.

                                                            1. 1

                                                              That’s what CRDTs are: a conflict resolution mechanism. You can implement CRDTs with dumb pencil and paper, if you like.
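
                                                          Right, and the pencil-and-paper point is easy to make concrete. A G-Counter, one of the simplest CRDTs, is just a per-replica map where merge takes the element-wise maximum; the catch, as noted above, is that you need both conflicting values in hand to merge them. A small illustrative sketch (function and replica names are made up):

```python
# G-Counter CRDT: each replica increments only its own slot, and merge
# takes the per-slot maximum. Merge is commutative, associative, and
# idempotent, so replicas converge regardless of merge order.

def increment(counter, replica_id):
    counter = dict(counter)
    counter[replica_id] = counter.get(replica_id, 0) + 1
    return counter

def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

# Two replicas diverge from a common ancestor...
base = {}
replica_a = increment(increment(base, "a"), "a")  # "a" counted twice
replica_b = increment(base, "b")                  # "b" counted once

# ...and both merge orders resolve to the same total of 3.
assert merge(replica_a, replica_b) == merge(replica_b, replica_a)
assert value(merge(replica_a, replica_b)) == 3
```

                                                          The merge only works because both `replica_a` and `replica_b` survive to be merged; if the database silently discards one of them, there is nothing left for this logic to resolve.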

                                                              1. 0

                                                                But you need the conflicts in order to resolve them, which you don’t have if the database throws them out.

                                                                1. 1

                                                                  How on earth is this statement incorrect?

                                                        1. 12

                                                          I feel like I can’t open a web browser these days without reading about some conflict in the Great Demographic War between men in their 20s and everyone else (that is, some version of Luis versus the OP). I personally think it’s great that Luis is interested enough in his work that he wants to spend part of his weekend on the problem, and the dire admonitions to the contrary sound to me like people are afraid that the Luises of the world are out to make everyone else look like Skid Row bums and good-for-nothing layabouts.

                                                          It’s amusing to me that the commenters on that post immediately feign concern over programmer burnout, as if poor Luis is going to stumble into work on Monday morning a haggard and broken man, requiring weeks of recovery before he is well enough to utter the one-word solution to the problem. I guess Luis just needs to learn that the local politics require that he pretend to discover the solution at exactly 9:15 AM on Monday morning, and then move on to the next bug.

                                                          1. 40

                                                            I don’t think people are afraid that Luis will make them look bad. I think they worry that the higher-ups, after seeing Luis put in a weekend here and there, will come to expect it and begin to feel that they are entitled to having him (and everyone else) work weekends whenever they want. In my experience, this fear is entirely justified at most companies.

                                                            “If you give an inch, they’ll take a mile” has been the defining characteristic of labor-management relations at the majority of my employers, and has led to my being very careful, when I do get interested in a bug and poke at it on my own time, to make sure that nobody finds out.

                                                            David Graeber, in his book on debt, talks about how gifts, when repeated, became obligations–debts–in feudal societies:

                                                            But this introduces another complication to the problem of giving gifts to kings – or to any superior: there is always the danger that it will be treated as a precedent, added to the web of custom, and therefore considered obligatory thereafter. Xenophon claims that in the early days of the Persian Empire, each province vied to send the Great King gifts of its most unique and valuable products. This became the basis of the tribute system: each province was eventually expected to provide the same “gifts” every year.

                                                            He describes in more detail than I can summarize here how this is the consistent result of gifts between people in a hierarchical relationship (such as employer-employee).

                                                            1. 10

                                                              Part of it is about establishing a clean boundary for your employer. Once you’ve established that you’ll allow them to invade your personal time, you can expect it to be the norm from then on.

                                                              1. 9

                                                                I guess Luis just needs to learn that the local politics require that he pretend to discover the solution at exactly 9:15 AM on Monday morning, and then move on to the next bug.

                                                                Oh, would that it worked like that.

                                                                Instead, what happens is Luis’s co-workers arrive on Monday to find out that Luis, self-directed millennial autodidact that he is, hasn’t just discovered the solution, he landed it in production around 4am on Sunday. His co-workers, freshly rested and sipping coffee, slowly realize that Luis’s “solution” is actually a depth-first exploration of the problem space that bottoms out kind of near a solution, but if we’re honest—and Monday morning’s an excellent time to be honest—it’s a rolling clusterfuck. It has no tests, it has no separation of concerns, the code has some, uh, Luis-isms in it, it adds a number of unorthodox dependencies, it tightly couples a number of orthogonal services which results in overall SLA degradation, its sweeping changes have resulted in breakages of three feature branches other engineers had planned to merge this afternoon, and there isn’t a scrap of documentation to be found for the thing.

                                                                Luis rolls into the office late on Monday to find his co-workers angrily planning how best to roll back his changes and is hurt and dismayed. He’s hurt because he spent all that time fixing the problem and he learned so much about the new library he found to implement some of that. He can’t understand why his co-workers are being so territorial. “I don’t get it,” he says, “why would we waste all this time tearing down my changes? I’ve already fixed the problem! Let’s move on to the next bug!”

                                                                I, for one, am not worried about Luis burning out. I’m worried about his co-workers.

                                                                1. 2

                                                                  I worked at a company where the CEO (and founder) would log on and make changes to production on Sunday afternoons, but without going via dev/staging, so systems broke hard on the first regular deployment the following week…

                                                                2. 8

                                                                  I agree with the OP. Most of us sell our time for money. Ideally we’re selling about 40 hours a week in return for X salary. However many companies find ways to “encourage” people to get 60-80 hours a week of work in return for X salary. Not only does this give poor results in terms of quality and productivity, but it effectively puts a pretty strong downward pressure on salary and upward pressure on hours worked per week.

                                                                  Commenters aren’t afraid of looking bad. They are here representing the interests of working programmers rather than the interests of employers.

                                                                  Also. I didn’t see a mention of age in the OP. Probably not appropriate to bring that into the conversation. I know plenty of older programmers who burn the midnight oil (on billable hours).

                                                                  1. 17

                                                                    Also. I didn’t see a mention of age in the OP. Probably not appropriate to bring that into the conversation.

                                                                    I wouldn’t discount that just yet. I think age intersects importantly here.

                                                                    Now, consider: the median age of a Google employee is 29. Is it any surprise that the Googleplex features three free meals at their cafeteria, on-site haircuts, on-site healthcare… literally any possible perk they can throw at you? Including some targeted towards younger employees? Shouldn’t be: it’s in Google’s best interest to hire young programmers and keep them at work.

                                                                    This is also the reason companies that advertise “we hire based on your Github!” are actually saying “we hire based on your lack of external commitments.”

                                                                    Ageism in tech is a massive problem. I picked Google because they call their older employees “Greyglers” on their own damn diversity webpage! At one point they had a featured ‘Greygler’ who was 46. Google considers you “old and grey” at fucking 46.

                                                                    1. 1

                                                                      Meanwhile, at BMW:

                                                                      In 2007, the luxury automaker set up an experimental assembly line with older employees to see whether they could keep pace. The production line in Dingolfing, 50 miles northeast of BMW’s Munich base, features hoists to spare aging backs, adjustable-height work benches, and wooden floors instead of rubber to help hips swivel during repetitive tasks.

                                                                      The verdict: Not only could they keep up, the older workers did a better job than younger staffers on another line at the same factory. Today, many of the changes are being implemented at plants across the company.

                                                                      Via Bloomberg News.

                                                                  2. 3

                                                                    Agreed. This post seems to assume a lot about Luis’ motivation and what causes burnout.

                                                                    Firstly, being interested in a problem and wanting an answer doesn’t mean you’ve fallen victim to the sunk cost fallacy. Sometimes you’re just interested. I don’t usually work extra hours myself, but once in a while, I get to work on a cool problem and my brain just wants more. Similarly for tricky bugs. Why deny it? It’s no biggie.

                                                                    As for burnout… I’ve experienced burnout, and I’ve also sustained a pace of writing a lot of code for a long time without burning out. For me, the condition that leads to burnout is extreme overwork on something I don’t want to be doing. It just so happens that I really like building software, so it consumes a lot of my time, and there’s no risk of burnout because it’s something I want to do.

                                                                    1. 4

                                                                      I imagine the OP is looking back on experience and trying to do Luis a favor. But then, we all need to get our first speeding ticket sometime.

                                                                    1. 7

                                                                      Nice! @bcantrill how did you find the AMA format and experience?

                                                                      1. 16

                                                                        It was actually refreshingly good. To be totally honest, I went in assuming that this was the opportunity for every skeleton to come out of the closet (especially when the skeletons are so searchable!) and had prepared diplomatic answers to many mean-spirited questions. But surprisingly, the questions were (generally) earnest and thoughtful – and I especially enjoyed the questions from people who are contemplating computer science and/or software engineering and are wondering how to get started. Overall, a positive experience – and much, much more positive than I had anticipated.

                                                                        1. 5

                                                                          FWIW thanks! I quite enjoyed reading through it this morning.

                                                                          Thanks also for listing some recent talks in your opening. I admit to having been a bit baffled by the popularity of docker (seemed like a convoluted/glorified chroot), and am going to watch your talk on it and see if anything resonates.

                                                                          edit clarification: popularity of docker vs something “better” like jails/zones, or even using lxc directly

                                                                          1. 3

                                                                            Out of curiosity, since it doesn’t seem super likely that io.js and node are getting together anytime soon (and that’s beside the point anyways)…would Joyent be interested in moving off of v8 as a JS runtime if a better platform, like an open-sourced engine from Microsoft or duktape or something, became available?

                                                                            1. 3

                                                                              One vision for future development of Node that we’ve spitballed around the office is definitely to refactor the platform so that it does not depend on a particular Javascript runtime. I’ve personally played around with Duktape and I think it’s eminently embeddable and quite neat. Obviously the performance will not be anywhere near that of a JIT-capable VM like V8 or Spidermonkey, but it’s interesting nonetheless.

                                                                              Though there are chunks of the codebase that are written in C++ today, and clearly the entire platform is relatively wedded to running on top of V8 and libuv, at its heart Node is really a sort of “Javascript standard library”. There’s no reason not to write (or rewrite) most of that in Javascript, with a small, well-defined C (not C++) layer underneath that exposes the parts of a Javascript VM and the underlying OS that are commonly available.

                                                                              It would then be reasonable to use Node as a Javascript library/platform on top of basically any VM – whether on top of V8 (with a stable C API/ABI on top), or Duktape, or JSC, etc. The node binary could even load different interpreters at runtime from shared libraries.

                                                                              Though, obviously this is not just a decision for Joyent. As we move toward a Node Foundation it is even more of a community-centric decision than ever before, both in terms of setting the direction and doing the work required.
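
                                                                              As a rough sketch of what “not depending on a particular Javascript runtime” could look like, here is the seam expressed as an interface (in Go purely for brevity; the comment above proposes a C layer, and every name below is hypothetical):

```go
package main

import "fmt"

// JSEngine is a hypothetical, minimal runtime abstraction: the
// "standard library" layer targets only this interface, and concrete
// backends (V8, Duktape, JSC, ...) live behind it.
type JSEngine interface {
	Eval(src string) (string, error)
	Close() error
}

// echoEngine is a stand-in backend used only to show the shape of the
// seam; it does not actually evaluate JavaScript.
type echoEngine struct{}

func (echoEngine) Eval(src string) (string, error) { return "evaluated: " + src, nil }
func (echoEngine) Close() error                    { return nil }

// run is "platform" code written purely against the interface, so the
// backend could be swapped, or even loaded from a shared library at
// runtime, without touching it.
func run(e JSEngine, src string) (string, error) {
	defer e.Close()
	return e.Eval(src)
}

func main() {
	out, _ := run(echoEngine{}, "1 + 1")
	fmt.Println(out)
}
```

                                                                              The point of the sketch is only the shape: everything above the interface is portable, and the per-VM glue is confined behind it.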

                                                                              1. 1

                                                                                I think the distinction between Node-The-Standard-Library and Node-The-Runtime-Implementation is an important one to make.

                                                                                I think that building a minimal C scaffolding (libuv + a JS runtime) would be a great place to go. Is there anyone currently working on that? Is there a test suite for Node or io.js that verifies the Node-The-Standard-Library behavior (as seen from JS userland), and which we could use to test conformance of an alternative Node-The-Runtime-Implementation?

                                                                                1. 2

                                                                                  There are many challenges with providing an API compatibility test (or certification) suite. Some recent related work is StrictEE, a modified EventEmitter that can make assertions about event firing cardinality, ordering, etc. This is a critical step towards being able to prove what the API of Node even is, given how much of it is exposed through the EventEmitter pattern.

                                                                                  If you’re keen to work on things like that, there is definitely interest!

                                                                            2. 1

                                                                              Thanks for doing that @bcantrill! I know there are all sorts of interesting folks doing open source stuff, but I found it refreshing to see your name out there. Cheers!

                                                                          1. 2

                                                                            I’m reading this on the same day I’m implementing JWT (JSON Web Tokens). It is, perhaps, a happy medium between the two. For my scenario I’m using it for both a browser-based implementation and an API. Because this is my first attempt at implementing such a thing, I’m not sure what to expect. It seems reasonable in concept… I knew I wasn’t going down the OAuth path before I started this implementation.
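
                                                                            For anyone in a similar first-implementation spot, the HS256 signing step can be sketched with nothing but the Go standard library. The function name, claims, and secret below are invented for illustration, and a real deployment should lean on a vetted JWT library with constant-time verification:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// b64 encodes one token segment using the URL-safe, unpadded
// alphabet that the JWT compact serialization requires.
func b64(data []byte) string {
	return base64.RawURLEncoding.EncodeToString(data)
}

// signHS256 builds a compact JWT:
// base64url(header) "." base64url(claims) "." base64url(HMAC-SHA256 signature)
func signHS256(claims map[string]interface{}, secret []byte) (string, error) {
	header, _ := json.Marshal(map[string]string{"alg": "HS256", "typ": "JWT"})
	payload, err := json.Marshal(claims)
	if err != nil {
		return "", err
	}
	signingInput := b64(header) + "." + b64(payload)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return signingInput + "." + b64(mac.Sum(nil)), nil
}

func main() {
	tok, err := signHS256(map[string]interface{}{"sub": "user-42"}, []byte("demo-secret"))
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}
```

                                                                            Verification is the same computation run in reverse: recompute the MAC over the first two segments and compare it to the third with hmac.Equal.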

                                                                            1. 8

                                                                              They lost me at “install jdk” :(

                                                                              1. 1

                                                                                Oh the tangled web2.0 we weave :)

                                                                                1. 4

                                                                                  I worked at a place that said, “We’re looking for the right type of person for our environment. Then we can train and shape them in the areas they need help.” It was probably one of the most pleasurable places I’ve worked. They made good on their promise. The first few weeks were really intimidating… you start off wondering if you’re going to have the chops to hang with the others that work there. In the end, it makes you think about things before you do them, it makes you care about your work, and you tend to come out the other side a better engineer.

                                                                                  1. 2

                                                                                    I’ve recently done a couple of file processors at work that have to rip through a large amount of data (to us), and I paid attention to things like this while I was working on them. Reading in the file and keeping things as byte arrays significantly simplifies working with the data. The other area where I was careful was to make sure I wasn’t allocating things in my processing loops. Everything gets set up once and is reused or set during processing. This allows the program to process something like 650k records in just under 2 seconds. There are many useful features built into the Go toolchain to make sure you’re not leaking things that the GC has to pick up. The benchmark tests are very convenient too. Nice article.