1. 24
  1. 13

    FWIW, while I do like this list, I feel compelled to note that Fog Creek, as late as 2014, did not actually do 2, 3, 5, 6, or 7. Sure, any given project, at any given point, might’ve, but they certainly weren’t part of the culture. I have no idea what Glitch does these days on that front.

    1. 3

      So, did anyone bring this up at the lunch table or anything?

      “Hey about that Joel test, people on the internet actually think we do this stuff!”

      [Hearty laughter all around table]

      “Yeah those yokels will believe anything”

      Maybe a “fire and motion” tactic to bog down would-be competitors?


      1. 18

        Yes and no.

        First, “we didn’t do them” does not mean “we didn’t want to do them.” We, like most teams in real life, valued and tried to do things, but might fail because the reality of shipping got in the way. So, while I was there, at any given point, you probably could’ve found us doing all twelve of these…but not all in one project. And when one of these points slid by for long enough, it gradually dropped off the radar.

        Take FogBugz. FogBugz routinely had one-step builds, but not one-step deploys. Deploys, for a long time, consisted of manually, locally building a COM object, copying it to each web server, and registering it by hand with regsvr32. This continued after FogBugz On Demand was launched, and we did have outages from this. (One I remember specifically was Copilot getting taken down one day because someone had reordered database columns in SQL Server, by hand, for better aesthetics. They were in there in the first place because Copilot’s schema management at the time could only add columns, not delete, and they wanted to delete some extraneous ones.) Does that count as a violation of making a build in one step?

        Copilot never had daily builds, even when Joel was directly overseeing us. I don’t think Kiln did, either. But we had one-click builds and would deploy fairly often. That’s definitely a literal violation of making daily builds, but maybe it doesn’t count? (Especially when I could trivially have cron’d daily builds for both!)

        I could go on. Initial phases of projects often had “specs,” but they were rarely followed, and the finished project was often wildly different. Specs were rarely updated as the product was, so the result is that they were basically frozen-in-time musings about what we thought maybe things should look like. I actually have the Kiln 1.0 Spec in my office, and just looked at it, out of curiosity. A lot of these features did ship, but quite a few worked differently, a few so differently I’m not entirely sure it counts. And I don’t remember this spec being updated once we got going. (Something kind of evidenced by the fact that it was distributed on paper, in a binder, to the team.)

        Likewise, we had testers, but they couldn’t test the entire project. We kind of dogfooded, which kind of avoided this, but our dogfooding was done on a special server running a special build of the product that was built in a special way, and so its bug collection would frequently be different than what customers saw. And so on and so forth.

        I am not saying I don’t think the Joel Test has value. I actually think it does: specifically, I think it’s a great list of some important things I sure hope most dev teams are trying to do. (Except item 11. That can go die in a fire.) My issue with the Joel Test is that, in real life, I have never seen any single company actually pass. That’s fine if it’s an aspirational target, but too often it’s instead used as a way to judge. (StackOverflow Careers, in fact, at least used to do this explicitly, showing the Joel Test rank for each company. Fog Creek inevitably had a 12 because of course it did, incidentally.)

        I think the only one of these I genuinely found comical, and I do remember making fun of, is “Fix bugs before you make new ones.” If we actually did that, FogBugz 6 for Unix would never have shipped. “Keep your bug count from climbing too high” was definitely A Thing™, but the reality is that if you can ship, I dunno, file transfer in Copilot 2, but you still have ghosting issues, you’ll ship it.

        1. 5

          This is such a good comment, one that provides a foundation for empathy for teams that try to perpetually improve their own process, even while publishing publicly about their process. Sometimes “the grass is greener” even applies to a software shop you might have idolized in your youth. As I did, for Fog Creek. Thank you for sharing these details!

          I feel like “The Joel Test” was a real accomplishment at the time. These days, its lasting impact is much more “meta” than “concrete” – simply the idea that you should evaluate the “maturity” of a software team by the ubiquity of their (hopefully lightweight) processes, and the way it assists programmers in shipping code. I could even make a “2.0” version right now, modernized for 2020. I left some unchanged.

          1. Do you use git or another distributed VCS and is it integrated with a web-based tool?
          2. Can you run any project locally in one command?
          3. Can you ship any project to production in one command? (Or, do you use continuous integration?)
          4. Do you track bugs using an issue tracker to which everyone has read/write access?
          5. Do you tame your bug count weekly?
          6. Do you have a real roadmap and is there rough team-wide agreement on what it is?
          7. Does the value of a feature get elaborated in writing before the feature is built and shipped?
          8. Do programmers have quiet working conditions?
          9. Do you use the best tools money can buy?
          10. Do you have separate alpha/beta/staging environments for testing?
          11. Do new candidates write code during their interview?
          12. Do you watch users using your software and analyze usage data?
          13. Does your team dogfood early builds of the next version of your software?
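As a sketch of what item 3’s “one command” might look like in practice: a small script that chains the pipeline’s steps and stops at the first failure. The step commands below are placeholders I invented for illustration, not anything from the original list.

```python
import subprocess

def ship(steps, runner=None):
    """Run each deploy step in order, stopping at the first failure.

    Returns (completed_steps, failing_step_or_None).
    """
    if runner is None:
        # Default: actually execute each step in a shell.
        runner = lambda cmd: subprocess.run(cmd, shell=True).returncode
    done = []
    for cmd in steps:
        if runner(cmd) != 0:
            return done, cmd   # stop at the first broken step
        done.append(cmd)
    return done, None

# Placeholder step names -- a real project would substitute its own.
STEPS = ["make test", "make build", "make release"]
```

The point isn’t the specific steps; it’s that they live in one script anyone can run, instead of in one person’s head.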
          1. 3

            (Except item 11. That can go die in a fire.)

            Are you talking about the specific interviewing practices that Joel recommends (e.g. his Guerrilla Guide), or writing code during interviews at all? I do think whiteboard coding should die in a fire (even for people who, unlike me, can actually do it; see my profile). But writing code on an actual computer seems a lot more reasonable.

            1. 3

              I don’t like “whiteboard” coding interviews, but I do like basic coding interviews with a real development environment and think they should be a requirement for programming teams.

              1. 3

                White-boarding should definitely die. But I’m not sure I like coding in real time, either. Code submissions, sure—especially if there’s a good write-up of your approach. But coding on a foreign laptop with someone staring at you is not how most people code; I’ve seen great devs flail in this situation, and (when testing this technique) seen rejected candidates pass. So the signal-to-noise ratio just seemed really, really low.

                Nowadays, I do a take-home and then do behavioral and structural interviews. That seems to work far more reliably.

                1. 2

                  interviewing practices that Joel recommends (e.g. his Guerrilla Guide)

                  I clicked through to that when I read the article, and I have to say I disagreed with a lot of what I read. For example:

                  Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone.

                  Maybe this is different in the US. Here in the UK, the norm is to start everyone with 3 months probation, with a week’s notice during that period. If during the 3 months you decide they’re not a good hire, you just let them know, pay them their week’s notice (you wouldn’t want them to actually work for the remainder of the week) and you’re done. The risk of litigation is very low, unless you do something stupid. There is a small cost associated with trying someone out and letting them go, but you get to find all the good candidates who haven’t devoted years to studying interviewing.

                  recursion (which involves holding in your head multiple levels of the call stack at the same time)

                  In my mind, that’s exactly the opposite of what recursion is about. Using recursion allows you to take a problem and focus on a tiny bit of it, without too much big picture thinking. For example, if you’re recursing over a tree, you don’t have to worry about the different levels of the tree: you just focus on what to do with the current node, and pass the remaining subtree on to the next level. As long as you end up with a base case (which is usually fairly obvious) eventually, there really isn’t a lot of complexity involved.
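The parent’s point can be made concrete with a small sketch: summing a tree recursively, each call handles only the current node and delegates the subtrees to the recursive calls. The tuple encoding of the tree is invented for the example.

```python
def tree_sum(node):
    """Sum every value in a binary tree encoded as (value, left, right)."""
    if node is None:              # base case: empty subtree
        return 0
    value, left, right = node
    # Handle the current node; the subtrees are the recursive calls' problem.
    return value + tree_sum(left) + tree_sum(right)

# A small tree: 1 with children 2 and 3; 3 has a left child 4.
tree = (1, (2, None, None), (3, (4, None, None), None))
```

Nowhere do you reason about “multiple levels of the call stack at once”; you reason about one node and trust the function to handle the rest.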

                  Look for passion. Smart people are passionate about the projects they work on. They get very excited talking about the subject. They talk quickly, and get animated. Being passionately negative can be just as good a sign.

                  I would say a bit of passion is good, but people who are too passionate have difficulty working as part of a team. They want it to work just so, and don’t appreciate their manager or the customer telling them that it needs to work a different way. You don’t want to work with someone who is sulking or trying to undermine things because they didn’t get their way on a subject they care deeply about. Someone whose main motivation is their passion may also lose interest if assigned tasks which are necessary but not directly related to their area of interest.

                  The article also seems to change its tone halfway through: at the start, he’s determined to only hire the superstars. Later, he wants the far more modest “smart and gets things done”. It depends on your definition of superstars, but often the term is used for someone who can produce amazing work, but isn’t really a team player and can’t deal with the more mundane aspects (like getting stuff done).

                  1. 3

                    Joel’s posts are relatively dated at this point. When he wrote the Guerrilla Guide, it was definitely uncommon to have probation periods in the UK; they’re a relatively new phenomenon.

                    At the time, getting rid of bad hires was incredibly difficult in the UK and EU, compared to the US. By bad hires I don’t mean those doing something egregious like assaulting other staff, but those whose behaviour impedes forward progress: the type that would require quite a long chat with a lawyer to explain. Fog Creek was based in New York, which may also have influenced Joel’s writing. Different states have different provisions in employment law. Without research, I suspect that people hired in NY have much more protection than someone hired in Florida, or even here in Pennsylvania. The canonical example is California, which has significant protections for employees.

                    I’m glad you picked up on the “superstars” part. I don’t know if Joel would consider this a mistake, but his writing has been misinterpreted by many. This spawned many articles later on which fed into the cult of the rockstar programmer. I don’t think he had a desire to do this, but it’s interesting to see which ideas have proliferated and how the modest tones are lost.

                    People also fail to look at Joel’s environment and culture at Fog Creek. This is not the universal environment or situation where programmers will exist. Some will be working in academia; others may be running a rivet company (as a friend of mine does). The Fog Creek approach can’t just be applied in whole to these situations. There is now a much broader range of material on managing programmers, but it was relatively limited back in the early 2000s, especially if you could exist mostly in a technical bubble. There were some great books on managing creative people (think: design and advertising) that applied to programmers in a lot of ways, but these were easy to ignore. Programmers had no exposure to interview training. Now there is much more discourse on various options from hiring through to deployment of software.

          2. 4

            Suggested adding “ (2000)” to the title. The points have a different aftertaste when read with that in mind.

            1. 4

              On the Excel team we had a rule that whoever broke the build, as their “punishment”, was responsible for babysitting the builds until someone else broke it.

              I’m torn between curiosity of wanting to know what “babysitting the builds” could possibly mean and the fear of actually finding out.

              1. 3

                You kids and your CI tools …

                The daily build, particularly on a large project (and I’m sure Excel was a prime example of a large project), wasn’t entirely automated. I mean sure, there are makefiles and scripts and everything, but nothing like the kind of automation that we take for granted nowadays. So someone had to watch over the build process, and take note if and when it failed. They would then have to figure out if it failed because of some transient issue, like a network router being temporarily insane, or if someone had actually checked in a bug. If the latter looked more likely, that person had to track down the actual offender.

                The above-quoted rule provides some necessary motivation for the current babysitter to actually take this job seriously and follow through on what can often be a tedious process.
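The watch-and-track loop just described could be sketched roughly like this. All the callables (`run_build`, `recent_committers`, `is_transient`) are hypothetical stand-ins, not anything the Excel team actually used.

```python
def babysit(run_build, recent_committers, is_transient):
    """One pass of the nightly-build watch: retry apparently transient
    failures, otherwise name the likely culprits."""
    status = run_build()
    if status == "ok":
        return "build ok"
    if is_transient(status):        # e.g. a flaky network: just retry
        if run_build() == "ok":
            return "build ok after retry"
    # Looks like a real breakage: someone checked in a bug -- go find them.
    return "build broken; suspects: " + ", ".join(recent_committers())
```

The tedious part the rule incentivizes is exactly the last line: actually chasing down the suspects instead of shrugging and going home.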

              2. 2

                This has been seen a lot; it’s probably more interesting to ponder how many of these are still relevant in today’s environments.

                Many of these are built around the ideas of desktop software, distributed physically. Releases are thus infrequent, and it’s easy to miss or let slide things like daily builds. With web software being more the norm now, it’s a much bigger advantage and much easier to do Continuous Deployment, an even better version.

                Writing specs seems to call back to the era of Waterfall, which admittedly did fit somewhat better with shrinkwrap software. It seems rather less relevant in an Agile era, where we now look to get an MVP out as fast as possible and iterate rapidly on user feedback, based on the fact that we probably don’t know enough about the business domain to write a good spec. It certainly helps if you can push new releases and get customers using them in hours instead of months.

                In theory it’s still nice to have professional testers, but many businesses seem to have replaced that with having the customer do the test, using feature flags to expose features to a small percentage of users, etc.

                Having quiet working conditions never goes out of style, but also never got much easier to actually get in a real business.

                Getting the best tools is still true, though somewhat less relevant today with how much more slowly the hardware world seems to move.

                Source control is a big duh nowadays. But it seems weird to not mention automated testing. Nowadays, you’d expect anything to have some sort of automated test suite, even if it may not cover as much as you would like.
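As a tiny illustration of the kind of automated test you’d now expect any project to carry: plain assertions over a small function. (Most teams would reach for pytest or similar; the `slugify` function here is invented for the example.)

```python
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify():
    # Even a couple of asserts catch regressions a manual check would miss.
    assert slugify("The Joel Test") == "the-joel-test"
    assert slugify("  Mixed   CASE  ") == "mixed-case"
    assert slugify("") == ""

test_slugify()
```

Coverage may be thin, but “some automated suite runs on every change” is the modern baseline the comment is pointing at.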

                1. 3

                  Writing specs seems to call back to the era of Waterfall, which admittedly did fit somewhat better with shrinkwrap software. It seems rather less relevant in an Agile era, where we now look to get a MVP out as fast as possible and iterate rapidly on user feedback

                  I used to think this, but in retrospect “we don’t need a spec” is more often used as an excuse to avoid hard thinking about the problem. Just because you can’t have all the answers up-front doesn’t mean you shouldn’t at least try to come up with a coherent design.

                  1. 1

                    This. Consider two people working on a client and a server or two micro services. Building those without a specification would be like building a bridge from two ends, hoping that the ends line up without measuring before laying the foundations.
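One way to picture the “measure before laying the foundations” step: a shared, written contract that both the client author and the server author code against. The message shapes below are invented for illustration.

```python
from dataclasses import dataclass

# The shared "spec": both ends of the bridge agree on these shapes
# before either side starts building.
@dataclass
class CreateUserRequest:
    username: str
    email: str

@dataclass
class CreateUserResponse:
    user_id: int
    ok: bool

def handle_create_user(req: CreateUserRequest) -> CreateUserResponse:
    # Server end of the bridge: honoring the agreed response shape.
    return CreateUserResponse(user_id=1, ok="@" in req.email)
```

Whether the contract is dataclasses, an OpenAPI file, or protobuf definitions matters less than that it exists in writing before the two ends head toward each other.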

                    1. 1

                      It might be apocryphal since I can’t find a good source, but I recall hearing about basically this happening on the St. Louis Loop Trolley project: they built the rail line from two ends heading towards each other, and upon reaching the middle discovered that the two ends didn’t in fact line up.

                2. 2

                  Interesting to compare with practices today, not to criticize but to see what changed. I remember reading this much nearer to 2000 than to now, and my young self found it basically credible; maybe I was too inclined to believe the grown-ups at “real” software companies. :)

                  Users of GitHub (or a GitHub-alike) and a CI tool can check off a few items, plus some things not mentioned like code review tools. Test suites are a widespread expectation today but weren’t mentioned. Static analyzers/linters/autoformatters are kinda common, and we have cool dynamic profilers/fuzzers/etc.

                  A modern list might have more about operating a service not strictly code: easy, automated deploys, a good staging env, monitoring that can quickly track down new problems, and for big services maybe practices like canary servers, accelerating rollout, and open beta programs. Not only for SaaS: Chrome and Windows have frequent updates, telemetry, and beta programs too.

                  The paragraph about an up-to-date schedule now sounds kinda 90s. The intervening decades have largely been a move away from the “here’s the feature set of the next big version and here’s when it launches” model it hints at. Not that businesses don’t have schedules or product launches, but so much more of the work is just shipping as early, often, and incrementally as possible.

                  And quiet working conditions: man, so many folks at huge companies in open offices. Hope lots of folks keep the option of remote work.

                  1. 2

                    I remember this getting passed around at my first job. It was like a bolt of lightning in the early 2000s.

                    Now I hope it’s less striking (who doesn’t use version control?). Hope being the operative word.