Do programmers have quiet working conditions?
[…]
Proposed update: Remove rule
Sorry, but that removal is nonsense, especially the statements about noise-cancelling headphones. Requiring headphones while not listening to anything makes no sense. What I'd do instead is extend the rule to cover visual interference as well.
Do new candidates write code during their interview?
For that one I think there are three good solutions: homework code, looking at a candidate's GitHub and other public work, or coding in a quiet room. The reason I really dislike the other approaches is that you want to know what a person does on a daily basis, not how well they scribble something on a whiteboard.
Likewise, interview coding done with someone watching over your shoulder usually means there isn't enough time to even sit down and get started properly, so the coding test ends up like a school test, which again isn't what your company should be looking for.
A lot of these things also read like dogmas, and while they are surely adopted with good intentions, that isn't what makes things better. I'd go so far as to say there is currently a plague of dogmas that ignore real life, and real life often twists those dogmas into countering the very thing they were meant to achieve.
That said, I think most or all of the statements in both the Joel Test and the updated version (especially with the explanations) are true. But you want a team that understands why and how they are true, rather than one that just sets these things up by rote; the latter simply doesn't work in real life.
Chanting about these things is not the same as embracing them, and embracing them means not letting them harden into dogmas used to justify stupidity.
Anyway, the world would be a lot better if more people truly embraced these ideas, and quality would probably go up. However, all of them can be, and have been, criticized precisely because they were turned into formal protocols that hinder quality, efficiency, and productivity. How they should be implemented depends heavily on circumstances, and a motivated team will adopt most of them on its own.
A great example is successful open-source projects with little or no funding.
I have to agree that the Joel Test feels dated in places. The problem is more that it was written with desktop software, statically-typed languages, and little to no automated testing in mind. Nothing wrong with that, but a lot of the industry has moved to the Web, using interpreted, dynamically-typed languages with robust automated test suites, and the process has changed accordingly.
I'd agree with the article that the one-step build and daily build questions miss the point for web development and interpreted languages. I'd replace them with: do you run a comprehensive CI suite on every commit to master (or your VCS's equivalent), and do you actually fix broken tests instead of commenting them out or ignoring the failures? You should also be able to deploy your code to any testing or production environment with a single command.
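The gate described above can be sketched as a tiny wrapper script. To be clear, `run_tests` and `deploy` here are hypothetical stand-ins for whatever your test suite and deploy tooling actually are; this is a sketch of the policy, not a real pipeline:

```shell
#!/bin/sh
# Hedged sketch of the CI-and-deploy policy: run the whole suite on
# every commit, refuse to ship if anything fails, deploy with one command.
set -eu

run_tests() {
    # Stand-in for the real suite, e.g. `pytest` or `go test ./...`.
    # Under `set -e`, a failing suite aborts the script: no commenting
    # tests out, no ignoring failures.
    echo "running full test suite"
}

deploy() {
    # One command, any environment: `./ship.sh staging`, `./ship.sh prod`.
    echo "deploying to $1"
}

run_tests
deploy "${1:-staging}"   # default to staging when no environment is given
```

The point of the single entry point is that "deploy to prod" and "deploy to staging" differ only by an argument, so there is no separate, rarely-exercised release procedure to rot.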
Fixing bugs before writing new code seems a little strange too. What's the severity of the bug? Any software with actual users probably has at least some bugs you don't intend to fix anytime soon, because they affect few people, have minor consequences, or have easy workarounds. And especially on the web, there are certainly bugs you drop everything to fix right now, even at 3 AM. I don't really know how to extract anything meaningful from this question.
I can agree that having an up-to-date schedule isn't really meaningful in an Agile world. What I'd replace it with is whether the time required for stories is estimated by the engineers who will actually be working on the code. Management creates stories and sets priorities, but they need to respect the engineers' estimates.
There’s been a lot of controversy about candidates writing code in interviews. I think I’d update it to ensure the code is related to what they’ll be doing on the job. If the job is writing basic corporate CRUD apps, don’t bother judging candidates on whether they can write working code for balancing binary trees on a whiteboard.
Having testers can be a bit dicey, depending on who your software is intended for. A lot of domain-specific software can only really be tested by the people who will be using it. On one project I was on, a program for setting up and running complex engineering calculations, we actually gave up on a testing environment and moved all of our testing to a limited production environment, because having engineers repeatedly run tests realistic enough to expose bugs in a test environment was too expensive. My current project does have dedicated testers, but all our team builds is web APIs, so the testers have to be about as qualified as full developers to test them. It's hard to come up with a good general question here, since so much depends on the type of software being developed.
After writing all that, I’m starting to think I ought to write my own blog post on this.
Fixing bugs before writing new code seems a little strange too.
I was of that opinion until a team I was on adopted this policy. We discovered that our unfixed small/unimportant bugs were actually a huge, hidden timesink.
The best explanation I can offer is that people were frequently:
Rediscovering bugs (only to realize they had already been reported);
Re-reporting bugs (only to have someone else merge them as duplicates);
Re-discussing the importance of a given bug.
None of these seemed like they took very long, but a 2-minute distraction at the wrong time can cost 20-30 minutes of productivity.
I would replace the “Do interviewees write code during the interview?” question with “Are interviewees never asked to code on a whiteboard?”