This article seems to assume that “process” is nothing but an abbreviation for “by-the-book Scrum.”
You can have an engineering process that doesn’t involve story points or constant prioritizing of a backlog. Does your team require code review before changes are merged? You have a process. Have a QA team that sanity-checks your changes before they’re released to customers? You have a process. Are you expected to discuss the design of a significant change with your teammates before diving into implementation? You have a process.
I think review/QA/design are subject to similar arguments. You can force your reviewers and testers to work from a predetermined rubric, or you can allow them to review and test in a freeform fashion. You can try to formalize the design review process or keep it informal.
Lightweight processes are good because they prompt people to ask “is this change a good idea” at many stages. Process for the sake of process is bad because it displaces “is this change a good idea?” with “did I follow the correct process with respect to this change?”
I think having some kind of checklist (or “process”) is not necessarily a bad idea, as it ensures that things actually get done. Humans are rather prone to forgetting things and the like. If you look at avionics, there are a huge number of checklists for everything, ranging from standard procedures to emergencies. This is good, because it’s just too easy to miss something, with potentially disastrous consequences; quite a few crashes could have been prevented if the pilots had followed the checklist.
Writing software is not avionics, but avionics is still interesting to look at, as it highlights the value of checklists/processes.
I think having some sort of review process is a good idea. My own rather anti-authoritarian nature hates processes, but I’m also not blind to the limited capabilities of humans, and skipping even a vague “is this a good idea?” kind of review makes it much easier to make “oops, didn’t think of that!” mistakes. There is some overhead, yes, but it also comes with some advantages.
I agree that following process purely for the sake of it is not a good idea, and that disallowing any and all deviation from it is bad, but for some things at least, there are real advantages.
A similar example is the issue templates that many projects have; for some issues, such a template just doesn’t make much sense (“Description”: “Copy/paste doesn’t work”; “What happened?”: “It didn’t work”; “What did you expect instead?”: “That it works”), and deviating from it should be okay in those cases. But it’s still a good idea to have the template/process in place, as it’s a good default that works well for many issues, and it prevents things like people just posting “X doesn’t work”, or forgetting to include the full error message or version (well, most of the time anyway; some people seem unteachable in this regard).
Treating process as law is a bad idea because you keep running into edge cases where it doesn’t 100% fit, but if you treat it as a default template that works well for most cases, then it usually works quite well.
I think the biggest problems with processes come when people (usually managers or lead devs) treat them as law because they don’t trust developers, but the issue there isn’t really the process so much as the lack of trust (which, with some devs, is not entirely undeserved); the rigid process is just a symptom of that lack of trust.
Process kills momentum. I know that “well begun is half done”, but with modern software tooling it is often better to iterate, throw away prototypes, etc. I have a colleague who is very process-driven. I’ve written two or three prototypes and found issues and discovered solutions while he’s still drawing diagrams and writing unit tests for his first attempt. By the time he’s done, the requirements have changed and his work is for naught.
Not saying that my way is better or his way is worse, but, in my experience, fast iteration is more valuable than stable process.
EDIT: Obviously, there’s a time and a place. If you’re designing avionics software, embedded medical device controllers, etc., the calculus is different.
Sounds like he needs to incorporate prototyping into his process.
There’s an important point buried in there somewhere, but it gets all muddled up, probably due to lack of understanding or experience. Don’t know.
First, there is no decision between process and no-process. Why? Because it’s impossible to live in a world without process. Say you come in each day, figure out in your head what you want to do, then do it. Guess what? That’s a process. Just because you don’t write anything down, follow a checklist, or do a bunch of other stuff doesn’t mean you have no process. That’s impossible.
So perhaps the real danger here is something more like “too much process”, or “too much formal process”. Starting from there might lead to better conclusions.
As somebody who’s written a lot about this, I think the saddest part about the whole process-pain discussion is this: we do it to ourselves. That is, while in your company it may look like outsiders or management have brought in this thing that’s killing you, in reality most of the time it’s just other technical people who created this stuff at some point, and other technical people who thought it looked good and decided to adopt it. There is no “other”.
The overall problem is that the mental model we all carry around for programming and optimizing our work, whatever it is, does not apply to solving computer problems creatively. Yet that’s our default setting. So you could wipe away all process tomorrow, come back in a year or three, and most of us would be back to the same exact situation. That’s a much longer story, but until that’s fixed we’re just going to continue going round and round chasing our tails, sadly.
I generally agree with you. The challenge is that sometimes process rules become obsolete and it is hard to dig up the original reasons.
As an example, I work in automotive so we use coding rules like MISRA C. There is rule 14.7 “A function shall have a single point of exit at the end of the function”. So you are only allowed to use a single return in the function body. The purpose is to defend against cases where you forget to clean up resources. For C++ we decided that this rule is more harmful than helpful. Using RAII ensures that resources get cleaned up and multiple return statements can improve readability. Not an easy discussion and it comes up repeatedly because developers know MISRA.
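To make that trade-off concrete, here’s a minimal sketch (invented for illustration, not from any real codebase) contrasting the two styles: manual cleanup funnelled through a single exit point, versus RAII with early returns:

```cpp
#include <cstdio>
#include <memory>

// MISRA-style single exit: one return, one cleanup site, but the whole
// body ends up nested inside the success checks.
bool process_single_exit(const char* path) {
    bool ok = false;
    FILE* f = std::fopen(path, "r");
    if (f != nullptr) {
        char buf[64];
        if (std::fgets(buf, sizeof buf, f) != nullptr) {
            ok = true;  // real work would happen here
        }
        std::fclose(f);  // the single cleanup point the rule protects
    }
    return ok;
}

// RAII with early returns: the deleter closes the file on every path
// where it was actually opened, so multiple returns cannot leak it.
bool process_raii(const char* path) {
    std::unique_ptr<FILE, decltype(&std::fclose)> f(std::fopen(path, "r"),
                                                    &std::fclose);
    if (!f) return false;  // early return: nothing to clean up
    char buf[64];
    if (std::fgets(buf, sizeof buf, f.get()) == nullptr)
        return false;      // early return: fclose still runs
    return true;           // real work would happen here
}
```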
Currently I’m in a discussion about whether implicit conversion between signed and unsigned integers is always bad and explicit casts are better. My opinion is that our checker should only complain about implicit conversions if information can be lost. I don’t see any advantage in an explicit cast from uint8 to int32. The downside is slightly worse readability and some risk that the explicit cast is wrong after future changes.
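A tiny example of the distinction I mean, with made-up values: widening uint8 to int32 can never lose information, narrowing the other way silently wraps, and an explicit cast can end up hiding exactly the case the checker exists to catch:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t small = 200;

    // Lossless: every uint8_t value fits in int32_t, so this implicit
    // conversion cannot lose information. An explicit cast adds only noise.
    std::int32_t widened = small;  // 200

    // Lossy: int32_t -> uint8_t silently wraps modulo 256. This is the
    // case a checker arguably should flag.
    std::int32_t big = 1000;
    std::uint8_t narrowed = static_cast<std::uint8_t>(big);  // 232

    // The maintenance risk: if `small` later becomes a wider type, a
    // leftover static_cast<std::int32_t>(small) would silence the checker
    // and truncate without warning, whereas the implicit conversion would
    // be flagged once information could actually be lost.
    std::cout << widened << " " << +narrowed << "\n";  // prints: 200 232
}
```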
And a third example: it is “state of the art” to shut down the device in case of the floating point exceptions division-by-zero, invalid operation, and overflow. The reasoning seems to trace back to ISO 26262, which mentions division-by-zero. I believe the intention back then was integer division, where this is undefined behavior. For floating point these operations are well defined and result in Inf or NaN. We recently “fixed” a function which worked perfectly fine but contained a division by zero. How do you argue against “state of the art”?
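For anyone who hasn’t run into this, the difference is easy to demonstrate (assuming IEEE 754 floating point, which is what makes the results well defined):

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Integer division by zero is undefined behavior in C and C++,
    // plausibly what the standard's authors actually had in mind:
    // int bad = 1 / 0;  // UB: shutting down to avoid this is defensible

    // IEEE 754 defines all three floating-point cases precisely; the
    // program keeps running with a well-defined, testable result:
    double div_by_zero = 1.0 / 0.0;    // +Inf
    double invalid     = 0.0 / 0.0;    // NaN
    double overflow    = 1e308 * 10.0; // +Inf (exceeds double's max ~1.8e308)

    std::cout << div_by_zero << " " << invalid << " " << overflow << "\n";
    std::cout << std::isinf(div_by_zero) << " "
              << std::isnan(invalid) << "\n";  // 1 1: detectable after the fact
}
```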
In my experience there are two reasons for creating formal process.
Reason 1: we want to speed up making the things we make
Reason 2: there is a high risk of screwing up your own code and the code of others, so as a risk-limiting measure, we’re going to make sure we avoid that possibility
Both of these are good reasons to create process but both also have a lot of downsides. It sounds like your example is more like reason 2. The problem for both of these reasons, as you point out, is that things change, sometimes quite rapidly. Conditions change at one speed. Process changes about 100x slower. There’s something about writing things down that starts locking people into thinking one way or another. It’s almost like “Ok, we ‘solved’ that problem by making process/paperwork, now we can forget about it and work on more important things”
We both know, however, that they haven’t solved anything. Yet humans can’t think of everything all the time, so from 50,000 feet it looks good. It’s all very natural for us. It’s also something we need to continually fight against.
I don’t think that OP is necessarily against processes. It’s more of a provocative default position, because the existing processes are not satisfying to him. Personally I could relate to a lot of the observations made in the article. A lot of those scrum/agile-branded processes that I have encountered are actively hurting my productivity and my brain.
Slavishly following a process is one thing (which I tend to agree is not a good thing), but there’s definitely something to be said for ensuring that the annoying tasks get done too. For example, if you never groom the backlog, it will start to grow out of control. It can be really depressing to see a huge backlog you know you’ll never get around to anyway.
There will be tasks on the backlog that are simply not important anymore, but also things you know you should do but never get around to. The latter especially can be important to put back on the roadmap once in a while, to ensure tech debt doesn’t grow out of control. The former can be dropped, to clean up the ticket tracker. This way, with a smaller backlog, people might be more motivated to pick tasks from it once in a while.
I think this also boils down to discipline; if you can be disciplined enough to keep the backlog in check it doesn’t need grooming. But in my experience, this doesn’t happen when you’re knee-deep in code.
This vision makes sense if you view software teams as a job scheduling system that can be optimized by choosing the right sequence of tasks. But in reality, as I argued before, whether or not that new task is more important than your ongoing project work is either blatantly obvious or completely unknowable. “The project tracker says doing the security hardening now will delay the project by three weeks, but our team’s judgement is that it’s only worth a delay of two weeks,” said nobody with a straight face.
Somehow I believe that “completely unknowable” is wrong. Using some simple statistics you should be able to estimate stuff to some certainty. Maybe doing this is too much effort in most cases. Maybe it isn’t if you know enough shortcuts and tricks.
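For instance (purely a made-up sketch, not a claim about any real project’s data), you could bootstrap a range for the remaining work from past task durations:

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical illustration: bootstrap an 80% interval for the total
// duration of 5 remaining tasks by resampling from past task durations
// (in days). All numbers here are invented.
int main() {
    std::vector<double> past = {1.0, 2.5, 2.0, 4.0, 1.5, 8.0, 3.0, 2.0};
    std::mt19937 rng(42);
    std::uniform_int_distribution<std::size_t> pick(0, past.size() - 1);

    const int trials = 10000, remaining_tasks = 5;
    std::vector<double> totals(trials);  // zero-initialized
    for (auto& total : totals)
        for (int t = 0; t < remaining_tasks; ++t)
            total += past[pick(rng)];

    std::sort(totals.begin(), totals.end());
    std::cout << "80% of simulated totals fall between "
              << totals[trials / 10] << " and "
              << totals[trials - trials / 10] << " days\n";
}
```

Whether numbers like these are trustworthy enough to overrule a team’s judgement is, of course, exactly the dispute.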