Furthermore: I try to give people the benefit of the doubt. It’s easy to succumb to the temptation to joke around with fellow devs on a team about a given bit of supposedly bad code (e.g. “not sure what this guy was smoking…” or “this code was quite the mess before we refactored it”), even joking at the expense of the original author(s). Instead, I try to assume that the original authors did more or less the best they could, given what they knew at the time, where their skill level was at that point in their careers, how familiar they were with the domain back then, or the constraints set from outside the team (deadlines, budget, hiring choices, temporary external contractors, insufficient person-hours, etc.).
Benefit of the doubt is a biiiig thing for me when it comes to code.
To be fair, I have seen my fair share of head-scratchers, tho, where it was hard to know what to assume about the code other than time pressure.
Some code is unequivocally bad. Maybe not most, but some is. Examples I’ve seen in real life:
#define PROJECT_NAME_TWO 2
Unreasonable constraints unfortunately happen, and someone somewhere was just trying to do their job. But it’s still bad code, and there really is no excuse for any of the above. We don’t make excuses for road workers when we hit a pothole.
I think the point of the article is not to deny that the code is bad, but to avoid saying that it’s bad and instead explain what’s wrong with it, and even ask the author for clarification about why they wrote it that way.
Never mind that clang is a C++ compiler that could already do type inference
To be fair, they couldn’t just switch Clang to C++ mode and use that to compile their C-plus-auto code. Even though it’s popular to say C++ is a superset of C, it isn’t, and most C code (even most C99 or C89 code) won’t compile in a C++ compiler.
So you can kind of see how it makes some kind of twisted sense to patch Clang if you want to write C but also want type inference…
All you wrote is true.
I’d have to go into detail about how bad this idea was though. It was one of the dumbest things I’ve ever seen.
This is bad advice. Code quality failures come in groups, because they are generally the result of misapplying habits that would make sense in another context, and identifying that there are problems with a piece of code doesn’t prevent you from later being more specific about those problems when there’s disagreement about them. On the other hand, when there is disagreement, a general complaint tends to get more problems fixed than a specific one would, because the disagreement itself means there were multiple different types of things wrong with the code.
As an example:
Sometimes, code gets expanded unnecessarily, and something that’s only one or two conceptual chunks gets strewn across many files or broken into many independent components — out of an expectation that it would be more complicated than it really was, or a slavish dedication to ideas about encapsulation, or the mere fact that the original developer was a lot less experienced than the maintainers and couldn’t mentally chunk the logic as coarsely. This is a common ‘enterprise’ smell; it happens when structure is defined in a contract written by managers, or when new graduates recently steeped in classes about Java end up doing all the implementation.
That code can get edited by somebody with a different set of habits or impulses that seems contradictory – somebody who, because their tooling doesn’t have autocomplete or they’re a slow typist or they have an 80x25 display or poor short term memory or they’re sleep deprived and on a deadline, prefers to write overly terse & hacky code that combines multiple operations into a single expression or uses complicated tricks to replace two or three lines of idiomatic code with one line of noise.
Code written by this person from scratch might actually be terse, but they won’t necessarily have the time or energy to refactor code written in an overly expansive style, so instead the code becomes rock soup: a thin broth of thirty layers of wrapper classes that do almost nothing, spread across twelve different directories named things like ‘AbstractManagerSupervisorFactoryFactoryManagerFactory’, while 98% of the instructions in the actual compiled object are accounted for by five lines in some static function called ‘q’ that combines Duff’s device with a cascade of ternary operators and fast inverse square root to convert locales or something, with an opaque array of undocumented constants.
(This example is fictional, but I have in fact encountered equally crazy examples of this kind of behavior in production code.)
This is ‘bad code’ in two different ways: the dense parts are too dense, and the thin parts are too thin. It needs to be rewritten from scratch, & can probably be turned into a screenful of code that both the original developer & the later maintainer would consider acceptable. If I said “this is too dense” then odds are that the least dense parts would get expanded while the dense parts were untouched (because the original developer couldn’t understand them); if I said it wasn’t dense enough, all the scaffolding would be demolished and there would be no means to determine the intended behavior of the single-letter function names. If I say “this is bad code”, then the inconsistent density is the obvious reason, and the obvious course of action is to make it of consistent density.
Sometimes, people even have conflicting ideals, just from what they are used to.
Or different priorities:
There’s also a case where code that was good is now “bad”, only because external requirements have changed.
If you had code that only needed to handle 2 widget types, it’d be entirely sensible to have an if/else or a bit of copy-paste. But if the requirements later change to handle 7 widget types, the previously adequate approach won’t meet them, and the code will need to be refactored to use some kind of WidgetFactory. But if you had started with an abstract factory for just 2 cases, that would be another case of “bad” code.
The only thing that stands out in my mind as commentary to add to this idea is one that was previously written about by @gregdoesit. Basically, don’t make your teammates feel stupid for not knowing the specific jargon for why a bit of code needs to be worked on. If anything, making sure your criticisms aren’t lost in jargon should help keep them concrete and focused on why a change helps. Perhaps it makes the code more testable, or more consistent, or otherwise helps move it along an axis that the team values.
I was actually thinking something very like this just yesterday. My conclusion is that, like dogs, no code is intrinsically bad. It might be ill treated, it might be poorly made, it may be incorrect or buggy or be hard to modify or just be messy because we didn’t have time or energy to make it better. But it’s always the way it is because a human decided to make it that way for a reason, and so the real problem is knowing what the reason is.
Even Brainfuck just wants to be good code.
The first fizzbuzz is not bad. It correctly gives you the fizz buzz sequence. It could use a comment noting that it returns a list and that you have to supply an upper bound. The code itself is very practical, though.
The later code has merged the first condition into %15, but supplies no proof that the logic is equivalent to the earlier version. That’s a bit shitty and redundant.
I threw the code into https://jsben.ch and tried a few setups. On 5 items the shorter code is twice as fast, but by 50 it’s only 75% faster. At 500 it starts being slower, and past that the first version’s lead just keeps growing. The later code messes something up. This surprised me a bit; I thought the first one would have been faster by all means.
Is the author seriously implying that not following a style guide makes code “bad” while he misses these kinds of things?
They list several failings in both code examples.
I’m sure you know, but it is true that any number divisible by both 3 and 5 must be divisible by 15.
A rough informal proof: every multiple of three can be expressed as n*15 + x, where n is an integer and x is in {0, 3, 6, 9, 12} (I don’t know how you’d prove this formally, but it is true). Divide that by 5 and there can only be an integer answer when x is 0.
I think this is true in general: if gcd(x, y) = 1, then n % (x*y) = 0 if and only if n % x = 0 and n % y = 0, for x, y > 0.
If anyone fancies showing me how to construct a proof of that (or shows it to be false), then I’d appreciate that.
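One standard way to construct it (for the non-trivial direction: if x and y each divide n and gcd(x, y) = 1, then xy divides n; the other direction is immediate) goes through Bézout’s identity:

```latex
\textbf{Claim.} If $\gcd(x, y) = 1$, $x \mid n$, and $y \mid n$, then $xy \mid n$.

\textbf{Proof.} By B\'ezout's identity there exist integers $a, b$ with $ax + by = 1$.
Multiplying by $n$ gives $n = axn + byn$.
Since $y \mid n$, write $n = ym$; then $axn = ax(ym) = (xy)(am)$, divisible by $xy$.
Since $x \mid n$, write $n = xk$; then $byn = by(xk) = (xy)(bk)$, divisible by $xy$.
Hence $xy$ divides the sum $n$. \qed
```

With x = 3, y = 5 (coprime), this gives exactly the %15 equivalence under discussion.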
The point is: the use of this fact is redundant, because there is no performance concern with fizzbuzz. And it’s shitty because the proof is not supplied or referenced.
Look at what’s being written down: (c % 15 == 0). You have to prove this is the same as (c % 3 == 0) && (c % 5 == 0).
You can go about this quite directly. If (c % 15 == 0), can you conclude that c % 3 and c % 5 are zero as well? These expressions cycle, hitting 0 at regular intervals. c % 3 hits 0 at c = 0, at the same time as c % 15, and hits 0 again at c = 15, so its cycle is in phase with that of c % 15. Same for c % 5. So we know the new condition implies the previous one.
Next you have to prove the converse: that (c % 15 == 0) is true whenever the expression it replaced is true. Since you know there’s a cycle going on, it’s enough to show there is no moment when both the c % 3 and c % 5 sequences hit zero before c % 15 does.
As a general rule you’d probably state something like (c % LCM(x,y) == 0) == ((c % x == 0) && (c % y == 0)), depending on whether that’s true for anything you can pass in for ‘x’, ‘y’, and ‘c’. But nothing like this was stated or proven. Software is still supposed to be correct after you optimize it, you know… even if you prematurely optimized.
I don’t think the proof is nearly that complicated. 5 and 3 are coprime, so if something is divisible by both then it’s divisible by their product.