I think you are way, way off. Cellular data is still quite expensive even though it’s cheaper than it was. WiFi is almost free and almost as widely available as cellular data unless you’re actually driving in a car or something.
WiFi is also still much faster than LTE, and in a lot of places more reliable.
If carriers actually built phones, sure they would have all the reason in the world to get rid of the WiFi radios. But they don’t. The phone manufacturers would be shooting themselves in the foot to drop the WiFi radios.
Ah, is it yet another article suggesting that I should work free overtime and sacrifice my health and other interests for the sake of becoming a more efficient, exploitable commodity for my employer? Why yes, yes it is!
I wish people would stop writing this nonsense.
That’s not at all what I suggested. You absolutely should not sacrifice your overtime or health for your job. If you read the article I don’t advocate anything of the sort.
My point was that a career in programming is not an easy career. If you don’t enjoy programming for programming’s sake I think it’s very hard to do it long term.
It seems like most commenters here had a very uncharitable interpretation of your blog. Which is somewhat understandable given the amount of bad experience folks have had with bosses exploiting “passion” on at least a few different levels. I do think the comments here are an overreaction though.
I will say that I’m personally not a fan of these types of posts. When reading it, I feel like I’m being should'ed to death, which I personally hate even if I identify with the author’s experience (and I do, to a degree, in this case).
I think this type of topic is way too personal for a writing style like this. My unsolicited advice is to frame this type of writing as an experience report rather than a persuasive essay. Unless, of course, you really do want to try to tell people what to do, in which case, good luck. ;-)
I will add one more perspective that turns this topic a bit on its head: instead of doing what you love, one can, indeed, love what they do. The latter may require some serious perspective-taking, though!
I am someone who is passionate about programming, but some folks, some of the time, just need a goddamn job. Hell, while I’m passionate about programming, I’ve worked a lot of gigs that I was not passionate about. I’m never going to be excited about developing an Angular front end, but I’ll do it ‘cause it needs done and people will pay me to do it.
I’d much rather be working on my multi-player choose-your-own-adventure-book engine (someplace between a MUD and a CYOA), but ain’t nobody gonna pay me to do that. Or I’d love to be refining my dancing-drone software (but the drone I own isn’t responsive enough, and I can’t justify buying a better one, because again, ain’t nobody payin' me for that).
I can relate to this. I spend a lot of time learning about new things in my field for fun, out of curiosity, and out of a desire to perfect my craft; I bring some of that to the office, but I also am aware that some of the stuff needs to stay home. It’s unreasonable to use all the new things you learned about over the weekend/evenings just because ZOMG SOO COOOOOOL.
For sure, it informs decisions I make on a day-to-day basis; the different paradigms I’m exposed to when I’m learning new concepts allow me to see some things I could have missed otherwise, and I like to think it makes me better at my day job as a result.
That being said, the 9-to-5 is PHP-only for now, and I can’t say I’m especially fond of that.
I’d argue you could get paid to work on a multi-player choose-your-own-adventure-book engine. Maybe not your engine specifically, but Failbetter Games has built what looks like a successful business on top of a multiplayer CYOA engine.
I’d say that with either project you could build sustainable businesses, even the dancing drones, but it sounds like you are interested in stable employment over entrepreneurship, which I completely understand.
Ironically, I do run my own business- I’m a consultant, trainer and contractor. The idea is that I spend a week or two a month doing work I don’t like so that I can spend the remaining time doing work I do like. But I can pull that off because I’ve got contacts who like my work enough to book me gigs and take a little off the top. If I had to really push and drive sales, I’d probably starve to death.
I don’t mean you need to be passionate about every project you work on. I just meant you need to have enthusiasm for programming as a field.
And I’m pointing out that a lot of passions don’t pay the bills. So what if someone’s passion is to raise a family on ten acres of woods in the Ozarks, and they decide to fund that lifestyle by programming for money?
I came here to comment that while passion isn’t always scalable, I don’t know another way to motivate myself but you answered that question/objection beautifully.
Some people are passionate about having children and being able to afford a good education and experiences for their kids. Programming seems like a great way to do that.
As for me, I am passionate about programming. But I’m also passionate about retiring early, traveling in my camper van, and getting 10 acres of woods with chickens and a goat. Those passions influence some of my career choices.
I’d say they’d probably be happier farming but they can do that.
I didn’t say “You can’t program without passion”, I said “Don’t”, as in I don’t think you should, because you’ll be happier pursuing something else.
because you’ll be happier pursuing something else.
This strikes me as incredibly arrogant, though I’m not sure if that’s how you meant it. How can you possibly know for sure what will make someone happiest?
People don’t just derive happiness from the tasks they perform, there’s a whole context that you have to consider. I’m passionate about programming, but I would be less happy if the office I work in wasn’t air conditioned, or if I didn’t get free snacks, or if a thousand other things were true or false.
Let’s say someone is passionate about interpretive dance. But no matter how passionate they are, they can’t handle being a starving artist, they want a nice place to live and good food to eat. In this case, their overall happiness might actually be maximized by working as a programmer during the day and dancing interpretively at night.
Now, in a world without scarcity, or even a world with a rock solid basic income, I would agree with you. In that case the context wouldn’t be as important because it would be much more similar across different activities (e.g. switching from programming to dance wouldn’t imply such a massive change in lifestyle). But that’s not the world we live in.
I recall seeing an article, probably from here, that made a point of saying that “passionate” is often a trait employers look for as a way to get programmers to invest their free time in the product too; does anyone have that handy? Otherwise: yeah, beware, people will abuse your passion. Besides that, cool article.
Avdi Grimm probably: http://www.virtuouscode.com/2014/01/31/the-moderately-enthusiastic-programmer/
He’s a huge enemy of the word “passionate” in job postings.
Exactly that, thanks.
I have seen those articles too. You can read “passion” as “would be doing this even if nobody paid you for it,” or “have a strong interest and level of enjoyment in this subject.” In my case, I’m not referring to the “passion for our mission so you will work 80 hours per week” that employers sometimes talk about.
Ahhh the Joys of Spinning the Truth…..
The government-approved software that powers such machines gives the house a fixed mathematical edge, so that casinos can be certain of how much they’ll earn over the long haul—say, 7.129 cents for every dollar played.
It’s OK if the casinos cheat… but people playing the machines are cheating if they win.
This why “Alternative Facts” can survive….there is so much “spin” on the truth these days, alternative facts seem almost straight by comparison.
It depends. If the odds are posted then it’s fair, as in, you know what you are getting in to. Same with state run lotteries. If every spin/ticket has the same chance of winning (and you know the odds) it’s fair.
The whole topic of whether it should be legal to exploit the weaknesses of those with a gambling addiction (real addiction, as in they mortgaged their house to play) same as alcoholics at the bar and drug addicts, is a different question we are not discussing here.
If any hacking is going on, it’s hacking of the human reward / risk psychology.
Although I’d argue, from speaking to various people attracted to Lotto and the like, that their understanding of odds and the implications thereof is minimal and heavily skewed by massive saturation advertising of the upside.
I totally agree; you can’t sell alcohol to minors because they’re not intellectually equipped to make that decision (as the theory goes). The same goes for gambling and many adults. You can prove that they don’t possess the necessary understanding of probability theory to decide whether they should gamble.
You can prove that they don’t possess the necessary understanding of probability theory to decide whether they should gamble.
Yes you can.
Step 1: explain the odds.
Step 2: ask if they want to play.
Step 3: if “yes”, they don’t understand.
Err, usually the payout rates of these machines are not only posted but advertised prominently. When customers know what the rate is, it is hardly cheating.
Actually let me hammer on a little.
That fine print statement of the odds, and a flashing neon ALL CAPS sign yelling, “WIN A MILLION!!” is exactly the cognitive gap that is thrown at Joe Average every blooming day, which is why we’re now in a world where “Alternative Facts” can flourish.
It’s a world where casino owners making a fortune off Joe Average through software that cheats are “not cheats”, while another guy using exactly the same software to make a smaller fortune “is a cheat”.
In my book both are cheats and deserve each other.
But the spin in the article makes me angry.
Oh I hear you.
But let’s be graphically clear that casinos don’t give a shit whether it was with electronic aid or bare human skill.
Their definition of a cheater is “any Schmoe we’re not stripping money off.”
Still, casinos object to the practice, and try to prevent it, banning players believed to be counters. In their pursuit to identify card counters, casinos sometimes misidentify and ban players suspected of counting cards even if they do not.
Casinos have spent a great amount of effort and money in trying to thwart card counters.
Conversely, estimating blackjack probabilities incorrectly is quite OK.
As I said, casinos and these guys heartily deserve each other.
That said, I’d bet you a beer that embedded processors with hardware random number generators are about to become more common and cheaper…
Maybe in your jurisdiction. Certainly not in any I have walked past.
Even then, I bet they are not displayed nearly as prominently as “WIN A MILLION!!! JACKPOT!!”
the fact that discussions like this happen at all is what led me to quit the 40-50 hr/wk job market. I think there is something fundamentally odd about the way we set a firm price (salary) for tech labour but the actual required duties (hours) are only vaguely defined and then used as an element of pseudo-competition within the micro market that is the office.
working independently and getting paid by the hour or feature feels so much more fundamentally logical, at least in the sense that the formula for converting time into money has fewer hidden variables. of course, that pales in importance compared to the flexibility to only work 30 or 20 hours a week and spend the balance of time supporting people and communities and projects in ways that aren’t purely financial.
In theory this argument makes sense, but I think in practice you find that in the freelance market you are expected to do a lot of stuff for free / outside of normal billing as well. I think it’s just the nature of work; a lot of the time people expect to get things from you for free. For salaried employees, that means working outside the normal 40 hours when it is “required” (and different employers have different ideas of what that means).
The key, regardless of whether you’re working hourly on contract or salaried or whatever, is to work with people whose expectations match up with your own.
I think putting in long hours is often wasteful and less productive. When new developers start on our team I will tell them they won’t get extra credit for working on the weekend. We want them resting up to be productive on Monday.
That said this blog post comes off as extremely defensive. Which naturally leads me to believe the accusation struck home with the OP. That usually only happens if there’s some truth to it.
Working long hours != productivity and working normal hours != laziness. But there are definitely people who skate by doing the bare minimum. I think usually those are the people that are being referred to as 5:01ers. It’s a bad term though.
Disagree. As you get older (and I’m only 33) you get sick of certain shit– in this case, namely, being marked down for irrelevant cosmetic nonsense.
There really are companies where decisions about whom to fire are made based on face time (as a proxy for dedication). I’ve worked in them. Of course, since people rarely know why decisions are actually made, the atmosphere is more one of suspicion and adversity– you never know that you were fired because you left “early” at 6:30– than certain knowledge. But once you’re 30+ and have an eye for the patterns and unlock the “Sick of This Shit” merit badge, I think you have a right to complain. Open-plan high-frequency politics are not an important CS skill and shouldn’t have the influence over job/career outcomes that they do.
The truth is that we’re an industry where the people in management roles (even if they were engineers at one point) have no ability to evaluate the work. That means that subjective impressions matter and it generates politics– and while your political knowledge goes up with age, your political stamina (read: ability to deal with bullshit) goes down.
But there are definitely people who skate by doing the bare minimum.
Harmless low performers are a lot less toxic than people who generate politics. This is why stack ranking always backfires. The loss incurred by paying a market salary (presuming 25-50th percentile performance) for 5th-percentile people is minor in comparison to the much more severe cost of a politically charged environment. Instead of subtractors, you’re now dealing with dividers.
I’d actually be hesitant to fire the harmless low performers. Why? Because people are mostly context. The people in the bottom 20% know that they’re in the bottom 20%, therefore not going to get promoted, and therefore they try to skate. If you get rid of them, you’re just going to have a new bottom 20%. In most circumstances, they will eventually take stock of their promotion potential and become minimum-effort players.
In fact, I think the harmless low performers play a valuable social role. It’s better to know who the marginally productive gamma pups (who usually do at least a couple things well) are than to have the productive betas worried about their standing.
Well I’m nearly 40 and allow me to tell you that as you get older you stop getting bent out of shape about stuff that doesn’t apply to you. If some other people think working long hours is required to be a great coder then that’s their problem.
There are indeed companies, big and small, who think working more than 40 hours a week is a good thing. That’s not going to be solved by angry blog posts.
If you are unfortunate enough to work at one of those places, you have two options: try to change the culture or find a new job. If you are a good coder who truly is an asset, you should be able to do one of those two things.
As far as people who skate by doing the minimum not being a problem; I strongly disagree. The fact that they don’t get promoted doesn’t make it ok. They still get paid and help set the norms for the company. They will weigh in on policies and influence culture with their laziness and disinterest.
The idea that they are “less harmful” than toxic people who like to cause drama is both irrelevant and a false dichotomy. You don’t have to tolerate either type of person in your organization, and I don’t at mine.
You’re not the author, though. Michael was saying the author might be one of the set of people whose brains are hardwired to get irritated over seeing the same bullshit. That might soften or worsen with age. There’s also my notion that the author took it as a personal attack, which itself can get a person to lash out in defensiveness. That’s what I saw in the few paragraphs I read.
Too ranty to waste more time on…
I disagree. People usually know who they are and don’t want to be in the same bucket. Those people aren’t looked up to for guidance. If they offer suggestions on policies, they’ll be ignored.
The idea that they are “less harmful” than toxic people who like to cause drama is both irrelevant and a false dichotomy.
I didn’t say “people who like to cause drama”. I’m talking about people who generate politics, not because they “like drama” but for personal benefit. And sure, there are people who fit into neither category.
I’d never say that it’s wrong to get rid of low performers. Sometimes it’s what you have to do, just to survive. However, the process of looking around to “catch” low performers is likely to put everyone on notice and make everything more opaque as everyone (even good performers) looks for protection and leverage.
For example, if you impose stack ranking, you’re more likely to generate politics than to gain anything by rooting out low performers. Sure, you’ll cut some salary, but at the expense of turning people against each other. Once you implement stack ranking, the idea that everyone’s on the same team goes out the window, the knives come out, communication breaks down all over the place and your whole company becomes inefficient.
This basically implies that nothing can ever be wrong… “Your opinions are bad, and if it upsets you that I said this, it means it must be true.” That all aside I think if you’re working longer hours in programming you really must be doing something tragically wrong.
That all aside I think if you’re working longer hours in programming you really must be doing something tragically wrong.
I personally average about 50-60 hours per week, but a lot of that time is reading papers and books. That’s sustainable. I certainly couldn’t code for 12 hours straight, though, and I would never work more than 40 if not working on my own terms and pursuing projects that I’m interested in.
If “work” is being in an office, doing things that a bunch of managers (who don’t care about your career) decide is important, it’s hard as hell to do that for 40 hours much less 60. On the other hand, if you have a good job and if the scope of “work” is broadened to include learning/following the field, I think that 50-65 becomes sustainably feasible. Of course, you still need to take vacations, have enough time for exercise and family, and have other out-of-work pursuits to keep sane.
That’s a real stretch dude. I said that usually when someone gets bent out of shape about an opinion someone else holds that wasn’t explicitly addressed to them, it’s usually because it struck a nerve. That’s a far, far cry from your silly interpretation.
You also said this:
That usually only happens if there’s some truth to it.
The author is actually pretty honest about the fact that he used to judge new parents at work. Maybe it strikes a nerve here because it’s hard to let go of those old attitudes even though he now has a different perspective after having his own children. He says he didn’t even feel comfortable using all of his paternity leave.
So regarding this comment:
That’s not going to be solved by angry blog posts.
Of course it’s not going to be solved by angry blog posts, but the author seems self-aware enough to recognize his own ambivalence on the issue after his major life changes. Perhaps he’s sharing his experience in hope that some readers would be able to relieve a little bit of their own guilt about having time obligations outside of work. And when others are feeling less guilty, maybe they’ll be less likely to throw around epithets like ‘5:01er’.
It appears that you’ve proven me correct.
You are being unreasonable here. You draw conclusions from fairly simplified logic (if someone defends against accusations, there is truth to the accusation). That might be true, but we all know it just as well might not be.
That’s a terrible argument, and a huge, unjustified jump to a conclusion.
The article is defensive, but entirely justified IMO. It can be infuriating to be treated like a slacker when you’re working as hard as or harder than anybody else. Even more so when it’s such a useless and stupid criterion as when a person leaves the office.
you can’t be serious about that line
This article is really bad.
Goto should not be used. It is a worse way of doing control flow than the other options available to you (if/else, switch, looping). It has a lot of issues, not least of which is that it doesn’t bring any code structure with it. When you use other flow-control methods, the code involved gets formatted to make the structure clear. There are many other reasons not to use goto, but that’s one people often overlook.
Similarly if you are using eval there is almost certainly a better (safer, more reliable, less prone to hidden behaviors) way to do whatever you are trying to do.
Multiple inheritance is mostly a problem because it goes against one of the core principles of OOP: encapsulation. In order to do multiple inheritance right, you have to know both of the classes you are inheriting from in detail to understand what you’re going to end up with. There’s more beyond that, but I think it mostly should be avoided as well.
Recursion does not belong with the rest of these and its inclusion shows the author’s lack of knowledge. People do sometimes misuse it but only as much as any other standard feature of programming like a standard for loop.
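The multiple-inheritance point above is easy to demonstrate with a minimal sketch (Python, since it permits MI; the class names are invented): to predict what the child class does, you have to know the internals of both parents plus the language’s resolution order.

```python
# Invented classes showing why multiple inheritance fights encapsulation:
# to predict what Report.render() returns, you must know the internals of
# BOTH parents plus Python's method resolution order (MRO).
class JsonSerializer:
    def render(self):
        return "json"

class XmlSerializer:
    def render(self):
        return "xml"

class Report(JsonSerializer, XmlSerializer):
    pass  # inherits render(), but from which parent?

r = Report()
print(r.render())  # "json": the leftmost base wins under the C3 MRO

# The full resolution order, which you must understand before reasoning
# about any diamond-shaped hierarchy:
print([cls.__name__ for cls in Report.__mro__])
# ['Report', 'JsonSerializer', 'XmlSerializer', 'object']
```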
This is as good an idea as a root-folder-based file system: don’t use other folders, just put everything in root.
Branches are really useful. If you are having an issue with them it’s because you’re using the tool wrong, not because it’s a bad tool. Throwing the tool out is not a good solution. If you cut yourself with a knife you don’t switch to only using spoons in the kitchen.
Trunk-based development doesn’t preclude using branches. That is specifically called out early on.
Depending on the team size, and the rate of commits, very short-lived feature/task branches are used for code-review and build checking (CI) to happen before commits land in the trunk for other developers to depend on.
It precludes real utilization of branches. The phrase “very short-lived” is key here. If you only used “very short-lived” subfolders, a root-based filesystem would still be just as stupid.
To me this “trunk based development” idea is a terrible solution to a real problem (poor use of branches), because the real solution (being intelligent about how you use branches) is more complicated than the bad solution (avoiding branches).
This article seems to be written by someone too young to remember what the reality of PCMCIA was. The idea of PCMCIA sounds great, but the reality was miserable: drivers were always an issue, ports would often break, and the cards would often break. At least that was my experience.
As for why it died, that’s simple: manufacturers started including everything you needed in the laptops. I’ve never once pined for a PCMCIA slot in the last 10 years.
Back in the early 2000s, the only thing I usually used one for was a wireless card in old Thinkpads. Once wireless networking became a standard feature, it really had only niche uses anymore, and none of them strong enough to be worth giving up all that space, which could be devoted to a bigger battery or something else useful to more users.
Oh and also the whole title of the article is baloney. PCMCIA slots were a standard feature on laptops for quite a number of years. It did take off. Then it died. Rightfully. Just like so many other outdated technologies.
Yeah, great analysis - it’s only obvious if you were there, so it’s useful to say it.
One thing I’d add is the observation that the only reason expandable hardware is ever a thing is when either it’s something not everyone has a use for, or it’s expensive enough that people want to do without it to save money. For network hardware in the 90s, both were true. And it wasn’t as simple as having wifi and that being everything you needed - depending on where you habitually used your computer, you’d need a dialup modem, a wired Ethernet card, or later a wifi card. So there was real market pressure to leave it out of the base machine.
To some extent, also, USB took over the role of PCMCIA. And we should be happy that it did, because it’s dramatically more secure, although that concern was barely on anybody’s radar at the time.
USB becoming used for general-purpose expansion couldn’t really have happened before it did; older serial ports weren’t fast enough. Also, older serial ports were a horrible experience mechanically, electrically, and with regard to software, but that applies to everything from that era, as you noted. :)
People Can’t Memorize Computer Industry Acronyms. The best of all expansion card formats, except for all the others. I can’t recall how many PCMCIA cards I broke the fiddly little pop-out or hard-attached external port off of, or how many proprietary dongles I lost. I do remember that the hotplug process was horribly broken on Linux for the complete duration of the technology. Good riddance.
First, the two female characters shown were actresses around 30. There are some good coders by 30. Not a huge number, and anyone who’s a good coder at 29 is likely to be even better at 35 and 45 and so on, but it does exist. You can pretty quickly (2-3 years?) get up to the ~95th percentile by landing in the right community (i.e. your first job being at a company doing FP or machine learning instead of an enterprise Java shop). Getting to the 99th percentile takes a lot more work and many more years and exposure to a lot of different stuff. I haven’t met a programmer under 40 that I’d put in that category.
Anyway, I agree that this stereotype is harmful and stupid. I wonder if this applies to other industries. For example, comedy writers on TV are usually young and hungry, even though writers peak quite late. Similarly, TV shows and rom-coms and the like portray “work” as this place where people in their 20s get paid to have ideas. Pop culture has to make Corporate (which includes “startup life”) look fresher and sexier than it is because two-and-a-half thousand hours of despair annually just doesn’t make for good movies. Hence we have a world full of open-plan tech companies that literally produce nothing of value except for a “scene”, i.e. an office full of young people that can be acqui-hired at $3M/head if someone at Google or Facebook owes the VC a favor. No one knows what these companies do, but it has nothing to do with technical excellence. It just looks like technical excellence to MBAs at Google and to VCs, and that’s close enough.
Can an abrasive 30-year-old like the fictional Carla (Silicon Valley) be a great coder? Absolutely. As I said, getting to the 95th percentile is quick, just because most programmers work in enterprise Java shops where little is learned and because most employers no longer invest in their people, so you can very quickly get ahead of the most of the pack; but getting into the true elite (99th percentile, then the Carmack level) takes a hell of a lot longer.
As for the teenage coding genius, well… that’s patently ridiculous.
Yeah, the female characters in this article are far from teenagers. Plus, Halt and Catch Fire takes place in the 80s, and Cameron’s overconfidence and recklessness as a young coder are a recurring theme in the show.
The premise of this article is legit but the examples given are terrible.
What a stupid click bait article. I’d hope we are above falling for this crap.
This is a good thing, but it relies on the greater evil: WordPress requires write permission to your document-root folder. That’s one of the WordPress Original Sins that creates so many of its other issues. Web applications with auto-update are inherently insecure because they have to be able to write to their entire folder structure.
Verifying the integrity of the update archive is nice but that’s only one vector by which WordPress could be compelled to write to itself.
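For illustration, here is what integrity verification of an update archive looks like in the abstract (a generic sketch with invented names, not WordPress’s actual implementation): hash the downloaded archive and compare against a value published over a separate, trusted channel before unpacking anything.

```python
# Generic sketch of update-archive integrity checking. This is an
# illustration, NOT WordPress's actual implementation. The idea: before
# unpacking, hash the downloaded archive and compare against a value
# published over a separate, trusted channel.
import hashlib
import hmac

def archive_is_intact(archive_bytes: bytes, published_sha256: str) -> bool:
    digest = hashlib.sha256(archive_bytes).hexdigest()
    # compare_digest avoids leaking, via timing, how much of the hash matched
    return hmac.compare_digest(digest, published_sha256)

fake_archive = b"pretend this is update.zip"
published = hashlib.sha256(fake_archive).hexdigest()

print(archive_is_intact(fake_archive, published))                # True
print(archive_is_intact(fake_archive + b" tampered", published))  # False
```

As the comment says, though, this only defends the download vector; it does nothing about the application being able to rewrite its own code on disk.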
Submitted because just about every opinion in it is wrong, but Martin is still influential so we’re going to see this parroted.
Sadly, yes. Most bizarre is that he seems to be directly contradicting some positions he’s held re: professionalism and “real engineering”.
A sampler to save people having to read through the thing:
If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
At some point, we agreed to stop using lead paint in our houses. Lead paint is perfectly harmless–defect-free, we might even say–until some silly person decides it’s paintchip-and-salsa o'clock or sands without a respirator, but even so we all figured that maaaaybe we could just remove that entire category of problem.
My professional and hobbyist experience has taught me that if a project requires a defect-free human being, it will probably be neither on time nor under budget. Engineering is about the art of the possible, and part of that is learning how to make allowances for sub-par engineers. Uncle Bob’s complaint, in that light, seems to suggest he doesn’t admit to the realities of real-world engineering.
You test that your system does not emit unexpected nulls. You test that your system handles nulls at its inputs. You test that every exception you can throw is caught somewhere.
Bwahahahaha. The great lie of system testing, that you can test the unexpected. A suitably cynical outlook I’ve seen posited is that a well-tested system will still exhibit failures, but that they’ll be really fucking weird because they weren’t tested for (in turn, because they weren’t part of the mental model for the problem domain).
We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.
Well, yes, that sort of up-front design is the difference between engineers and contractors.
More seriously, while I don’t agree with the folks (and oh Lord there are folks!) who claim that elaborate type systems will save us from having to write tests, I do think that we can make tremendous advances in certain areas with these really constrained tools.
It’s not as bad as having to up-front design “the whole system”. We can make meaningful strides at the layer of abstractions and system boundaries that we normally do, we can quickly stub in and rough in those things as we’ve always done, and still have something to show for it.
I’ve discussed and disagreed at length with at least @swifthand about this, the degree to which up-front design is required for “Engineering” and the degree to which that is even desirable today–but something we both agree on is that these type systems do have a lot to offer in making life easier when used with some testing. That’s probably a blog post for another day though.
And so you will declare all your classes and all your functions open. You will never use exceptions. And you will get used to using lots and lots of ! characters to override the null checks and allow NPEs to rampage through your systems.
And furthermore, frogs will fill the streets, locusts will devour the crops, the water will turn to blood and sulfur will rain from the sky! You know, the average day of a modern JS developer.
More likely, you’ll start at the bottom, same as we’ve always done, and build little corners of your codebase that are as safe as possible, and only compromise in the middle and top levels of abstraction. A lot of people will write shitty unsafe code, but it’s gonna be a lot easier to check it automatically and say “Hey, @pushcx was drunk last night and made everything unsafe…maybe we shouldn’t merge this yet” than it is to read a bunch of tests and say “yep, sure, :shipit:”.
In general, this kinda feels like Uncle Bob is starting to be the Bill O'Reilly of software development–and that makes me sad. :(
For some languages that step is called compiling.
I’m generally not a fan of the “but types are tests” argument, but you rightly call that out.
“Nullness” is something that can be modeled so the compiler can easily analyse it, so I don’t understand why he calls that out (especially since non-null is the prevalent default case, and most errors where a value isn’t passed are accidents).
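To make that concrete, here’s a small Rust sketch (invented for illustration, not from any comment in this thread) of nullness modeled in the type system: absence is an explicit `Option`, and the compiler forces every caller to deal with it before using the value.

```rust
// A hypothetical lookup (names invented for illustration): absence is
// encoded in the type as Option, not as a null that can slip through.
fn find_user(id: u32) -> Option<String> {
    if id == 1 { Some("alice".to_string()) } else { None }
}

fn main() {
    // The compiler refuses to let us use the result without handling None;
    // forgetting the None arm is a compile error, not a runtime NPE.
    match find_user(2) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
}
```

The “accidentally passed nothing” class of bug becomes a millisecond-scale compile check rather than something you have to remember to test for.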
I wish I could upvote this comment a thousand times. Concise, funny, but also brutally true. You nailed it.
… plus a thorough type system lets the compiler make a whole bunch of optimizations which it might not otherwise be able to do.
Thank you for the thorough debunking I didn’t have the heart for.
Clean Code was a great, rightly influential book. But the farther we get from early 90s tools and understandings of programming, the less right Martin gets.
This post makes total sense if your understanding of types and interfaces is C++ and your understanding of safety is Java’s checked exceptions and both are circa 1995. I used them, they were terrible! But also great because they recognized a field of potential problems and attempted to solve them. Even if they weren’t the right solution (what first system is?), it takes years of experience with different experiments to find good solutions to entire classes of programming bugs.
This article attacks decent modern systems with criticisms that either applied to the problem 20 years ago or fundamentally misunderstand the problem. His entire case against middle layers of his system needing to explicitly list exceptions they allow to wander up the call chain is also a case in favor of global variables:
Defects are the fault of programmers. It is programmers who create defects – not languages.
Why prevent goto, pointer math, null? Why provide garbage collection, process separation, and data structures?
I guess that’s why this article’s getting such a strong negative reaction. The argument boils down to Martin not understanding the benefits of features that are now really obvious to the majority of coders, then writing really high-flying moral language opposed to that understanding.
It’s like if I opened a news site today to read an editorial about how not using restrictive seatbelts in cars is the only sane way to drive, and drivers who buckle their kids into car seats are monsters for deliberately crashing their cars. It’s so wrong I can barely figure out where the author’s misunderstanding started, let alone hope to explain it, and the braying moral condemnation demolishes my desire to engage. Martin’s really wrong, but he’s not working towards shared understanding, so he’s only going to get responses from people who think that makes for a worthwhile conversation.
Java’s checked exceptions and both are circa 1995. I used them, they were terrible!
Interestingly for me, I came from a scripting language background and hated java checked exceptions with a passion. Because they felt tedious. It seemed lame that a large part of my programming involved IDE generated lists of exceptions. As I got more experienced and started writing software that I really want to not crash, I started spending a lot of mental effort tracking down what exceptions could be thrown in python and making sure I caught them all. Relying on/hoping documentation was accurate. I began to yearn for checked exceptions.
Ironically, it seems like in Java land they’ve mostly gone the route of magic frameworks and unchecked exceptions. So things like person.getName() can be used easily without worrying about whether the underlying runtime-generated bytecode is doing a straight-up property access or lazily initializing the attribute.
It seems like one of the simplest ways to retain your sanity is to uncouple I/O from your values and operate on simple Collections of POJOS. This gets into the arena of FP and monads, which use language level features to force this decoupling.
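For a concrete sketch of that middle ground (a hypothetical example, not code from the thread), Rust’s `Result` plays the role of a checked exception: the failure mode is part of the function’s signature, so callers can’t forget it exists, but without the IDE-generated throws-list ceremony.

```rust
use std::num::ParseIntError;

// A Result in the signature works like a checked exception: the possible
// failure is declared, and the compiler won't let callers silently ignore it.
// (parse_port is an invented example function.)
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
}
```

No hunting through documentation, hoping it accurately lists what can be thrown: the error type is right there in the signature.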
I also prefer the checked exception approach. Spent a lot of time with exceptions being thrown uncaught, got tired of it.
I would say that Go has shown there is a middle ground somewhere between 100% type-proven safety and unsafe yet efficient paradigms.
I’m pretty fond of Rust, or Haskell, but also enjoy less strict tools like JS or Ruby. Of course, I would rather my auto-cruiser were written in Rust than Node, but one tool’s success does not mean the others are trash: I may be mistaken, but if Martin’s point is “type-safety sucks”, it seems you are just saying “non-type-safety sucks more”. I’m not convinced by either argument.
My point was that the people had deliberate reasons for the features they included or removed. I’m repeatedly asking “why” because Martin’s article dismisses the creators' reasons with an argument about personal responsibility and by characterizing them as punishments. The arguments Martin makes against these particular features also apply broadly to features he takes for granted.
I was writing entirely on the meta level of flaws in the article, not trying to argue for a personal favorite blend of safety/power features.
Yes. This. Exactly. Evolutionary language features AND engineering discipline. No need for either or, that’s just curmudgeonly.
then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
Another argument to be made is productivity, since he brought up a job. The most productive programmers create maximum output, in terms of correct software, with minimum labor. That labor includes both time and mental effort. The things good type systems catch are tedious things that take a lot of time to code or test for by hand. Those checks get scattered all throughout the codebase, which adds effort when changing things in maintenance mode. Strong typing of data structures and interfaces will save time over manually managing all of that.
That means he’s essentially saying that developers using tools that boost their productivity should quit so less productive developers can take over. Doesn’t make a lot of business sense.
I wrote this a couple weeks ago, but I figure it’s worth repeating in this thread. I wrote a prototype in Rust to determine if using conjunctive-normal form to evaluate boolean expressions could be faster than naive evaluation. I created an Expr data type that represents regular, user-entered expressions and CNFExpr which forces conjunctive-normal form at the type system level. In this way, when I finished writing my .to_cnf() method, I knew that the result was in the desired form, because otherwise the type system would have whined. Great! However, it did not guarantee that the resulting CNFExpr was semantically equivalent to the original expression, so I had to write tests to give myself more confidence that my conversion was correct.
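For the curious, the two types might be sketched roughly like this (my reconstruction of the idea, not the commenter’s actual prototype code): the types alone guarantee the CNF shape, while semantic equivalence to the original expression is exactly what still needs tests.

```rust
// Arbitrary boolean expressions: any nesting is allowed.
#[allow(dead_code)]
enum Expr {
    Var(u32),
    Not(Box<Expr>),
    And(Box<Expr>, Box<Expr>),
    Or(Box<Expr>, Box<Expr>),
}

// CNF by construction: a conjunction of clauses,
// each clause a disjunction of literals.
#[allow(dead_code)]
struct Literal { var: u32, negated: bool }
struct Clause(Vec<Literal>);   // disjunction of literals
struct CnfExpr(Vec<Clause>);   // conjunction of clauses

fn main() {
    // (x1 OR NOT x2) AND (x3): well-formed CNF is guaranteed by the types,
    // but nothing here proves it means the same thing as some original Expr.
    let cnf = CnfExpr(vec![
        Clause(vec![Literal { var: 1, negated: false },
                    Literal { var: 2, negated: true }]),
        Clause(vec![Literal { var: 3, negated: false }]),
    ]);
    println!("{} clauses", cnf.0.len());
}
```

A hypothetical `to_cnf(&Expr) -> CnfExpr` can’t return a malformed result, which is exactly the guarantee the type system buys; the tests then cover what it can’t.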
Testing and typing are not antagonists, they’re just different tools for making better software, and it’s extremely unnerving that someone like Uncle Bob, who has the ear of thousands of programmers, would dismiss a tool as powerful as type systems and suggest that people who think they are useful find a different line of work.
Thanks for the summary. Seems The Clean Coder has employed some dirty tricks to block Safari’s Reader mode, making this nigh on unreadable on my phone.
As a modern JS developer, I’ve started using Flow and TypeScript and have found that the streets have far fewer frogs now :)
More like the Ann Coulter of programming, in so much that it is increasingly clear that they spout skin-deep, ridiculous lines of reasoning to trigger people so that they get more publicity!
Remember, when one retorts the troll has already won. Don’t feed the troll!
A passing thought
defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
This brings to mind one of Henry Baker’s taunting remarks about our computing environments:
computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?
Has it not occurred to him that these languages come from programmers themselves? Of course it has. So sure, defects are always the responsibility of people. Some are the fault of the application programmer; some are the fault of the people responsible for the language design. (And when one is knee-deep in writing some CRUD app whose technological choices are already set in stone, determining whose fault it is is of little use.)
The entire point of software is to do stuff that people used to do by hand. Why on earth should we spend boatloads of hours writing tests to prove things that can be proved in milliseconds by the type system? That’s what type systems are for. If we were clever enough to write all the right tests all the time, we’d be clever enough to just not introduce NPEs in the first place.
I had the same reaction reading this. He’s off his rocker. The whole point of Swift being so strongly typed is that we’ve learned if the language does not enforce it, then it’s not a matter of if those bugs will happen but how often we will waste time dealing with them.
The worst part to me is that right off the bat he recognizes these languages aren’t purely functional; implying that there is a big difference between a language that enforces functional programming and one that doesn’t. Of course there is, and the same thing goes for typing.
He has just posted a follow up… http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTests.html
Alas, he says this…
Types do not specify behavior. Types are constraints placed, by the programmer, upon the textual elements of the program. Those constraints reduce the number of ways that different parts of the program text can refer to each other.
No Bob, Types are a name you give for a bundle of behaviour. It’s up to you to ensure that the type you name has the behaviour you think it has.
But whenever you refer to a type, the compiler ensures you will get something with that bundle of behaviour.
What behaviour exactly? That’s the business of you to decide and tests to verify or illustrate.
Whenever you loosen the type system…. you allow something that has almost, but not quite, the requested behaviour.
In every case I have investigated “Too Strict Type System”, I have come away with the feeling the true problem is “Insufficiently Expressive Type System” or “Type System With Subtle Inconsistencies” or worse, “My design has connascent coupling, but for commercial reasons I’m going to lie to the compiler about it rather than explicitly make the modules dependent.”
In which community is he influential, if I may ask? I’ve only learned of him through Lobsters.
I know him as a standard name in the Agile and Ruby communities, I think he’s well-known in Java but am not close enough to it to judge.
My college advisor loved talking about him and referencing him, but I think he’s mostly lost his influence with programmers today. At least, most people I know generally disagree with everything he’s written in the past decade.
Such blatant refusal (“everything is wrong”) seasoned with mockery (“parroted”) is exactly what has been stopping me from writing posts on this very topic.
Declaring that the responsibility for your inaction belongs to strangers leaps over impolite into outright manipulation. I pass.
That post doesn’t really make any sense to me. It’s time to end his blog because he doesn’t think of himself as a programmer? But it seems like he never has?
Good on him for shutting it down, but this post seems to be intended to shed light on why he’s doing it and I read the post and still have no real idea.
I can’t speak for James, but as he says, he expected this blog to have a limited run and now he feels finished with it.
Having followed him for a while, I would say he chafes at some aspects of programmer culture, especially the inward-facing nature of it. I think he feels that creativity of the type required to develop enjoyable games is sort of exclusive with the type required to be a programmer’s programmer—he wants to focus on the result rather than the method.
I have found his perspective very refreshing, but I guess he feels he’s said all he needs to say.
I got that part, but he doesn’t really answer “Why now?”, which seemed to be the whole purpose of this blog post. That’s what was weird to me.
There’s gotta be good money in separating older people from their retirement savings for retraining.
He mentioned Free Code Camp, which is online and free of charge. Hell, I could see myself going through the front-end modules just because front-end has always been a weak spot in my background. (I’ve always gravitated toward the low-level, mathematical “hard” stuff, but there’s value in being able to design an attractive website or app.) So I’m not sure that this snark applies, even though I’m generally very negative on the boot camp phenomenon.
I don’t know about retraining but all evidence points to easy money in selling doomsday prepper stuff to us. Like buckets of 20 year shelf life oatmeal and gold coins.
It’s ironic that older people are more into this than younger people, when their chances of seeing the times they are preparing for are lower than those of said younger folk.
Older folks grew up in an age where those preparations made more sense and had a more obvious payoff.
It seems like he’s approaching this with the right mindset, so hopefully it works out for him.
People change careers late in life all the time, we just don’t glorify those people as much as the 20 year old who drops out of law school to start a web site that connects fans of Railroad Tycoon with actual railroad engineers or whatever.
We worship precociousness as a society because it excuses our mediocrity. If someone achieves something that would be mediocre by an adult standard, but while young, we put them on “30 Under 30” lists and they get venture funding. This also helps perpetuate the socioeconomic status quo, because ageism puts such a high weight on parental lift. Thus, we get people like Holmes and Spiegel and Duplan running the world.
The reason we value precociousness instead of upper-tier excellence is because people of average skill can evaluate precocious people whereas it takes decades to separate excellence from the chaff. College admissions officers can make a reasonable guess when it comes to the most precocious 17-year-olds, whereas the people tasked with spotting adult excellence are usually not up to the task (because the excellent people are out there doing stuff, instead of evaluating others' work, leaving that job to someone else) and it often shows, not only in the corporate world but also in the arts and politics.
Difficult but probably not impossible. Right now, though, we live in a world where everyone has to be out for him- or herself in an environment of high-frequency office politics and flash decisions. The metacognitive work isn’t going to be valued in that kind of environment.
I disagree that he has the right mindset. I find it a bit sad that, at 56, the author doesn’t “like activities that don’t pay” and that he “can’t keep doing something just for the fun of it.” While most folks in capitalistic societies grow up desiring money and all it represents, they eventually realize these material things only bring fleeting happiness. Lasting happiness comes from enjoying things that you are spending your time on, and finding an activity that allows one to enjoy the moment is a gift. Most young children and older folks know this. He has found that gift, but will ruin it if he doesn’t make it pay, in money, apparently.
If you can get monetarily rewarded for doing something you love, great. But insisting that every waking moment must bring a monetary reward is very limiting and likely a proposition that will only bring disappointment in the long run.
Disclaimer - I went to law school in my 40’s as a second career, and a couple of folks in my class were in their 60’s; however, they were there primarily because they always wanted to go to law school, not because they thought they would be monetarily rewarded for doing so. Going to law school was the life-long dream, not the “making piles of money” afterward.
I find it a bit sad that, at 56, the author doesn’t “like activities that don’t pay” and that he “can’t keep doing something just for the fun of it.”
I think that’s part of his personality rather than his mindset, so while I disagree with his views there I am not going to make judgment calls on his own personal motivations for doing things.
Thanks - I wasn’t aware of a measurable difference between mindset and personality. I’d like to hear your perspective on the difference(s).
Here’s a start: http://www.progressfocused.com/2015/06/mindset-and-personality.html
Maybe I can illustrate the difference like this:
Mindset: “I am going to this conference to learn about Node and meet people in the community.”
Personality: “I love conferences, Node and meeting new people.”
The first example is how I’m thinking about going to the conference, the second is how I feel about the subjects. It’s about how you choose to approach something vs. what/how you think/feel about something.
Git is horribly broken
I hear this sentiment a lot, but I’ve never heard a detailed argument as to why or how it’s broken from people who have invested time into learning it. I use it daily even when working on solo projects. While it had an initial learning curve and caused a few headaches while diving in, now that I grok it pretty well I can’t think of a better way to implement most of its functionality. Are there any git detractors that have used git extensively here who’d like to weigh in with git’s downfalls or better ways to implement a distributed version control system?
On a side note, if one’s frustration is with git in the command line, magit is phenomenal and is a good enough reason in itself to install emacs.
Are there any git detractors that have used git extensively here who’d like to weigh in with git’s downfalls or better ways to implement a distributed version control system?
I use Git every day but I think the model could be better. I think the work in patch theory is useful:
Pijul lets you describe your edits after you’ve made them, instead of beforehand.
I think if you think people who struggle with git won’t struggle with emacs, you are about to be disappointed. Git has some rough edges in its UX, and this is only made worse by the fact that the problem it solves isn’t trivial to understand. That being said, I think the author is wrong: people will use git despite its usability shortfalls. Requiring “always connected to the internet” is a fundamental misunderstanding of why git is presently popular, and since the author is investing in this direction they are poised to lose a lot of money. I genuinely can’t imagine a non-open-source VCS toppling git, either; it’s just silly.
I hear this sentiment a lot, but I’ve never heard a detailed argument as to why or how it’s broken from people who have invested time into learning it.
There have been papers about that. They were discussed previously, along with the VCS built to address the shortcomings:
Not sure how Gitless fits here. Gitless is based on Git; at the core of Gitless is Git. If Git itself is broken, how can Gitless fix it?
Gitless’s authors’ main complaint is that Git’s interface is conceptually complex; Gitless reduces the API and combines some of the underlying concepts in Git. It seems like a reasonable goal, although I haven’t actually used Gitless myself, so I can’t really give a first-hand account of whether it’s successful. In any case, it seems reasonable to build a system that is functional by some criteria on top of one that is broken by those same criteria.
Thank you! Exactly what I was looking for, checking these links out now.
The only issues I’ve heard relate to the command line being inconsistent.
While I think git is great at solving a complex problem, and setting aside the command line being inconsistent, the man pages have been awful for someone not knee-deep in the nuances of how it works. For example, the man page for rebase used to be laughably obtuse at best. It has gotten much better, though, as now it reads: git-rebase - Reapply commits on top of another base tip, which is about as succinct and clear as you can be for a complex feature.
git-rebase - Reapply commits on top of another base tip
As someone who uses git a lot and has spent time doing fancy tricks with it:
git checkout --
The staging area is inconsistent (in particular in how it interacts with stash) and not useful enough to be worth the complexity budget. I would remove it entirely.
Yikes, no thanks! The stage is one of my favorite features. Building up a commit piecemeal is soooo much better than the old subversion commit style that encouraged devs to just throw in every dirty file.
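As a concrete, if contrived, illustration with made-up file names, here is a self-contained scratch demo: stage only one of two dirty files, and the other stays out of the commit entirely.

```shell
# Scratch demo in a throwaway repo (file names are hypothetical).
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo

echo 'renderer tweak'  > renderer.c
echo 'logging change'  > logging.c

git add renderer.c                  # stage only the renderer work
git commit -q -m "Tweak renderer"   # logging.c stays out of the commit

git status --short                  # logging.c is still dirty, uncommitted
```

With `git add -p` you can go finer still and stage individual hunks within a single file.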
If you’re committing something that doesn’t include every dirty file you’re almost necessarily committing something you haven’t tested, which may well not even build. That’s a big drag when you come to bisect (which is one of the key advantages of git IME).
If you’re committing something that doesn’t include every dirty file you’re almost necessarily committing something you haven’t tested
Or, y'know, git stash -k -u && rspec
git stash -k -u && rspec
That’s possible, sure, but when one makes the bad practice easier than the good practice one is going to have a bad time.
There is absolutely nothing “bad practice” about building up a proposed commit, testing the staged commit, and then committing it.
It’s certainly a better practice than the “tweaked the renderer, adjusted the logging, added two new export formats, and some other stuff I forgot about” kitchen sink dogshit commits that “commit all dirty files” inevitably leads to.
The bad practice is committing something that doesn’t work. That’s much worse than a commit that contains two unrelated changes (which is not something I see very often - if you’re ending up with that try committing more frequently).
commit --amend accomplishes the same thing with much less confusion.
I like git, but I’ve had the chance to onboard smart developers who are more familiar with mercurial. They’ve made some convincing arguments as to why mercurial is nice to use.
I don’t know if the next billion programmers will use git, but I have no doubt they can use git. Most people come out of a 6 week coding bootcamp knowing how to use git and github. That’s a pretty low bar.
My main problem with just about every article I’ve ever seen espousing this sentiment is that it comes along with zero solutions.
If you’re going to say “Git is bad.” then you need to also offer an alternate model of collaborative version control that works better. So far I haven’t seen that.
I don’t think that’s true. What you’re saying is that I don’t have the right to say something is bad unless I also have the ability to fix it, which seems ridiculous to me. I hear that a lot with open source software, “if you don’t like it, just fork it and change it!”.
Just because I use Git doesn’t mean I know how to write a VCS. There are, however, people who do know how to do that, and I can appeal to them in the hopes that they might agree with me.
Sadly, I hear the same quite often as well; our tech leads and architects prefer Accurev over git. While I can see some benefits to Accurev, git was dismissed as being ‘too computery sciencey’.
git was dismissed as being ‘too computery sciencey’
Speaking as someone who writes software. That’s horrifying.
prefer Accurev over git
I just googled accurev. My gut reaction was that it looks a lot like a nightmare of a product I was once forced to use called IBM Jazz something.
I fundamentally disagree with the premise.
The next billion programmers won’t have computer science degrees or speak English. They’ll be on Chromebooks, or their phones. They’ll touch code in smaller ways and it will only be a part of their job: they might be considered marketers or analysts or designers. And they may not use Git or Github.
These people already exist, they aren’t programmers, and their interfaces are not going to change to look more like programming. They’re going to have version control, but it’s going to look like video games or Google Docs with a mix of auto and manual save points.
Exactly what I was going to say. The introductory sentence is totally contradictory. The author basically says “The next billion programmers won’t be programmers.” Well, then they aren’t programmers. And the author is right, non-programmers don’t need Git. But programmers do.
And the trend in version control software, if you draw a line from CVS to SVN to Git, is for more features and more capabilities, not less.
We do pair programming so the first thing is I talk to my pair about what ticket we want to take next in JIRA (usually there’s more than one with similar priority, but obviously if one stands out that’s the one we take.)
We’ll choose either my or my pair’s cloud9 workspace to work in, create a branch in Git related to the JIRA ticket id, and then open/create relevant files for the ticket.
Make changes to the relevant files (for my job this is usually in PHP, sometimes JS, sometimes HTML/SCSS), update/create unit tests, commit the branch (all unit tests for the repo run on a pre-commit hook) and push it up to the main repo. Then we’ll post a code review, move the ticket into Review status and move on to another ticket.
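In shell terms, that flow might look roughly like this (the ticket id, file name, and remote are all made up, and the pre-commit hook is assumed to run the unit tests; the scratch “origin” here just stands in for the main repo so the sketch is runnable end to end):

```shell
# Self-contained sketch: a throwaway bare repo stands in for the main repo,
# and PROJ-1234 is a made-up JIRA ticket id.
base=$(mktemp -d)
git init -q --bare "$base/origin.git"
git clone -q "$base/origin.git" "$base/work" && cd "$base/work"
git config user.email demo@example.com && git config user.name demo

git checkout -q -b PROJ-1234-fix-export      # branch named after the ticket
echo 'fixed encoding' > export.php           # ...the actual changes + tests...
git add -A
git commit -q -m "PROJ-1234: fix CSV export encoding"
git push -q -u origin PROJ-1234-fix-export   # then post the code review
```

Naming the branch after the ticket id keeps JIRA, the code review, and the git history trivially cross-referenced.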
I really like cloud9 for pair programming. It works really well for us; we use it in SSH workspace mode, so we’re doing development work on our own development servers.