1. 3

    After this batch, we are not planning for a second run (as the production of the phone itself will replace the dev kit in 2019).

    I can tell from painful experience…. unless the production variant has the appropriate debug connectors for low level debugging…… that can be a Very Bad Idea.

    Usually the product build is all about physical size and mechanical integrity…. which really wars with the need to tap out signals from wherever you need to.

    1. 10

      (Warning: not an embedded guy but read embedded articles. I could be really wrong here.)

      “This big push is causing a vacuum in which companies can’t find enough embedded software engineers. Instead of training new engineers, they are starting to rely on application developers, who have experience with Windows applications or mobile devices, to develop their real-time embedded software.”

      I don’t know. I was looking at the surveys that Jack Ganssle posts. They seem to indicate that the current embedded developers are expected to just pick up these new skills like they did everything else. They also indicate they are picking up these skills, since using C with a USB library or something on Windows/Linux isn’t nearly as hard as the low-level C and assembly stuff on custom I/O they’ve been doing. I’m sure there are many companies, either new ones that don’t know how to find talent or established ones wanting to go cheap, that may hire non-embedded people and try to get them to do embedded-style work with big, expensive platforms.

      I think the author is still overselling the draught of engineers, given that all the products constantly coming out indicate there are enough to make them. Plus, many features the engineers will need are being turned into 3rd-party solutions you can plug in, followed by some integration work. That will help a little.

      “developers used “simple” 8-bit or 16-bit architectures that a developer could master over the course of several months during a development cycle. Over the past several years, many teams have moved to more complex 32-bit architectures.”

      The developers used 8-16 bit architectures for low cost, sometimes pennies or a few bucks a chip. They used them for simplicity/reliability. Embedded people tell me some of the ISA’s are easy enough that even coding in assembly is productive on the kinds of systems they’re used in. Others tell me the proprietary compilers can suck so badly they have to look at the assembly generated from their C anyway to spot problems. Also, stuff like WCET analysis. The 8-16-bitters also often come with better I/O options or something, per some engineers’ statements. The 32-bit cores are increasing in number and displacing some of the 8-16 bit MCU’s market share, though. This is happening.

      However, a huge chunk of the embedded industry is cost-sensitive. There will be, for quite a while, a market for 8-bitters that can add several dollars to tens of dollars of profit per unit to a company’s line. There will always be a need to program them. If anything, I’m banking on RISC-V (32-bit MCU) or J2 (SuperH) style designs with no royalties being those most likely to kill most of the market in conjunction with ARM’s MCU’s. They’re tiny, cheap, and might replace the MIPS or ARM chips in portfolios of main MCU vendors. More likely supplement them. This would especially be true if more were putting the 32-bit MCU’s on cutting-edge nodes to make the processor, ROM, and RAM as cheap as 8-bitters. We’re already seeing some of that. The final advantage on that note of 8-16-bitters is that they can be cheap on old process nodes that are also cheap to develop SoC’s on, do analog better, and do RF well enough. As in, the 8-16-bit market uses that to create a huge variety of SoC-based solutions customized to the market’s needs, since the NRE isn’t as high as the 28-45nm nodes that 32-bitters might need to target. They’ve been doing that a long time.

      Note to embedded or hardware people: feel free to correct anything I’m getting wrong. I’ve just been reading up on the industry a lot to understand it better for promoting open or secure hardware.

      1. 3

        nit: s/draught/drought

        1. 2

          Was just about to post the same thing. I don’t normally do typo corrections, but that one really confused me. :)

        2. 2

          Yup. Jack Ganssle is always a good read. I highly recommend that anyone interested in embedded systems subscribe to his Embedded Muse newsletter.

          Whether the Open Cores will beat ARM? Hmm. Arm has such a stranglehold on the industry it’s hard to see it happen…. on the other hand Arm has this vast pile of legacy cruft inside it now, so I don’t know what the longer term will be. (Don’t like your endianness, toggle that bit and you have another; want hardware-implemented Java byte code, well there is something sort of like that available, …..)

          Compilers? It’s hard to beat gcc, and that is no accident. A couple of years ago Arm committed to making gcc as performant as their own compiler. Why? Because the larger software ecosystem around gcc sold more Arms.

          However, a huge chunk of embedded industry is cost sensitive.

          We will always be pushed to make things faster, cheaper, mechanically smaller, with longer battery life,….. If you read some of what Ganssle has been writing about the ultra-low-power stuff, it’s crazy.

          Conversely we’re also always being pushed for more functionality; everybody walks around with a smartphone in their pocket.

          The base expectation these days is smartphone size / UI / functionality / price / battery life ….. which is all incredibly hard to achieve if you aren’t shipping at least 100,000 units…

          So while universities crank out developers who understand machine vision, machine learning, and many other cutting-edge research areas, the question we might want to be asking is, “Where are we going to get the next generation of embedded software engineers?”

          Same place we always did. The older generation were a rag tag collection of h/w engineers, software guys, old time mainframers, whoever was always ready to learn more, and more, and more…

          1. 1

            This week’s Embedded Muse addresses this exact article. Jack seems to agree with my position that the article way overstates things. He says the field will be mixed between the kinds we have now and those very far from the hardware. He makes this point:

            “Digi-Key currently lists 72,000 distinct part numbers for MCUs. 64,000 of those have under 1MB of program memory. 30,000 have 32KB of program memory or less. The demand for small amounts of intelligent electronics, programmed at a low level, is overwhelming. I don’t see that changing for a very long time, if ever.”

            The architecture, cost, etc might change. There will still be tiny MCU’s/CPU’s for those wanting lowest watts or cost. And they’ll need special types of engineers to program them. :)

            1. 1

              Thanks for the inside view. Other thing about Jack is he’s also a nice guy. Always responds to emails about embedded topics or the newsletter, too.

          1. 25

            Nah.

            Libraries have bugs, silicon has bugs, external systems have bugs, but no matter whose fault it is, for a mission critical device I have to follow the bug to wherever it is and fix it.

            And no, I can’t wait for upstream to make a new release.

            If anything the industry is going the other way.

            Yes, we pull in gigabytes of source code from OpenEmbedded. That just means I have gigabytes of source code to go hunting and fixing and enhancing.

            I don’t feel in the least bit extinct.

            The difference is in the amount of functionality I can offer the customer. Literally orders of magnitude more than the good old bad old days of hand craft it ourselves.

            1. 3

              And still, almost all the embedded houses I have been contracted to were in the “we do everything by ourselves, we will need something of our own down the road” mentality. There is so much untapped potential to be unearthed with open source that I have difficulty seeing embedded developers die out soon.

            1. 1

              tl;dw?

              1. 5

                According to the guy behind handmade hero:

                The fact that we currently have hardware vendors shipping both hardware and drivers (with USB and GPUs being two major examples), rather than just shipping hardware with a defined/documented interface, a la x64, or the computers of the 80s, is a very large contributor to the fact that we have basically 3 consumer-usable OSes, and each one is well over 15 million lines of code. These large codebases are a big part of the reason that using software today can be rather unpleasant.

                He proposes that if hardware vendors switched from shipping hardware+drivers to shipping hardware that was well documented in how it is controlled, so that most programmers could program it by feeding memory to/from it (which he considers an ISA of sorts), we’d be able to eliminate the need for drivers as such, and be able to go back to the idea of a much simpler OS.
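
                To make the “feeding memory to/from it” idea concrete, here’s a bare-metal-style C++ sketch against a completely made-up device; every address, register name, and bit below is invented, standing in for what a vendor’s datasheet would document.

                    #include <cstdint>

                    // Hypothetical device. Every address, register, and bit here is
                    // invented; a real part's datasheet would define them.
                    namespace fakedev {
                        constexpr std::uintptr_t BASE = 0x40000000;  // device's MMIO window

                        // Each register is just a word at a documented offset.
                        volatile std::uint32_t* const CTRL   = reinterpret_cast<volatile std::uint32_t*>(BASE + 0x0);
                        volatile std::uint32_t* const STATUS = reinterpret_cast<volatile std::uint32_t*>(BASE + 0x4);
                        volatile std::uint32_t* const DATA   = reinterpret_cast<volatile std::uint32_t*>(BASE + 0x8);

                        constexpr std::uint32_t CTRL_START   = 1u << 0;
                        constexpr std::uint32_t STATUS_READY = 1u << 0;
                    }

                    // With a documented register map, "the driver" shrinks to small
                    // fragments like this, writable from the datasheet alone.
                    std::uint32_t read_sample() {
                        *fakedev::CTRL = fakedev::CTRL_START;  // kick off a conversion
                        while ((*fakedev::STATUS & fakedev::STATUS_READY) == 0) {
                            // spin until the device signals completion
                        }
                        return *fakedev::DATA;                 // fetch the result
                    }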

                I haven’t watched the whole thing yet, but those are the highlights.

                1. 7

                  Oh I would so, so, so, love that to happen…..

                  …but as a guy whose day job is at that very interface I will point this out.

                  The very reason for the existence of microcomputers is to soak up all the stuff that is “too hard to do in hardware”.

                  Seriously, go back to the original motivations for the first intel micros.

                  And as CPU’s have become faster, more and more things get “winmodemed”.

                  Remember ye olde modems? Nice well defined rs-232 interface and standardized AT command set?

                  All gone.

                  What happened?

                  Well, partly instead of having a separate, fairly grunty/costly CPU inside the modem and a serial port… you could just have enough hardware to spit the i/q’s at the PC and let the PC do the work, and shift the AT command set interpreter into the driver. Result: cheaper, better modems, and a huge pain in the ass for open source.

                  All the h/w manufacturers regard their software drivers as an encryption layer on top of their “secret sauce”, their competitive advantage.

                  At least that’s what the bean counters believe.

                  Their engineers know that the software drivers are a layer of kludge to make the catastrophe that is their hardware design limp along enough to be saleable.

                  But to bring their h/w up to a standard interface level would require doing some hard (and very costly) work at the h/w level.

                  Good luck convincing the bean counters about that one.

                  Of course, WinTel regard the current mess as a competitive advantage. It massively raises the barriers to entry to the marketplace. So don’t hold your breath hoping WinTel will clean it up. They created this mess for Good (or Bad depending on view) reasons of their own.

                  1. 1

                    All the h/w manufacturers regard their software drivers as an encryption layer on top of their “secret sauce”, their competitive advantage.

                    I thought the NDA’s and obfuscations were about preventing patent suits as much as competitive advantage. The hardware expert that taught me the basics of the cat and mouse games in that field said there are patents on about everything you can think of in implementation techniques. The more modern and cutting edge, the more dense the patent minefield. Keeping the internals secret means they have to get a company like ChipWorks (now TechInsights) to tear it down before filing those patent suits. Their homepage prominently advertises the I.P.-related benefits of their service.

                    1. 2

                      That too definitely! Sadly, all this comes at a huge cost to the end user. :-(

                  2. 1

                    The obvious pragmatic problem with this model is that hardware vendors sell the most hardware (and sell it faster) when people can immediately use their hardware, not when they must wait for interested parties to write device drivers for it. If the hardware vendor has to write and ship their own device drivers anyway, writing and shipping documentation is an extra cost.

                    (There are also interesting questions about who gets to pay the cost of writing device drivers, since there is a cost involved here. This is frequently going to be ‘whoever derives the most benefit from having the device driver exist’, which is often going to be the hardware maker, since the extra benefit to major OSes is often small.)

                1. 8

                  Sometimes I wonder why we can’t just sit down and code in peace. Why must we introduce a CoC when it’s sufficient to just ban everyone holding up the coding? Why must we have flame wars on mailing lists because someone did something someone else did not like? Getting philosophical but maybe it’s just human to love everyone but the outgroup.

                  Open Source is not politics-free but it’s a hacker’s ethic, “bring your code and make software better”. All this bureaucratic nonsense doesn’t lead to better code, it only leads to including and excluding developers for reasons other than the quality of their code. Maybe we’ll get a utopia one day where software just gets written and nobody cares why.

                  CoC blowups always make me so melancholic.

                  1. 9

                    I sort of wondered about the llvm CoC since it seemed to be one of his core objections…

                    So I actually went to the trouble of looking it up…

                    https://llvm.org/docs/CodeOfConduct.html

                    It’s so innocuous I read it literally three times looking for the problem.

                    There isn’t one.

                    I really can’t see anything that could possibly stop you from “bring your code and make software better”, or why there would be any need to violate that CoC whilst doing so.

                    I can easily/trivially see that people being hurt by violations of that CoC would cease to “bring your code and make software better”.

                    It certainly would be nice if there were never sufficient violations of that pretty common-sense document, or if none of them were severe enough to require any response.

                    And for the vast majority of devs this is the case.

                    Alas, it seems in any project big enough, enough people step across that line (or, sort-of like Sheldon from The Big Bang, simply don’t seem to know where the line is), that such a document is required.

                    1. 6

                      There isn’t one.

                      My personal issue is the clause “In addition, violations of this code outside these spaces may, in rare cases, affect a person’s ability to participate within them, when the conduct amounts to an egregious violation of this code”. As written, I am okay with it. But in practice, this “outside clause” has been responsible for more CoC drama than almost any other clause, so I always advocate striking it out. I voiced that opinion when the Rust CoC and Golang CoC were established. They lack the outside clause.

                      1. 2

                        I’ve detailed some objections in another post here, but I think the fair summary is that the CoC as it’s written is in parts way too specific and in parts too vague.

                        IMO there are two ways to do a CoC. Either you implement the bare minimum, which allows moderators and administrators to apply common sense when enforcing the rules, or you implement it as rigidly and thoroughly as you’d write a lawbook, with no room for wiggling or loopholes. Sadly the LLVM CoC seems to be neither; I would prefer the latter approach if I were to write a CoC.

                        But either way, I think it distracts too much from the “bring your code and make software better” mentality, yes. If that simple rule was implemented as the first suggestion earlier, there shouldn’t be any need for more complicated rules.

                        1. 5

                          I dare say if the moderator is behaving badly, almost any CoC is not going to be nice… But then the problem is the moderator, not the CoC.

                          In my experience though, the problem is by far the very rare person acting in bad faith mixed in with a larger collection of Sheldonesque borderline Aspie’s who honestly don’t have a clue where the line is.

                          The Sheldon’s genuinely aren’t trying to hurt people, they just don’t have the social awareness to know when they are.

                          The Bad Eggs seem to get a perverted buzz from hurting people and then pretending to be Sheldon’s.

                          The CoC is really there as a tool to sift the Sheldon’s from the Bad Eggs.

                          1. 4

                            Just a note though….

                            I actually don’t like Big Bang Theory.

                            It’s like laughing at a cripple with a rubber crutch.

                            A truly socially inept person like Sheldon is hurting himself and those around him all the time, and is way more likely to withdraw into social isolation than to have a tight circle of friends.

                            A good CoC and moderator should be less a legal document and cop, and more an aid to help the Sheldon’s function better in their chosen societies.

                            1. 1

                              Sorry for the late reply.

                              I don’t disagree fundamentally, I think it might be worthwhile to figure out if someone is a Bad Egg or a Sheldon (although I never watched Big Bang Theory).

                              I personally don’t think a CoC is the most effective solution and in the cases where it might be, a lot of organizations choose to pick a badly engineered CoC over a thorough CoC to further a political cause. I don’t think that’s good for orgs that pick good and thorough CoCs, as rarely as that even happens.

                              A properly worded legal document and cop should be able to sift out Sheldon’s without letting through Bad Eggs; it’s just a lot of effort to get there, effort I also don’t see these orgs willing to put in for a CoC.

                      1. 1

                        You’re all welcome to LinuxConf Australasia in Christchurch New Zealand in January 2019 https://lca2019.linux.org.au/

                        1. 6

                          TLDR: The laptop was not tampered with.

                          Still a good read though :-)

                          1. 16

                            That he knows of.

                            1. 5

                              It’s impossible to prove… :)

                              1. 5

                                For sure haha. One can do better than he did, though.

                                For one, he can block evil maid style attacks very cheaply. I’ve done plenty of tamper-evident schemes for that stuff. You can at least know if they opened the case. From there, one can use analog/RF profiling of the devices to detect chip substitutions. It requires specialist, time-consuming skills, or the occasional help of a specialist who gives you a black-box method plus steps to follow for a device they’ve already profiled.

                                The typical recommendation I gave, though, was to buy a new laptop in-country and clear/sell it before you leave. This avoids risks at border crossings where they can legally search or might sabotage devices. Your actual data is retrievable over a VPN after you put Linux/BSD on that sucker. Alternatively, you use it as a thin client for a real system but latencies could be too much for that.

                                So, there’s a few ideas for folks looking into solving this problem.

                                1. 3

                                  This (and the original article) are techno solutions to a techno problem that doesn’t really exist.

                                  If you’re a journo doing this, they will look at your visa and say, you claim to be a journalist, but you have no laptop, we don’t believe you, entry denied.

                                  I’m pretty sure even a very open country like NZ will do this to you. (And if you claim not to be a journalist and then start behaving as one, you’re again violating your visa conditions (ie. working, not visiting); out you go.)

                                  As to spying on what you have on an encrypted drive….. rubber hose code breaking sorts that out pretty quick.

                                  I grew up in the Very Bad Old days and tend to have a very dim view of the technical abilities, patience and human kindness of the average spook.

                                  1. 2

                                    I got the idea from people doing it. They weren’t journalists, though. The other thing people did which might address that problem is take boring laptops with them. They have either nothing interesting or some misinformation. Nothing secret happens on it during the trip. Might even use it for non-critical stuff like YouTube just so it’s different when they scan it on return.

                            2. 5

                              TLDR: The laptop was not tampered with in any way he’d foreseen.

                              To just say the laptop was not tampered with is missing his point completely.

                            1. 5

                              Hmm, this was not quite what I expected. The theoretical part of this article was interesting, but I’m not sure the experimental part gives much information about anything. He took a grand total of three international trips during the experiment, which is a lot fewer than I was expecting. I’m not sure I would expect to find anything on a honeypot PC with such a small number of samples.

                              I mean it’s not a bad story, but it feels like the same story could’ve been written without the mini-experiment, which doesn’t really add any useful data. Although I guess it wouldn’t have had an interesting hook then.

                              1. 5

                                I always keep a tiger repelling rock in my laptop bag. So far, zero tigers have attempted to eat my laptop.

                                1. 2

                                  Ahh, by the way. Your rock is defective.

                                  It has had the unintentional side effect of repelling sharks.

                              1. 8

                                Dragging a window to the top of the screen maximizes the window. FOR GODS SAKE WHY.

                                  It’s the “mile high, mile wide” button pattern. Grab the window bar and flick it against the top of the screen. Boom, maximised. Grab and pull, and it’s normal size again; drag it to your other screen and flick it against the top of the screen, and bang, it’s maximised on that screen.

                                Nice.

                                I use it all the time. Very handy.

                                Relevant XKCD for you… https://xkcd.com/1172/

                                1. 6

                                  Years ago, I had written the web-based IDE used internally by our dev team (long story short, we shipped custom hardware with a custom compiler targeting that hardware with a DSL; I wrote the compiler and a collaborative IDE for the team to write code that they released roughly once a week to customers). This was long enough ago that the concept of a “web-based IDE” was novel.

                                  One time around Thanksgiving I modified the color scheme from the Amiga-inspired blue and gray to fall colors (browns, oranges, etc). No functional changes, just a nice color scheme change to celebrate the holidays.

                                  I had to revert it immediately because a bunch of people complained about how it broke their workflow/moved their cheese/etc.

                                  1. 4

                                      Once had a senior manager who was colour blind, completely missing one colour receptor. Had to maintain a special colour map that made everything look (for normal people) like pizza vomit.

                                1. 11

                                  I disagree with the premise of this article, since I think that it’s totally reasonable in many cases to tell somebody to RTFM (or some more polite variant). That said, it’s a neat reflection and the author does a really good job illustrating why they think we should stop doing it and also by setting up and arguing against some reasonable counterpoints.

                                  1. 9

                                    I think it’s totally valid to say “you can find info on that in the blah section of the manual for foo; you can read it by typing “man foo” in your terminal.”

                                    That sets the expectation that the user do their own work without assuming they were simply too lazy to do so.

                                    1. 1

                                      Of course, if they have Read The Fine Manual… and are still confused.

                                        I walk over to their desk and watch. Probably several things are going wrong and some base misconceptions are interfering with their understanding. ie. They could read that manual until they were blue and still not understand.

                                      Things to do then….

                                      • Watch their debugging strategy. Give handy hints on how to debug a situation like this. (eg. Introduce them to strace.)
                                      • Identify the misunderstanding, pass them TFM which will clear it up.
                                      • Watch for boiling point. A person who is too angry cannot learn anymore. Just solve their problem, which will teach them nothing but save their career for now, and save the teaching for another day.
                                    2. 1

                                      The #1 recommendation in the article is silence, but for myself, learning that I should RTFM was a revelation, right up there with “code is to be read by humans” and “testing is a good thing to do”.

                                      I’d be interested in hearing more stories and polite variants on RTFM. Giving and receiving feedback is hard (to the point that cursing when you do it is sometimes considered acceptable?)

                                    1. 3

                                      Hmm. The best mentor I had in computers (back in the day of Walls Full of Big Paper Manuals) did me a huge favour.

                                      I think he liked me.

                                      As the local guru (or Tohunga if you prefer the NZ term), for everybody else he would give the answer to their questions. Do this, or that.

                                      They would do this, or that, and learn nothing.

                                      Me? Nah. He would grab a manual (or a book) and say, “Read this one”.

                                      I felt honoured that I was seen as fit to follow him.

                                      I try to do the same with those who I feel are capable of following me.

                                      Take it as it is intended. A compliment. I see in you a capacity to learn. A lot.

                                      1. 5

                                        Very strong opinions here…

                                        As far as I’m concerned, strong scrum practices would defeat these issues.

                                        Bad tools are not scrum. Lack of ownership is not scrum.

                                        People who try to use scrum as a way to wrap a process around bad ideas will never benefit from it.

                                        Take the good ideas, apply scrum, and most importantly, adapt to what you learn.

                                        1. 38

                                          adapt to what you learn.

                                          Umm. Points 5 and 6 of TFA?

                                          I’ve learnt from seeing it in practice, both in my own experience and from speaking to many others… The article is pretty spot on.

                                          Ok. Warning. Incoming Rant. Not aimed at you personally, you’re just an innocent bystander, Not for sensitive stomachs.

                                          Yes, some teams do OK on Scrum (all such teams I have observed ignore largish chunks of it), ie. they are not doing certified Scrum.

                                          No team I have observed has done as well as it could have if it had used a lighter-weight process.

                                          Many teams have done astonishingly Badly, while doing perfect certified Scrum, hitting every Toxic stereotype the software industry holds.

                                          Sigh.

                                          I remember the advent of “Agile” in the form of Extreme Programming.

                                          Apart from the name, XP was nearly spot on in terms of a light weight, highly productive process.

                                          Then Kanban came.

                                          And that was actually good.

                                          Then Scrum came.

                                          Oh my.

                                          What a great leap backwards that was.

                                          Scrum takes pretty much all the concepts that existed in XP…. and ignores all the bits that made it work (refactoring, pair programming, test driven development, …), and piles on stuff that slows everything down.

                                          The thing that really pisses me off about Scrum, is the amount of Pseudo Planning that goes on in many teams.

                                          Now planning is not magic. It’s simply a very data intensive exercise in probabilistic modelling.

                                          You can tell if someone is really serious about planning: they track leave schedules and team size changes, have probability distributions for everything, know how to combine them, and update their predictions daily.

                                          The output of a real plan is a regularly updated probability distribution, not a date.
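
                                          As a toy illustration of “a probability distribution, not a date”, here is a minimal Monte Carlo sketch. All the task estimates and distribution parameters below are invented; a real plan would fit them to the team’s own historical data and re-run the model as things change.

                                              #include <algorithm>
                                              #include <cmath>
                                              #include <cstddef>
                                              #include <cstdio>
                                              #include <random>
                                              #include <vector>

                                              int main() {
                                                  std::mt19937 rng(42);

                                                  // Per-task effort in days, modelled as lognormal (long right
                                                  // tail). The medians (3, 5, 8 days) and spreads are made up.
                                                  std::vector<std::lognormal_distribution<double>> tasks = {
                                                      std::lognormal_distribution<double>(std::log(3.0), 0.5),
                                                      std::lognormal_distribution<double>(std::log(5.0), 0.6),
                                                      std::lognormal_distribution<double>(std::log(8.0), 0.7),
                                                  };

                                                  const int runs = 100000;
                                                  std::vector<double> totals;
                                                  totals.reserve(runs);
                                                  for (int i = 0; i < runs; ++i) {
                                                      double total = 0.0;
                                                      for (auto& t : tasks) total += t(rng);  // one simulated project
                                                      totals.push_back(total);
                                                  }
                                                  std::sort(totals.begin(), totals.end());

                                                  // The deliverable is a distribution: report percentiles, not a date.
                                                  for (double p : {0.5, 0.8, 0.95}) {
                                                      std::printf("P%.0f: %.1f days\n", p * 100,
                                                                  totals[static_cast<std::size_t>(p * (runs - 1))]);
                                                  }
                                              }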

                                          You can tell a work place bully by the fact their plans never change, even when a team member goes off sick.

                                          In some teams I have spoken to, Scrum planning is just plain unvarnished workplace bullying by powertripping scrum managers, who coerce “heroes” to work massive amounts of unpaid overtime, creating warm steaming mounds of, err, “technical debt”, to meet sprint deadlines that were pure fantasy to start with.

                                          Yes, if I sound angry I am.

                                          I have seen Pure Scrum Certified and Blessed Scrum used to hurt people I care about.

                                          I have seen Good ideas like Refactoring and clean code get strangled by fantasy deadlines.

                                          The very name “sprint” is a clue as to what is wrong.

                                          One of the core ideas of XP was “Sustainable Pace”…. which is exactly what a sprint isn’t.

                                          Seriously, the one and only point of Agile really is the following.

                                          If being able to change rapidly to meet new demands has high business value, then we need to adapt our processes, practices and designs to be able to change easily.

                                          Somehow that driving motivation has been buried under meetings.

                                          1. 8

                                            I 100% agree with you actually.

                                            I suppose my inexperience with “real certified scrum” is actually the issue.

                                            I think it’s perfectly fine and possible to take plays out of every playbook you’ve mentioned and keep the good, toss the bad.

                                            I also love the idea that every output of planning should be a probabilistic model.

                                            Anyone who gets married to the process they pick is going to suffer.

                                            Instead, use the definitions to create commonly shared language, and find the pieces that work. For some people, “sprint” works. For others, pair programming is a must have.

                                            I think adhering to any single ideology 100% is less like productivity and more like cultish religion.

                                            1. 5

                                              fantasy deadlines

                                              Haha. Deadlines suck so let’s have em every 2 weeks!

                                              1. 3

                                                As they say in the XP world: if it hurts, do it more often.

                                                1. 3

                                                  True. It’s a good idea. One step build pipeline all the way to deployment. An excellent thing, all the pain is automated away.

                                                  If you planned it soundly, then a miss is feedback to improve your planning. As I say, planning is a data intensive modelling exercise. If you don’t collect the data, don’t feed it back into your model… your plans will never improve.

                                                  If it was pseudo planning and a fantasy deadline and the only thing you do is bitch at your team for missing the deadline… it’s workplace bullying and doing it more will hurt more and you get a learned helplessness response.

                                            2. 12

                                              Warning: plain talk ahead, skip this if you’re a sensitive type. Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint: we’re mostly all mediocre. This article reeks of entitlement; I’m a special snowflake, let ME build the product with the features I want! Another hint: no one wants this. Outside of really great teams and great developers, which by definition most of us aren’t, you are not capable.

                                              Because all product decision authority rests with the “Product Owner”, Scrum disallows engineers from making any product decisions and reduces them to grovelling to product management for any level of inclusion in product direction.

                                              This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer whether you realize it or not. If you’ve never worked in a shop where Sales, Marketing and Support all call their pet developers to work on 10 hair on fire bullshit tasks a day, then you’ve been fortunate.

                                              1. 9

                                                Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint we’re mostly all mediocre.

                                                The problem is: Scrum also keeps people mediocre.

                                                Even brilliant people are mediocre, most of the time, when they start a new thing. Also, you don’t have to be a genius to excel at something. A work ethic and time do the trick.

                                                That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy. There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy. It isn’t “This is what you’ll do until you earn your wings” but “You have to do this because you’re only a developer, and if you were good for anything, you’d be a manager by now.”

                                                1. 3

                                                  That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy.

                                                  Inverting the cause and effect here is an equally valid argument: that most developers in fact are disorganized, talentless children, as you say and the sibling comment highlights. We are hijacking the “Engineer” prestige and legal status with none of the related responsibility or authority.

                                                  There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy.

                                                  Are there mentoring and clear career paths in non-Scrum shops? This isn’t a Scrum-related issue. But regardless, anyone who is counting on the Company for self-actualization is misguided. At the end of the day, no matter how much we would all like to think that our contributions matter, they really don’t. To the Company, we’re all just cogs in the machine. Better to make peace with that and find fulfillment elsewhere.

                                                  1. 3

                                                    Scrum does not assume “engineers” at all. It assumes “developers”. Engineers are a highly trained group of legally and ethically responsible professionals. Agile takes the responsibility of being an engineer right out of our hands.

                                                    1. 4

                                                      Engineers are a highly trained group of legally and ethically responsible professionals.

                                                      I love this definition. I have always said there’s no such thing as a software engineer. Here’s a fantastic reason why. Computer programmers may think of themselves as engineers, but we have no legal responsibilities nor ethical code that I am aware of. Anyone can claim to be a “software engineer” with no definition of what that means and no legal recourse for liars. It requires no experience, no formal education, and no certification.

                                                      1. 1

                                                        True, but why?

                                                        IMHO, because our field is in its infancy.

                                                        1. 2

                                                        I dislike this reason constantly being thrown around. Software engineering has existed for half a century; name another discipline where unregulated work and constantly broken products are allowed to exist for that long. Imagine if nuclear engineering were like us. I think the real reason we do not get regulated is that the majority of our field does not need rigor and companies would like a lower salary for engineers, not a higher one. John Doe the web dev does not need the equivalent of an engineering stamp each time he pushes to production, because his work is unlikely to be a critical system where lives are at stake.

                                                          1. 1

                                                          I’m pretty sure that most human disciplines date back thousands of years.

                                                            Nuclear engineering (that is well rooted in chemistry and physics) is still in its infancy too, as both Chernobyl and Fukushima show pretty well.

                                                          But I’m pretty sure that you will agree with me that good engineering takes a few generations if you compare these buildings with this one.

                                                            The total lack of historical perspective in modern “software engineers” is just another proof of the infancy of our discipline: we have to address our shortsighted arrogance as soon as possible.

                                                            1. 1

                                                            We’re talking about two different things. How mature a field is isn’t a major factor in regulation. Yes, I agree with your general attitude that things get better over time and we may not be at that point. But we’re talking about government regulating the supply of software engineers. That decision has more to do with public interests than with how good software can be.

                                                              1. 1

                                                              That decision has more to do with public interests than with how good software can be.

                                                                I’m not sure if I agree.

                                                              In my own opinion current mainstream software is so primitive that anybody could successfully disrupt it.

                                                              So I agree that software engineers should feel much more politically responsible for their own work, but I’m not sure if we can afford to disincentivize people from reinventing the wheel, because our current wheels are triangular.

                                                                And… I’m completely self-taught.

                                                  2. 3

                                                This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer whether you realize it or not.

                                                    While I agree with the idea of this, you did point out that this works well with mediocre teams and, IME, this gatekeeping is destructive when you have a mediocre gatekeeper. I’ve been in multiple teams where priorities shift every week because whoever is supposed to have a vision has none, etc. I’m not saying scrum is bad (I am not a big fan of it) but just that if you’re explicitly targeting mediocre groups, partitioning of responsibility like this requires someone up top who is not mediocre. Again, IME.

                                                    1. 2

                                                  Absolutely, and the main benefit for development is the shift of blame and responsibility to that higher level, again, if done right. Ie there has to be a ‘paper trail’ to reflect the churn. This is where jira (or whatever ticketing system) helps, showing/proving scope change to anyone who cares to look.

                                                  Any organization that requires this level of CYA (cover your ass) is not worth contributing to. Leeching off of, sure :)

                                                      1. 2

                                                        So are you saying that scrum is good or that scrum is good in an organization that you want to leech off of?

                                                        1. 1

                                                      I was referring to the case the gp proposed where the gatekeepers themselves are mediocre and/or incompetent; in the case scapegoats are sought, the agile artifacts can be used to effectively shield development, IF they’re available. In this type of pathological organization, leeching may be the best tactic, IMO. Sorry that wasn’t clear.

                                                    2. 3

                                                      I’m in favour of having a product owner.

                                                      XP had one step better “Onsite Customer” ie. You could get up from your desk and go ask the guy with the gold what he’d pay more gold for and how much.

                                                  A product owner is a proxy for that (and prone to all the ills proxies are prone to).

                                                  Where I note things go very wrong is if the product owner’s ego inflates to thinking he is perhaps project manager, and then team lead as well, and then technical lead, rolled into one god-like package…. Trouble is brewing.

                                                      Where a product owner can be very useful is in deciding on trade offs.

                                                      All engineering is about trade offs. I can always spec a larger machine, a more expensive part, invest in decades of algorithm research… make this bigger or that smaller…

                                                      • But what will a real live gold paying customer pay for?
                                                      • What will they pay more for? This or That? And why? And how much confidence do you have? Educated guess? Or hard figures? (ps: I don’t sneer at educated guesses, they can be the best one has available… but it gives a clue to the level of risk to know it’s one.)
                                                      • What will create the most re-occurring revenue soonest?
                                                      • What do the customers in the field care about?
                                                      • How are they using this system?
                                                      • What is the roadmap we’re on? Some trade offs I will make in delivering today, will block some forks in the road tomorrow.

                                                      Then there is a sadly misguided notion, technical debt.

                                                      If he is wearing a project manager hat, there is no tomorrow, there is only The Deadline, a project never has to pay back debt to be declared a success.

                                                  If he is wearing a customer’s hat, there is no technical debt, if it works, ship it!

                                                      Since he never looks at the code….. he never sees what monsters he is spawning.

                                                      The other blind spot a Product Owner has is about what is possible. He can only see what the customers ask for, and what the competition has, or the odd gap in our current offering.

                                                      He cannot know what is now technologically feasible. He cannot know what is technologically desirable. So engineers need wriggle room to show him what can or should be done.

                                                      But given all that, a good product owner is probably worth his weight in gold. A Bad One will sink the project without any trace, beyond a slick of burnt out and broken engineers.

                                                  1. 2

                                                    Still makes me sad that even in UTF-8 there are invalid code points. ie. You have to double inspect every damn byte if you’re doing data mining.

                                                    Typically in data mining you are presented with source material. It’s not your material, it’s whatever is given to you.

                                                     If somebody has screwed up the Unicode encoding, you can’t fix it. You have to work with whatever hits the fan, and everything else in your ecosystem is going to barf if you throw an invalid code point at it, even if it was just going to ignore it anyway.

                                                    So you first have to inspect every byte and see if it’s a valid code point and then on the fly squash them to the special invalid thingy. ie. Double work for each byte and you can’t just mmap the file.
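
                                                     For the curious, a rough sketch of that validate-and-squash pass; the usual “special invalid thingy” is U+FFFD, the replacement character. This simplified version squashes one byte at a time, where the Unicode recommendation is a little more subtle about how many replacements a bad sequence becomes.

                                                         #include <cstddef>
                                                         #include <string>

                                                         std::string scrub_utf8(const std::string& in) {
                                                             static const std::string kReplacement = "\xEF\xBF\xBD";  // U+FFFD
                                                             std::string out;
                                                             out.reserve(in.size());

                                                             std::size_t i = 0;
                                                             while (i < in.size()) {
                                                                 const unsigned char b = in[i];
                                                                 // How long a sequence does this lead byte claim?
                                                                 const std::size_t len = b < 0x80            ? 1
                                                                                       : (b & 0xE0) == 0xC0 ? 2
                                                                                       : (b & 0xF0) == 0xE0 ? 3
                                                                                       : (b & 0xF8) == 0xF0 ? 4
                                                                                                            : 0;  // invalid lead byte
                                                                 bool ok = len != 0 && i + len <= in.size();
                                                                 for (std::size_t k = 1; ok && k < len; ++k)
                                                                     ok = (static_cast<unsigned char>(in[i + k]) & 0xC0) == 0x80;
                                                                 // Reject overlongs, surrogates, and values past U+10FFFF.
                                                                 const unsigned char b1 =
                                                                     ok && len > 1 ? static_cast<unsigned char>(in[i + 1]) : 0;
                                                                 if (ok && len == 2 && b < 0xC2)                ok = false;
                                                                 if (ok && len == 3 && b == 0xE0 && b1 < 0xA0)  ok = false;
                                                                 if (ok && len == 3 && b == 0xED && b1 >= 0xA0) ok = false;  // surrogates
                                                                 if (ok && len == 4 && b == 0xF0 && b1 < 0x90)  ok = false;
                                                                 if (ok && len == 4 && (b > 0xF4 || (b == 0xF4 && b1 >= 0x90))) ok = false;

                                                                 if (ok) { out.append(in, i, len); i += len; }  // copy valid sequence
                                                                 else    { out += kReplacement;    i += 1;   }  // squash one bad byte
                                                             }
                                                             return out;
                                                         }

                                                     The same scan doubles as a detector: if nothing got replaced, the blob was valid UTF-8.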

                                                    Ah for The Good Old Bad Old Days of 8bit ascii.

                                                    1. 6

                                                      Still makes me sad that even in UTF-8 there are invalid code points. ie. You have to double inspect every damn byte if you’re doing data mining.

                                                      I disagree. It’s an amazing feature of UTF-8 because it allows me to be certain to exclude utf-8 from a list of possible encodings a body of text might have. No other 8-bit encoding has that feature. A blob of bytes that happens to be text encoded in ISO-8859-1 looks exactly the same as a blob of bytes that is encoded in ISO-8859-3, but it can’t be utf-8 (at least when it’s using anything outside of the ASCII range).

                                                      Ah for The Good Old Bad Old Days of 8bit ascii.

                                                       If you need to make sense of the data you have mined, the Old Days were as bad as the new days are, because you’re still stuck having to guess the encoding by interpreting the blob of bytes as different encodings and then trying to see whether the text makes sense in any of the possible languages that could have been used in conjunction with your candidate encoding.

                                                      This is incredibly hard and error-prone.

                                                      1. 1

                                                        I guess I’d like a Shiny New Future where nobody tries to guess encoding, because standards bodies and software manufacturers insist on making it explicit, and all software by default splats bad code points to invalid without doing something really stupid like throwing an exception….

                                                        Sigh.

                                                         I guess for decades to come I’ll still remember the Good Old Bad Old days of everything is Ascii (and if it wasn’t we carried on anyway) fondly….. I’m not going to hold my breath waiting for a sane future.

                                                      2. 2

                                                        Ah for The Good Old Bad Old Days of 8bit ascii.

                                                        It wasn’t ASCII, and that’s the point: There was no way to verify what encoding you had, even if you knew the file was uncorrupted and you had a substantial corpus. You could, at best, guess at it, but since there was no way to disprove any of your guesses conclusively, that wasn’t hugely helpful.

                                                        I remember playing with the encoding functionality in web browsers to try to figure out what a page was written in, operating on the somewhat-optimistic premise that it had a single, consistent text encoding which didn’t change partway through. I didn’t always succeed.

                                                        UTF-8 is great because absolutely nothing looks like UTF-8. UTF-16 is fairly good because you can usually detect it with a high confidence, too, even without a BOM. UCS-4 is good because absolutely nobody uses it to store or ship text across the Internet, as far as I can tell.

                                                      1. 2

                                                         Another very effective way of thinking about this is that “constructor” was a bad choice of name.

                                                        They never should have been called that.

                                                        The fact that “Too much Work being done in a Constructor” is a code smell is a clue.

                                                        They should be called “Name Binders”.

                                                        They take a collection of values and bind them to instance variable names.

                                                        1. 4

                                                          One of my beefs with most OOP languages is that constructors can run arbitrary code. It makes it really hard to know what is valid and what is not in them. I much prefer ML-family languages where constructors are really just allocating space and setting values, and that’s it. If you want to do something special to construct then it’s just a regular function.
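
                                                           A sketch of that split in C++ terms, with all names invented: the constructor is a pure name binder, and anything fallible or expensive lives in an ordinary function.

                                                               #include <fstream>
                                                               #include <optional>
                                                               #include <string>
                                                               #include <utility>

                                                               // The constructor only binds already-computed values
                                                               // to member names; it cannot fail and does no work.
                                                               class Config {
                                                               public:
                                                                   Config(std::string host, int port)
                                                                       : host_(std::move(host)), port_(port) {}

                                                                   const std::string& host() const { return host_; }
                                                                   int port() const { return port_; }

                                                               private:
                                                                   std::string host_;
                                                                   int port_;
                                                               };

                                                               // The real work (I/O, parsing, validation) lives in a
                                                               // plain function that returns an instance only on success.
                                                               std::optional<Config> load_config(const std::string& path) {
                                                                   std::ifstream in(path);
                                                                   std::string host;
                                                                   int port = 0;
                                                                   if (!(in >> host >> port) || port <= 0) return std::nullopt;
                                                                   return Config(std::move(host), port);
                                                               }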

                                                          1. 1

                                                            One of my beefs with most OOP languages is that constructors can run arbitrary code.

                                                            If you’re not a fan of code in constructors, I’d stay away from LLVM. I’m not sure if this is still the case, but there are entire transformations that get done in constructors in some cases.

                                                            I expect this is because people have little interest in writing multiple classes to represent the different stages of getting to the final result. Although anecdotal, I’ve noticed it done more in C++ than in other languages.

                                                            1. 1

                                                              I haven’t messed with LLVM and have no plans on it as of yet, but thanks for the tip :)

                                                              Function and data please, C got this right long ago while getting so much else wrong!

                                                              1. 1

                                                                That’s not to say I advocate for the way LLVM does these things, but I will admit, in my latest project (which is C++), the team has agreed that doing work in the constructor is not all that bad. Pulling the work out of the constructor would end up over-engineering things and the desire to go back and refactor it is low.

                                                                1. 1

                                                                   How would it be overengineering things? The style I advocate is to do everything other than name binding in functions, so one would simply call a function rather than a ctor directly; this shouldn’t be any kind of engineering change. Maybe C++ has some limitation in terms of this though.

                                                                  1. 3

                                                                    C++ mostly encourages you to do things in the constructor with the RAII mantra. If you don’t use the constructor to initialize, then you end up with an init function or you have a static creation function that probably needs access to private instance members, which means you’re writing getters/setters. So you end up with this sort of thing.

                                                                    Foo f(1, 2);
                                                                    f.init();
                                                                    

                                                                    or

                                                                    Foo f = Foo::create(1, 2);
                                                                    

                                                                     In the former case you now have to go through the process of calling init all the time, and it makes inheritance, if you end up using it, painful. In the latter case, you have to carefully manage move and copy constructors or risk paying for the overhead (or using pointers for everything, and then you’re right back to the destruction problem that constructors/destructors help deal with). In either case, you end up fighting the language.

                                                        1. 11

                                                          Title is sort of the wrong message.

                                                          The right message is the critical region should be as small as possible, and only protect against racy access to shared resources.

                                                          Everything else that is in the critical region, but doesn’t need to be, increases contention without value.
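                                                          A minimal sketch of the difference (std::mutex here, but the shape is the same for any lock):

                                                              #include <cstdio>
                                                              #include <mutex>
                                                              #include <string>

                                                              std::mutex m;
                                                              std::string shared_state;

                                                              // I/O inside the lock: every other thread waits on the printf.
                                                              void logStateBad() {
                                                                  std::lock_guard<std::mutex> g(m);
                                                                  std::printf("%s\n", shared_state.c_str());
                                                              }

                                                              // Critical region is just the racy read; the slow I/O happens unlocked.
                                                              void logStateGood() {
                                                                  std::string copy;
                                                                  {
                                                                      std::lock_guard<std::mutex> g(m);
                                                                      copy = shared_state;
                                                                  }
                                                                  std::printf("%s\n", copy.c_str());
                                                              }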

                                                          1. 3

                                                            I think that’s a fair summary. Having worked with Go for a while now, I think it’s interesting to try to formalize the guiding principles of writing robust thread-safe code. Keeping the critical section as small as possible is a well-known idea, and yet I still see examples of people locking around I/O, which might be an accident of the lock/defer unlock pattern.

                                                            1. 3

                                                              If the I/O is the contended-for shared resource, eg. interleaving the output from multiple threads into a single stream, then yes, the I/O should also be “locked around”.

                                                              The point isn’t I/O versus not-I/O; the point is shared resources, any shared resource that, as /u/tscs37 said below, needs to be accessed atomically.

                                                              It is very useful to think in terms of invariants. ie. Relationships that must always hold between data items.

                                                              If a thread race may break the invariant (either by leaving the variables inconsistent relative to each other, or by providing a thread with an inconsistent snapshot of them) then you must lock around them to protect the invariant.
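                                                              eg. a small sketch (invented example): the invariant here is that the two balances always sum to the same total, so both writes, and any consistent snapshot, must sit under the one lock.

                                                                  #include <mutex>

                                                                  std::mutex m;
                                                                  int checking = 100, savings = 0;  // invariant: checking + savings == 100

                                                                  void transfer(int amount) {
                                                                      std::lock_guard<std::mutex> g(m);  // protects the relationship, not one variable
                                                                      checking -= amount;
                                                                      savings  += amount;  // invariant is broken between these lines, but only under the lock
                                                                  }

                                                                  int total() {
                                                                      std::lock_guard<std::mutex> g(m);  // consistent snapshot of both
                                                                      return checking + savings;
                                                                  }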

                                                              However, locking a larger region than absolutely necessary doesn’t make you safer; it merely, as you rightly observed, always increases (sometimes drastically) the latency of your system.

                                                              It also, as /u/bio_end_io_t observed below, increases the opportunities for things to go very wrong. eg. Deadlocks and priority inversions.

                                                              which might be an accident of the lock/defer unlock pattern.

                                                              In C, which lacks the nifty Go “defer” thing, or the C++ RAII, or the Ruby “ensure”, I have taken to actually introducing an entire new scope for each lock…. eg.

                                                              void myFunc(void)
                                                              {
                                                                   DoStart();

                                                                   lock();{
                                                                       /* critical section: keep as little as possible in here */
                                                                       DoA();
                                                                       DoB();
                                                                   };unlock();

                                                                   DoEnd();
                                                              }
                                                              

                                                              And then make sure as little as possible is in that scope.

                                                              Sort of makes it in your face obvious which is the critical section. (Which is especially visually handy if you need to take two nested locks, and allows you to rapidly check that you always take them in the same order everywhere.)

                                                              And Obvious is a Very Good Thing when dealing with multithreaded code.

                                                              1. 2

                                                                This is a fantastic response. Thank you for the thoughtful comments.

                                                            2. 3

                                                              I agree that keeping critical regions small and not locking around I/O are good rules of thumb.

                                                              There are some subtleties, since you need to know what exactly is “as small as possible”. For example, in general, you don’t want to call kmalloc inside of a spinlock because it may block. Of course, you can pass the GFP_ATOMIC flag to kmalloc to avoid the block, but in most cases, you can allocate outside the critical section anyway.

                                                              So really, you want to avoid blocking inside a critical region, but use something like a mutex or semaphore that can release the CPU if there is a chance you will need to sleep inside the critical region, because quite frankly, that can happen. I/O tends to be something that can block, so if you NEED locking around I/O, avoid using spinlocks.

                                                              Edit: fixed typo. should say “that” instead of “than”
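                                                              The user-space analogue of “allocate outside the critical section” might look like this (a sketch with std::mutex and invented names, kernel specifics like GFP_ATOMIC aside):

                                                                  #include <memory>
                                                                  #include <mutex>
                                                                  #include <vector>

                                                                  std::mutex m;
                                                                  std::vector<std::unique_ptr<int>> shared_list;

                                                                  void addItem(int value) {
                                                                      // The allocation may be slow or block: do it before taking the lock.
                                                                      auto item = std::make_unique<int>(value);

                                                                      std::lock_guard<std::mutex> g(m);        // critical region: just the list update
                                                                      shared_list.push_back(std::move(item));  // (push_back may still allocate;
                                                                                                               // reserve ahead of time if that matters)
                                                                  }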

                                                              1. 2

                                                                There are two sides to it. There is of course the case where you need to keep the critical section as small as possible, and I think that covers the majority of the code.

                                                                There would also be code that needs to operate atomically, in transactions. This would mainly affect anything that needs to process a lot of data while holding the lock for it. In that case you want the lock to be as wide as possible, so as not to have to re-enter the locked area all the time.
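                                                                A sketch of that trade-off (invented example): when the work is genuinely transactional, take the lock once around the batch rather than once per item.

                                                                    #include <mutex>
                                                                    #include <vector>

                                                                    std::mutex m;
                                                                    std::vector<int> shared;

                                                                    // One lock round-trip per item: fine when items trickle in.
                                                                    void addOne(int v) {
                                                                        std::lock_guard<std::mutex> g(m);
                                                                        shared.push_back(v);
                                                                    }

                                                                    // One lock around the whole transaction: the batch appears atomically,
                                                                    // and we avoid re-entering the locked area once per element.
                                                                    void addBatch(const std::vector<int>& batch) {
                                                                        std::lock_guard<std::mutex> g(m);
                                                                        shared.insert(shared.end(), batch.begin(), batch.end());
                                                                    }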

                                                              1. 4

                                                                Then things got really weird.

                                                                If you thought it was unbelievable up to that point in the story, it’s almost unfathomable after this statement. It’s truly a bizarre and amazing tale.

                                                                1. 2

                                                                  It sort of makes me sad though.

                                                                  They invested all that effort and intelligence to save a large corporate from its stupidity.

                                                                  It would have been kinder to all humanity if they hadn’t rewarded such stupidity.

                                                                  1. 6

                                                                    If the software being completed and shipped did good for people, then “it would have been kinder if they hadn’t rewarded such stupidity” isn’t an obvious fact.

                                                                    Do you really think Apple not shipping this particular program would have hurt them in such a way that they would have learned how to be “not stupid” (in whatever way you’re meaning those words)? And would it have overcome the educational benefit of the software they created?

                                                                    1. 2

                                                                      Nah, I think you’re reading in some kind of loyalty to their ex-employer that I don’t see in this. They weren’t trying to save Apple from being stupid. They were just deeply devoted to building a thing that they loved, and saw an opportunity to hijack a distribution channel to make it available to the world at a scale not otherwise possible. The former is such a common story in software that it’s hardly worth telling on its own. The latter (and their amazing success at it) is what makes this such a great tale.

                                                                      Remember, all this is taking place in the early nineties. Open source as we know it wasn’t a thing yet. If the local corporate bureaucracy had been more effective at kicking them out, they probably could have finished the product and distributed it as shareware… but it couldn’t have had a tiny fraction of the user base that bundling with system software gave them. Personally, as one of those users, I’m grateful for their courage. Graphing Calculator beat the hell out of a TI-84, which was my other viable option as a starving student.

                                                                      1. 1

                                                                        No, I didn’t read it as loyalty to the employer, but rather as a clueless employer, in the long run, benefiting hugely from effort and intelligence it actively didn’t deserve, thus reinforcing bad behaviour.

                                                                        Remember, all this is taking place in the early nineties. Open source as we know it wasn’t a thing yet.

                                                                        According to my memory, you are wrong. Very wrong.

                                                                        Hmm. Let’s see if it is just my memory…

                                                                        https://www.gnu.org/gnu/gnu-history.html

                                                                        But my comments are not coming only from the Open Source perspective, but rather from the view that the world is not just one man, one vote.

                                                                        We get the world we vote for, and pay for, and work for.

                                                                        Often we only pause to think about what sort of world we are voting for, when the most important choice is about what sort of world we are working for.

                                                                        Edit to add: If they had climbed aboard the Open Source train at the time, it would have picked up steam sooner, and had less resistance from the corporate world.

                                                                        In a kind of silly contest… Gee, I wish I were so starving as to be able to afford an Apple in those days. Those things were Expensive. All I could afford was a Taiwanese 286 PC clone. Open source was life to me in those days. ;-)

                                                                        1. 1

                                                                          I guess that’s what I get for saying “we”. If you were using GNU/Linux for personal use in the early nineties, good for you… but surely you must admit that you were ahead of the curve. Anyway, porting their half-finished PowerPC code to a hobbyist PC version of a mainframe OS, one without even a GUI (X was first ported to Linux in ’92), would have made zero sense at the time. I would be very surprised if they even considered it.

                                                                          I completely agree about how important it is to work in accordance with your values. But, I think that’s what the protagonists in this story were doing! I don’t understand what you see as “bad behaviour”. I see it as dedicated self-sacrifice and hackerly subversion deployed to finish a project despite (and amidst!) dysfunctional management. But there’s definitely a Cult of Mac aspect to it too, and maybe that’s what you object to?

                                                                          And silly-contest move: I was using the school’s Macs, of course. Like most of their intended user base, no doubt. Never could have afforded one of those machines myself, but sure was happy to have them available!

                                                                          1. 3

                                                                            There was free software for MS-DOS, and graphing programs for MS-DOS, in those days.

                                                                            The early GUIs were very underwhelming in terms of functionality added for resources and effort consumed. I literally cried when I saw the first “Hello World” programs for GUIs. They were an immense leap backwards.

                                                                            I think the engineers were great, following their passion. But I think the corporate culture was indulging in a lot of bad behaviour.

                                                                            Unfortunately the engineers’ efforts rewarded the corporate stupidity, enabling the corporate culture to flourish and inflict more stupidity on the planet.

                                                                            ie. Their efforts got schools spending money on very, very closed platforms like Macs instead of the more open (and more powerful and cheaper) platforms available at the time. Thus slowing the development of open platforms.

                                                                  1. 8

                                                                    Best I have ever seen was Manfred von Thun’s joy0

                                                                    http://www.kevinalbrecht.com/code/joy-mirror/jp-joyjoy.html

                                                                    joy0  == 
                                                                        [ [ [ joy0          body            joy0     ] 
                                                                            [ []                                     ] 
                                                                            [ pop           pop             pop      ] 
                                                                            [ cons          pop             cons     ] 
                                                                            [ opcase        pop             opcase   ] 
                                                                            [ body          pop             body     ] 
                                                                            [ i             pop             joy0     ] 
                                                                            [ step          pop [joy0] cons step     ] 
                                                                            [               [] cons         i        ] ] 
                                                                          opcase 
                                                                          i ] 
                                                                        step
                                                                    
                                                                    1. 3

                                                                      I dunno, what about John McCarthy’s Lisp in 1959? :-)

                                                                      http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/

                                                                      EDIT: Also I should note that this thread is not about 100-line compilers. It’s about compilers where the overall structure is described in 100 lines. My compiler is 8000 lines.

                                                                      1. 4

                                                                        McCarthy’s book was fantastic, an absolute eye-opener for me when I bought a copy way back in the 1980s.

                                                                        But joy0 beats Lisp for brevity. The heart of that brevity is that Joy overloads list concatenation with function composition.

                                                                        I’m still undecided whether that is a curious accidental shorthand or a fundamental advantage.

                                                                        Every time I reread http://www.nsl.com/papers/rewritejoy.html or http://www.kevinalbrecht.com/code/joy-mirror/j04alg.html I lean towards thinking it might be a fundamental advantage.

                                                                        1. 3

                                                                          re EDIT. Yeah, that should be in the original question. I totally thought you meant 100 LOC; that’s what it almost always means. Your question is kind of more interesting now that you’re talking about 100 lines of structure or description. Different paradigms might come up with some interesting structures we don’t normally see.

                                                                      1. 26

                                                                        April Fools’ Day is over, can this please go away?

                                                                        1. 11

                                                                          April Fools is always a pain in the butt here in New Zealand…… it hits us first and then just goes on and on and on.

                                                                          It’s several days before we can trust anything on the ’net again.

                                                                          I always thought that if the Russians were serious about attacking NATO, they just had to paint their tanks pink and invade on April 1st.

                                                                          They’d be at the English Channel before everybody stopped laughing.

                                                                        1. 2

                                                                          894 distinct user agents were spotted across 4646 distinct IP addresses

                                                                          Wow…. that seems…. odd.

                                                                          Hmm.

                                                                          http://useragentstring.com/pages/useragentstring.php/

                                                                          That would suggest most of those distinct IP addresses are bots and crawlers of some ilk.

                                                                          Although I probably would show up as a Firefox browser and a Feedbro feed reader.

                                                                          1. 3

                                                                            Lobsters gets its fair share of bots and I got the impression they stepped up their crawling with so much “content change”.

                                                                            If folks are curious about these sorts of stats, they can write queries I’ll run on prod logs.

                                                                            1. 2

                                                                              Could also be IPv6 with privacy extensions.

                                                                              1. 1

                                                                                I think it’s caused by the fact that many users have identical user agent strings.

                                                                                1. 1

                                                                                  Well, no, that’s what is odd.

                                                                                  That’s about 5 IP addresses per user agent.

                                                                                  If one made the reasonable assumption that everybody is on one of the later Firefox, Internet Exploder, or Opera browsers… OK, let’s be generous and assume the major browsers each have maybe 5 versions represented… that’s about 50 different user agents.

                                                                                  Usage share of all browsers

                                                                                  Chrome           | 57.46%
                                                                                  Safari           | 14.39%
                                                                                  UC               |  7.91%
                                                                                  Firefox          |  5.50%
                                                                                  Opera            |  3.69%
                                                                                  IE               |  3.06%
                                                                                  Samsung Internet |  2.92%
                                                                                  Edge             |  1.86%
                                                                                  Android          |  1.72%
                                                                                  Others           |  1.47%

                                                                                  Still suggests to me a lot of things other than humans are reading lobste.rs

                                                                                  1. 1

                                                                                    User agent strings are highly distinctive. They tend to include exact point releases of browsers, OSes, and often multiple shared libraries. These numbers look typical to me.

                                                                              1. -1

                                                                                Hmm. I like early return code…. but I will note a larger and worse anti-pattern.

                                                                                Someone writes a function as described in TFA…. and then someone else (sometimes the same person) invokes that function….

                                                                                …but doesn’t know what to do if it returns an error code.

                                                                                Often the error code really means “You failed to meet my preconditions, therefore whoever invoked me has a bug, and the only fix is to fix the code that called me”.

                                                                                Of course, the client code, believing that it is Doing Things Right, casts the error code to (void).

                                                                                Then everybody wonders why the code is flaky and sporadically does The Wrong Thing.

                                                                                Of course, if the function concerned had said “You have failed to meet my preconditions, fix yourself now, here is a handy stack trace of how we got here, call me again when you aren’t so full of shit.” ie. Invoked abort() instead of return, we wouldn’t be shipping bugs.
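                                                                                eg. a minimal sketch (invented names); assert() reports file and line and calls abort(), and a core dump or debugger gives you the stack trace:

                                                                                    #include <cassert>

                                                                                    // A precondition violation is a caller bug: fail loudly, right here,
                                                                                    // rather than returning an error code that will be cast to (void).
                                                                                    // (assert() is compiled out under NDEBUG; use an always-on check
                                                                                    // if you ship release builds.)
                                                                                    void Do_X(int* buffer, int length) {
                                                                                        assert(buffer != nullptr && "caller passed a null buffer");
                                                                                        assert(length > 0 && "caller passed an empty range");
                                                                                        // ... the actual work; when Do_X() returns, X has been done ...
                                                                                    }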

                                                                                To me, error codes are a code smell. They have a narrow range of applicability (eg. where races may occur between query and use), but usually they are just stink.

                                                                                Usually they are A Good Idea if the Mad Irrational Lazy Idiot Who Asked me to Do X turns into a conscientious hyperrational being who checks these return codes and magically starts writing correct error handling code instead of his usual crap.

                                                                                Usually they turn Do_X() from a verb, where I know X has been done when Do_X() completes, into “X might have been done (unless anyone anywhere has been an idiot at any time).”

                                                                                Guess which code is easier to analyze, debug and maintain?

                                                                                ie. I usually refactor to early return.

                                                                                And then analyze whether any clients are doing anything sane with the results.

                                                                                Usually no.

                                                                                Then I replace the checks with asserts that abort() on failure.

                                                                                Guess what?

                                                                                I find and fix a whole bunch of bugs in the process.

                                                                                1. 2

                                                                                  Yes to asserting preconditions. But error codes are a code smell? That’s outrageous. Plenty of things legitimately fail in ways other than invalid preconditions.

                                                                                  1. 1

                                                                                    Plenty of things legitimately fail in ways other than invalid preconditions.

                                                                                    True.

                                                                                    Especially for something like the POSIX open() call: there is fundamentally a race between open() and any other processes doing things to files and directories, and only open() can tell you, via an error code, whether you won.

                                                                                    So, no, an error code is not a code smell in such a case.
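                                                                                    eg. a sketch (POSIX open(), callable from C++; the function name is invented): only the return from open() can tell you whether you won the race.

                                                                                        #include <cerrno>
                                                                                        #include <cstdio>
                                                                                        #include <cstring>
                                                                                        #include <fcntl.h>

                                                                                        int openConfig(const char* path) {
                                                                                            int fd = ::open(path, O_RDONLY);
                                                                                            if (fd < 0) {
                                                                                                // Not a caller bug: another process may have unlinked or
                                                                                                // renamed the file between any check we could make and this call.
                                                                                                std::fprintf(stderr, "open(%s): %s\n", path, std::strerror(errno));
                                                                                            }
                                                                                            return fd;  // the caller decides what a missing file means for it
                                                                                        }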

                                                                                    However, they are a lot less common than people think.

                                                                                    However, if I review your code and find…

                                                                                    • The functions you write return error codes.
                                                                                    • And nothing anywhere handles them properly.
                                                                                    • And all the error handling code is untested and untestable.
                                                                                    • Or worse just casts away the error code.

                                                                                    I bet you I will find bugs in your code.

                                                                                    I will rewrite it to not return error codes but to just die messily as soon as the problem is found.

                                                                                    I will massively reduce your line count and improve your readability, and ship something that is less flaky.

                                                                                    And I will find more bugs in the code that calls your routines.

                                                                                    How do I know?

                                                                                    Decades of doing just this.

                                                                                    Error codes are typically a sign of a programmer abdicating responsibility: “It’s too hard to think about how this is being used; I will throw the problem upward and outward and let that guy handle it.”

                                                                                    Except “that guy” is seldom any better than you, often has less he can do about it, and is busy thinking about other things.

                                                                                    http://www.monkeyuser.com/2017/future-self/