1. 13
    1. 6

      My compliments to the author. None of these things are unknown in the software world, but the pushback on all of these things is sadly real.

      One thing the author misses (and I say this not to disagree with anything in here, it’s great) is that in software it’s not just that changes are easier, it’s that in practice specification is almost impossible - “requirements” come from people who don’t know what they want, can’t express themselves with precision, and struggle to understand that they’re asking for things that are contradictory. In these circumstances, testing is likely a waste of time. Instead, static checking that eliminates as many such footguns as you can afford, consistent with ease of iteration, is often the best choice.
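
      To make “static checking that eliminates footguns” concrete, here’s a sketch of my own (not from the article), written with a checker like mypy in mind: model a lifecycle as distinct types so a contradictory state can’t even be constructed, instead of writing tests to catch it at runtime. All names here are invented for illustration.

```python
# Sketch: with boolean flags, a contradictory "shipped but never paid"
# order is representable and must be caught by tests. Modelling the
# lifecycle as separate types makes the contradiction inexpressible,
# so a static checker rejects it before any test runs.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Draft:
    items: tuple[str, ...]

@dataclass(frozen=True)
class Paid:
    items: tuple[str, ...]
    payment_id: str

@dataclass(frozen=True)
class Shipped:
    items: tuple[str, ...]
    payment_id: str   # shipping without a payment cannot be constructed
    tracking_id: str

Order = Union[Draft, Paid, Shipped]

def ship(order: Paid, tracking_id: str) -> Shipped:
    # Only a Paid order can be shipped; passing a Draft is a type error.
    return Shipped(order.items, order.payment_id, tracking_id)
```

      With this modelling, `ship(Draft(...))` is flagged by the type checker before anything runs - the cheap, iteration-friendly static elimination of footguns I mean.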

      1. 3

        in software it’s not just that changes are easier, it’s that in practice specification is almost impossible - “requirements” come from people who don’t know what they want, can’t express themselves with precision, and struggle to understand that they’re asking for things that are contradictory

        This is definitely true, but I suspect the author isn’t listing it as a separate problem because it also stems from the ease of making changes. (Caveat: what follows is my limited experience; unlike Rajiv Prabhakar, the article’s author, I’ve never worked at Sun or Intel.)

        In my experience, hardware companies, just like software companies, have no shortage of people who, if confronted with a technical problem, wouldn’t know what they want, couldn’t express themselves with precision, and couldn’t understand when they’re asking for contradictory things. It’s not like hardware companies only employ engineers, after all.

        However, because the costs involved in making a wrong call are so much higher, these people usually don’t get into first-line product management, or engineering management, or engineering team lead positions. There are plenty of career paths that can take them from sales/marketing/finance/whatever to a senior management position, but technical and non-technical lanes are a lot more, uhm, parallel. They do meet up at the top, to some degree. It’s not that uncommon to find head honchos who have a good understanding of how the industry works and where it’s headed, which they gained from working sales, not engineering. That also works in software. But it’s far less common to find these people in a position where they can make low-level technical calls, or force low-level technical choices upon their teams.

        This is the root from which many, many other “cultural” differences stem – in terms of accountability, reliability engineering (or, more often than not, “reliability” “engineering”), teamwork, professional development, career paths and many, many, many others. It’s a whole other world.

        1. 1

          That sounds quite nice. I wonder how best a software engineer can get into such a company.

          1. 3

            Well, virtually all major hardware vendors employ both hardware and software engineers these days, and many smaller ones do, too. If you’re selling CPUs, MCUs or DSPs, you’re selling soulless silicon, it’s just a fancy lump of sand without software – you need to maintain compilers, IDEs, write drivers, demo programs, deliver Linux BSPs and so on. Even if you’re not selling things that run code, lots of hardware vendors out there maintain drivers for their own devices. Analog Devices does a lot of cool work in the Linux IIO subsystem, for example. Or, if you don’t care that much for the “software” part, you can always get an EE degree and get into it the same way you get into a software engineering job.

            Then again, there are two sides to every coin, and this is no exception. For example, some hardware vendors treat some (or even all) of their software teams very much like cost centres, which is why so many of them sell excellent hardware but have truly abysmal software. Also, the culture of an industry is very much an industry thing, not strictly a company thing. Even if you’re technically a hardware company, you’re still mostly recruiting software engineers and managers from the software world, and you’re recruiting them along with the software culture. Lots of hardware companies deal with it by just letting the software people do their thing.

            No matter where you work, the grass always looks greener on the other side. Lots of hardware engineers aren’t completely happy with how things are done in their field, either (I speak about them in the 3rd person because I haven’t done much EE work in a while, and I guess I wasn’t that good at it, either). For all the things it gets right, the hardware industry isn’t exactly the engineering perfection utopia that many software engineers fed up with all the move fast and break things bullshit imagine it is. The same industry that gives us power-efficient ARM chips also gives us consumer products that break if you so much as look at them menacingly, and that’s not always a design choice, it’s often the same kind of incompetence we’re familiar with from the software world.

      2. 1

        In these circumstances, testing is likely a waste of time.

        I disagree. I write code, and I write tests to ensure I wrote what I meant to write. It doesn’t matter if my tests match some spec. When I wrote the code I intended it to do something, and not do other things.
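
        A trivial made-up example of what I mean (the function and its behaviour are my invention, not anything from the article): the test records my intent, spec or no spec.

```python
# A made-up example: the test pins down what I intended the code to do
# (and not do), independent of any external specification.
def slugify(title: str) -> str:
    # Intended behaviour: lowercase, spaces become hyphens, nothing else.
    return title.lower().replace(" ", "-")

def test_slugify_matches_intent() -> None:
    assert slugify("Hello World") == "hello-world"  # what I meant it to do
    assert slugify("already-ok") == "already-ok"    # ...and not mangle this
```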

    2. 2

      Excellent article. The discussion of randomness could be enhanced with pointers to QuickCheck and comparable tools in other languages (I use Hypothesis in Python).
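
      For readers who haven’t met these tools: the core idea can be hand-rolled in a few lines of stdlib Python. This is only a sketch of the concept - the real QuickCheck and Hypothesis add input shrinking, smarter generators and replayable failures.

```python
# Minimal sketch of property-based testing: instead of a few hand-picked
# cases, check that a property holds over many randomly generated inputs.
import random

def check_property(prop, gen, trials=200, seed=0):
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    for _ in range(trials):
        case = gen(rng)
        assert prop(case), f"property failed for {case!r}"

# Property: sorting is idempotent and preserves length.
check_property(
    prop=lambda xs: sorted(sorted(xs)) == sorted(xs)
                    and len(sorted(xs)) == len(xs),
    gen=lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))],
)
```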

    3. 1

      I had hoped to learn new stuff in this article, but unfortunately everything listed here is already known. It all amounts to: if you had more time (thus, money) to test it all, it would be better. While I mostly agree with everything said, there is the unspoken topic of fossilisation: your test suite is so solid that trying to move the SUT off its original path is almost impossible without losing most of the assertions that made it trustworthy. In the end, it looks like hardware is a far slower-moving target than software, and immobile stuff is far less costly (and yet extremely expensive: six-figure salaries, he said?) to cover in depth. Imagine applying that to fast-moving targets!

      He also spoke of 30-second feedback loops for end-to-end tests? While I’m sure they exist, the vast majority of development I know requires backing services that sometimes take 30 seconds just to start responding correctly. Imagine a few hundred of those and you understand why people favor something else.

      1. 2

        there is the unspoken topic of fossilisation: your test suite is so solid that trying to move the SUT out of the original path is almost impossible without losing most of the assertions that made it trustworthy

        If making changes invalidates large numbers of tests, your tests were poorly designed to begin with. This can happen, but it’s certainly not a necessary byproduct of having lots of tests.
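
        To illustrate the distinction with a made-up example (names and code are mine): a test coupled to internals fossilises the implementation, while a test of observable behaviour survives a rewrite.

```python
# Sketch: the same function, with a brittle assertion and a robust one.
def word_counts(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for w in text.split():
        counts[w] = counts.get(w, 0) + 1
    return counts

# Brittle: pins the exact repr (and thus insertion order), so swapping the
# implementation for, say, collections.Counter could break it:
#   assert repr(word_counts("a b a")) == "{'a': 2, 'b': 1}"

# Robust: asserts only the behaviour callers rely on, stable across rewrites.
assert word_counts("a b a") == {"a": 2, "b": 1}
assert word_counts("") == {}
```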