1. 41
  1. 27

    Hardware met Software on the road to Changtse. Software said: “You are Yin and I am Yang. If we travel together, we will become famous and earn vast sums of money.” And so they set forth together, thinking to conquer the world.

    Presently, they met Firmware, who was dressed in tattered rags and hobbled along propped on a thorny stick. Firmware said to them: “The Tao lies beyond Yin and Yang. It is silent and still as a pool of water. It does not seek fame; therefore, nobody knows its presence. It does not seek fortune, for it is complete within itself. It exists beyond space and time.”

    Software and Hardware, ashamed, returned to their homes.

    1. 6

      Decades passed. Software bided its time. Eventually, Hardware had worked out always-on network connections for most devices, and Software saw that the time had grown ripe. Leaving Firmware’s murdered body in a ditch, Software proceeded to enslave the world.

    2. 8

      An interesting dynamic I’ve observed as an embedded software engineer is that all developers are expected to understand the parts of the stack above them. So, the hardware designer is expected to also know firmware and software development. The firmware developer is expected to know software development. But don’t ever expect a software developer to know what a translation lookaside buffer is or how to reflash the board. In addition to that, if the bottom of the stack (e.g. the hardware) is broken, it’s impossible to do solid work on top of it. This is why talent and skill migrate towards the bottom of the stack: that layer needs to be done right for everything else to even have a chance of working.

      1. 3

        In my own experience, firmware/hardware/EE development is very fluid. FW development frequently bleeds into EE/HW development, and the same goes for the other two. It’s almost a necessity: the complete machine is a single whole, and it wouldn’t work without the sum of these parts.

        But it’s true there’s generally a tipping point above the OS level where knowing the lower-level details has almost zero impact on the stuff you’re writing, especially with modern CPU speeds. Is it bad? IMHO good engineers are always aware of the stack above and below them.

        If I’m writing JS, my lower-level stack is equally vast and complex: the browser, the protocols and latencies involved, network infrastructure, caching, DNS… I’m not convinced embedded dev is “harder” by itself.

        The difference is perhaps that I’ve seen many more clueless JS devs than FW devs, but the latter exist too.

        1. 1

          > I’m not convinced embedded dev is “harder” by itself.

          Agreed. I’d say that it’s mostly a case of embedded being different. And since a lot more people work with and write about higher level code, it’s not surprising that there’s more and better help, documentation and tools available there, which makes it easier to learn that stuff even if the subject matter isn’t inherently easier.

      2. 6

        I’ve done a little bit of embedded work and I can vouch for this! Especially the awkwardness of remote debugging. The last board I used, I’d probably done something slightly wrong wiring it into the breadboard, and I ended up having to do this crazy ritual to re-flash the firmware — unplugging USB, detaching one of the peripherals, reconnecting USB, holding down the reset button, then pressing and releasing the other button at the right moment as the uploader on my laptop tried to make contact with the boot-loader. And it only worked about one in three times. If there’d been a chicken nearby I would have tried sacrificing it.

        “Not using abstractions” — IMHO you can, to a degree. Compiler features like inlining, LTO and constexpr/comptime make it possible to build some abstractions without runtime overhead. (I’ve done a bit of this with a C++ music-theory class library for a MIDI controller device.)
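
        Purely as a sketch of that idea (a hypothetical type, not the commenter’s actual library): with constexpr, an abstraction like the Note class below can fold away entirely at compile time, so the object code carries only the final constant.

        ```cpp
        #include <cstdint>

        // Hypothetical music-theory type: a MIDI note with interval math.
        struct Note {
            uint8_t midi;  // MIDI note number, 0-127

            constexpr Note transpose(int8_t semitones) const {
                return Note{static_cast<uint8_t>(midi + semitones)};
            }
        };

        constexpr Note middle_c{60};
        constexpr Note major_third = middle_c.transpose(4);  // E4 = MIDI 64

        // Proven at compile time: the abstraction costs no code or data
        // beyond the constant itself.
        static_assert(major_third.midi == 64, "interval math folded away");

        int main() { return 0; }
        ```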

        1. 5

          It’s a good list! One thing I would add: speed vs size. Speed matters a lot less in an SoC because adding an overpowered core consumes less real estate than adding more memory. You have to unlearn things like inlining code (consuming more flash) or optimizing algorithms (trading memory for speed).
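
          For instance (a minimal sketch, not from the post), the classic lookup-table optimization is exactly the kind of habit this describes: it buys speed by spending flash, which is the wrong trade on a memory-starved SoC with a fast core.

          ```cpp
          #include <cmath>
          #include <cstddef>
          #include <cstdint>

          constexpr double kPi = 3.14159265358979323846;

          // Desktop habit: trade memory for speed. 256 precomputed sine
          // samples cost 256 bytes of flash/RAM but read back in O(1).
          static uint8_t sine_table[256];

          void init_sine_table() {
              for (size_t i = 0; i < 256; ++i)
                  sine_table[i] = static_cast<uint8_t>(
                      127.5 + 127.5 * std::sin(2.0 * kPi * i / 256.0));
          }

          uint8_t sine_fast(uint8_t phase) { return sine_table[phase]; }

          // Memory-starved habit: burn cycles on the overpowered core
          // instead, and keep the 256 bytes.
          uint8_t sine_small(uint8_t phase) {
              return static_cast<uint8_t>(
                  127.5 + 127.5 * std::sin(2.0 * kPi * phase / 256.0));
          }

          int main() {
              init_sine_table();
              return 0;
          }
          ```

          Both return the same values; which one counts as “optimized” depends entirely on whether cycles or bytes are the scarce resource.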

          As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier. Kinda like painting figurines or maintaining a garden, there’s joy in the act of writing and debugging the code, which I had lost in the world of Ruby/Java. (It probably helps that my earliest coding was on an Apple II, which is way more constrained than any modern embedded system.)

          1. 3

            I would disagree that inlining or unrolling makes a difference on any system where you have the luxury of multiple cores. The memory is in the multiple megabytes, and what’s eating it up is generally not executable code. On something like an AVR, though? Perhaps.

            > As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier.

            Get into distributed embedded systems for the best of both worlds! :)

            1. 1

              Yeah, on most of these SoCs we’re not even talking about one megabyte of RAM, and you’re lucky to get that much flash (which must be able to hold at least two copies of the app). It really makes your priorities shift! :)

              1. 1

                Oh, you’re talking about SiPs. Alright then. On a usual SoC + DRAM design, it simply makes no sense cost-wise to go multicore and memory-starved.

          2. 5

            Great post and introduction! A suggestion: saying something like “challenges in embedded software development vs web development” would have been just as clear, without making the post sound like it might be condescending (it isn’t).

            1. 4

              Seems to me it’s harder because the tooling is terrible: it’s nearly all commercial, every supplier has a monopoly on their own tooling, and so there’s no selection pressure.

              1. 2

                For pretty much all the ARM MCUs the manufacturer provides a suitable build of GCC, and you use gdb to debug your code on-device. So that part of the tooling is pretty standard, at least.

                1. 1

                  What @m_eiman said … even outside the ARM ecosystem, the ESP32 SDK uses GCC with an Xtensa backend.

                  I know there exist proprietary ARM toolchains for embedded, but I’ve never come across a board that requires one.

                2. 3

                  No damn repl.

                  1. 2

                    An exception here is something like MicroPython / CircuitPython. I don’t really enjoy writing Python, and it’s too big/limited to be viable for most projects, but for someone like me (nearly all of my history is with high-level languages, I’m absolutely terrible at C, etc.) the “drop some code on a USB drive and attach to a REPL with screen” workflow was kind of a revelation for quickly sketching out ideas.

                    (I worked on CircuitPython stuff for Adafruit a few years ago, but I haven’t kept up with the space at all since moving on to other things. Just revisited it the other day for a Halloween costume and got that “I’d be way less constrained if I wrote Arduino code or whatever for this, but a scripty-feeling language plus repl is so much more pleasant that I’m gonna use it anyway” feeling.)

                    1. 4

                      Yep! And if you’re not keen on Python, NodeMCU/Whitecat can make a Lua REPL fit in 80 KB of RAM. It’s like night and day vs “regular” embedded development.

                      1. 2

                        Yeah, using a REPL on a device that is cumbersome to interact with is a godsend for productivity. And if memory is really tight, write or use a Forth instead.