1. 16

  2. 25

    Nah.

    Libraries have bugs, silicon has bugs, external systems have bugs, but no matter whose fault it is, for a mission-critical device I have to follow the bug to wherever it is and fix it.

    And no, I can’t wait for upstream to make a new release.

    If anything the industry is going the other way.

    Yes, we pull in gigabytes of source code from OpenEmbedded. That just means I have gigabytes of source code to go hunting and fixing and enhancing.

    I don’t feel in the least bit extinct.

    The difference is in the amount of functionality I can offer the customer. Literally orders of magnitude more than the good old bad old days of hand craft it ourselves.

    1. 3

      And still, almost all embedded houses I have been contracted to were in the “we do everything by ourselves, we will need something of our own down the road” mentality. There is so much untapped potential to be unearthed with open source that I have difficulty seeing embedded developers dying out soon.

    2. 10

      (Warning: not an embedded guy but read embedded articles. I could be really wrong here.)

      “This big push is causing a vacuum in which companies can’t find enough embedded software engineers. Instead of training new engineers, they are starting to rely on application developers, who have experience with Windows applications or mobile devices, to develop their real-time embedded software.”

      I don’t know. I was looking at the surveys that Jack Ganssle posts. They seem to indicate that current embedded developers are expected to just pick up these new skills like they did everything else. They also indicate they are picking up those skills, since using C with a USB library or something on Windows/Linux isn’t nearly as hard as the low-level C and assembly stuff on custom I/O they’ve been doing. I’m sure there are many companies, either new ones that don’t know how to find talent or established ones wanting to go cheap, that may hire non-embedded people and try to get them to do embedded-style work with big, expensive platforms.

      I think the author is still overselling the draught of engineers, given that all the products coming out constantly indicate there’s enough to make them. Plus, many features the engineers will need are being turned into 3rd-party solutions you can plug in, followed by some integration work. That will help a little.

      “developers used “simple” 8-bit or 16-bit architectures that a developer could master over the course of several months during a development cycle. Over the past several years, many teams have moved to more complex 32-bit architectures.”

      The developers used 8-16-bit architectures for low cost, sometimes pennies or a few bucks a chip. They used them for simplicity/reliability. Embedded people tell me some of the ISAs are easy enough that even coding in assembly is productive on the kinds of systems they’re used in. Others tell me the proprietary compilers can suck so bad they have to look at the assembly of their C anyway to spot problems. Also, there’s stuff like WCET analysis. The 8-16-bitters also often come with better I/O options, per some engineers’ statements. The 32-bit cores are increasing in number, though, displacing some of the 8-16-bit MCUs’ market share. This is happening.
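
      The “low-level C on custom I/O” being discussed is mostly direct register twiddling. A minimal sketch of what that looks like (the register name, address comment, and pin number are hypothetical stand-ins for a vendor header; the register is backed by a plain variable here so the sketch runs on a desktop):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* On a real 8-bit MCU this would be a fixed memory-mapped address from
       * the vendor header, e.g. #define PORTB (*(volatile uint8_t *)0x25).
       * Backed by an ordinary variable here so the sketch compiles anywhere. */
      static volatile uint8_t portb_reg = 0;
      #define PORTB portb_reg

      #define LED_PIN 3u /* hypothetical pin number */

      static void led_on(void)     { PORTB |=  (uint8_t)(1u << LED_PIN); }
      static void led_off(void)    { PORTB &= (uint8_t)~(1u << LED_PIN); }
      static void led_toggle(void) { PORTB ^=  (uint8_t)(1u << LED_PIN); }

      int main(void) {
          led_on();
          assert(PORTB == (1u << LED_PIN)); /* bit 3 set */
          led_toggle();
          assert(PORTB == 0);               /* bit 3 cleared */
          led_off();
          return 0;
      }
      ```

      Part of why inspecting the compiler’s output stays feasible is that this style maps nearly one-to-one onto the generated assembly.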

      However, a huge chunk of embedded industry is cost sensitive. There will be, for quite a while, a market for 8-bitters that can add several dollars to tens of dollars of profit per unit to a company’s line. There will always be a need to program them. If anything, I’m banking on RISC-V (32-bit MCU) or J2 (SuperH) style designs with no royalties being the most likely to kill most of the market, in conjunction with ARM’s MCUs. They’re tiny, cheap, and might replace the MIPS or ARM chips in the portfolios of the main MCU vendors. More likely supplement them. This would especially be true if more vendors were putting the 32-bit MCUs on cutting-edge nodes to make the processor, ROM, and RAM as cheap as 8-bitters. We’re already seeing some of that. The final advantage of the 8-16-bitters on that note is that they can be cheap on old process nodes that are also cheap to develop SoCs on, do analog better, and do RF well enough. As in, the 8-16-bit market uses that to create a huge variety of SoC-based solutions customized to the market’s needs, since the NRE isn’t as high as the 28-45nm that 32-bits might need to target. They’ve been doing that a long time.

      Note to embedded or hardware people: feel free to correct anything I’m getting wrong. I’ve just been reading up on the industry a lot to understand it better for promoting open or secure hardware.

      1. 3

        nit: s/draught/drought

        1. 2

          Was just about to post the same thing. I don’t normally do typo corrections, but that one really confused me. :)

        2. 2

          Yup. Jack Ganssle is always a good read. I highly recommend that anyone interested in embedded systems subscribe to his Embedded Muse newsletter.

          Whether the open cores will beat Arm? Hmm. Arm has such a stranglehold on the industry it’s hard to see it happening… on the other hand, Arm has this vast pile of legacy cruft inside it now, so I don’t know what the longer term will be. (Don’t like your endianness? Toggle that bit and you have another. Want hardware-implemented Java bytecode? Well, there is something sort of like that available…)
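
          For anyone who hasn’t bumped into the endianness point above: the toggle changes the byte order the core uses, and when the core and the data format disagree you swap in software instead. A small portable sketch (`swap32` is my own name for illustration, not an Arm API):

          ```c
          #include <assert.h>
          #include <stdint.h>

          /* Reverse the byte order of a 32-bit value -- the software fallback
           * when the core's data endianness doesn't match the data it's handed. */
          static uint32_t swap32(uint32_t x) {
              return ((x & 0x000000FFu) << 24) |
                     ((x & 0x0000FF00u) <<  8) |
                     ((x & 0x00FF0000u) >>  8) |
                     ((x & 0xFF000000u) >> 24);
          }

          int main(void) {
              assert(swap32(0x11223344u) == 0x44332211u);
              assert(swap32(swap32(0xDEADBEEFu)) == 0xDEADBEEFu); /* involution */
              return 0;
          }
          ```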

          Compilers? It’s hard to beat gcc, and that is no accident. A couple of years ago Arm committed to make gcc as performant as their own compiler. Why? Because the larger software ecosystem around gcc sold more Arms.

          “However, a huge chunk of embedded industry is cost sensitive.”

          We will always be pushed to make things faster, cheaper, mechanically smaller, with longer battery life… If you read some of what Ganssle has been writing about the ultra-low-power stuff, it’s crazy.

          Conversely, we’re also always being pushed for more functionality; everybody walks around with a smartphone in their pocket.

          The base expectation these days is smartphone-class size / UI / functionality / price / battery life… which is all incredibly hard to achieve if you aren’t shipping at least 100,000 units…

          “So while universities crank out developers who understand machine vision, machine learning, and many other cutting-edge research areas, the question we might want to be asking is, ‘Where are we going to get the next generation of embedded software engineers?’”

          Same place we always did. The older generation were a ragtag collection of h/w engineers, software guys, old-time mainframers, whoever was always ready to learn more, and more, and more…

          1. 1

            This week’s Embedded Muse addresses this exact article. Jack seems to agree with my position that the article way overstates things. He says the field will be mixed between the kinds we have now and those very far from the hardware. He makes this point:

            “Digi-Key current lists 72,000 distinct part numbers for MCUs. 64,000 of those have under 1MB of program memory. 30,000 have 32KB of program memory or less. The demand for small amounts of intelligent electronics, programmed at a low level, is overwhelming. I don’t see that changing for a very long time, if ever.”

            The architecture, cost, etc. might change, but there will still be tiny MCUs/CPUs for those wanting the lowest watts or cost. And they’ll need special types of engineers to program them. :)

            1. 1

              Thanks for the inside view. Other thing about Jack is he’s also a nice guy. Always responds to emails about embedded topics or the newsletter, too.