1. 38
  1. 22

    This was something I realised when writing Rust on STM32 for the first time: it was like “woah, why is everything so complicated??”. Turns out, microcontrollers have a significant amount of stuff hiding underneath the Arduino libraries…

    1. 21

      Arduino serves its purpose perfectly: attract beginners and let those who don’t want to invest a great deal of effort do things with microcontrollers.

      It was never designed to be a one-size-fits-all or a tool for professional embedded programmers.

      I don’t see the issue.

      1. 11

        I agree completely.

        It’s like people who complain that all those people out there using tablets now (90%?) can’t do as much on a tablet as the author can do on a PC/Mac/Linux.

        No doubt true, but those 90% of people couldn’t do those things on a PC either. And can easily do things on their tablet that they wouldn’t be able to do on a PC.

        1. 6

          I agree, and the author even acknowledges that. It seems to me that the author’s main point is that the world of embedded programming is still complex and fragmented, and it’s hard to hire people who have the experience to navigate it effectively.

          The points about Arduino felt mostly unnecessary.

        2. 20

          For me, it is the poor quality of most libraries (bad abstractions, bad design). On the other hand: before Arduino, it was quite difficult to get started with MCU programming. Arduino allowed many people to do things they would never have done otherwise. So I still think that “world with Arduino” is better than “world without Arduino”. Anyone is free to offer a better, cleaner alternative platform…

          1. 14

            I definitely don’t want to go back to the pre-Arduino days (spending hundreds of dollars on a stupid development board, and before that leaving the university lab at 2 AM because I was a student and didn’t have hundreds of dollars to spend on a stupid development board in the first place), but the Arduino did make the non-technical bits of my embedded development work infinitely more complicated.

            I’ve spent more than I want to remember on crap like:

            • “Fixing” unrealistic development budgets or timelines based on how long it took to prototype something with an Arduino
            • Drafting realistic development budgets for potential customers (or, worse, internal team leads/product managers/whatever) who’d come up with a basic prototype on an Arduino before
            • Turning an Arduino prototype into something that could be sold at a non-zero margin
            • Helping interns and juniors who’d learned basic embedded development on an Arduino learn to do product development for real-life applications
            • Integrating or reviewing poor-quality products that turned out to be Arduino clones, running Arduino code (and therefore buggy libraries written by developers with zero understanding of electrical/electronic engineering) under the hood.

            The Arduino is an excellent platform for people who don’t need to do embedded development and therefore don’t need to learn to do it properly. It’s great for artists who want to do interactive art installations, for example, and have better and more important things to focus on than engineering a cheap but reliable product. It’s not a bad educational platform for teaching some basic interfacing concepts, I guess – it’s certainly easier to explain how you (properly) hook up things to GPIO pins, or how buses like I2C work, when you don’t have to tickle twenty control registers and sacrifice a goat in order to send a bit over a wire.
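
            To make the register-tickling point concrete, here’s a minimal sketch (assuming an ATmega328P-based Uno, where Arduino pin 13 is PB5 – the register names below are the AVR’s, not an Arduino API) of driving the same pin the Arduino way and the bare-metal way:

            ```cpp
            #include <Arduino.h>
            #include <avr/io.h>

            void setup() {
                // Arduino style: the library hides which port and bit are involved.
                pinMode(13, OUTPUT);
                digitalWrite(13, HIGH);

                // Bare-metal equivalent on an ATmega328P (Arduino pin 13 is PB5).
                DDRB  |= (1 << DDB5);    // configure PB5 as an output
                PORTB |= (1 << PORTB5);  // drive PB5 high
            }

            void loop() {}
            ```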

            It is, however, a pretty bad prototyping platform, especially in a world where cheap development boards are the norm. I remember my first job at a fancy start-up of sorts where people prototyped with a bunch of Arduinos and no ICEs and nobody knew what JTAG was, and how amazed everyone was when I showed them you could debug your code with breakpoints rather than blinking LEDs.

            As for using these things, like, in an actual product that you sell to people, I can’t recall ever seeing a non-trivial Arduino library that I’d want to put on a chip that I put in a box that people who could eventually sue me pay money for. I don’t want to say they don’t exist, maybe they do these days, but I can’t recall seeing one back when I still cared (4-5 years ago? I’m not really involved in this particular flavour of embedded development anymore so I’ve lost touch with this a bit…).

          2. 12

            EDIT: I reached out to some friends who work in hardware and they disagreed with the OP. Here are some of their thoughts.

            1. Successful products have been prototyped, experimented with, and deployed on Arduino just fine.
            2. Arduino doesn’t affect portability much if at all. Most of the time porting embedded code to new platforms involves a lot of reworking anyway.
            3. Hidden details are often abstracted away by HALs anyway. Whether that HAL is Arduino or something provided by a vendor (like Microchip or ST), it all works. PlatformIO is also a thing that exists to solve a similar set of problems.
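
            As a rough illustration of point 3, the Arduino-level sketch below builds unchanged for an AVR Uno, a SAMD board, or an ESP32, because pinMode/digitalWrite/delay are themselves the portability layer (LED_BUILTIN is defined by most, though not all, board cores):

            ```cpp
            #include <Arduino.h>

            // The same HAL-level code targets AVR, SAMD, or ESP32 cores;
            // only the board selection in the build changes, not the source.
            void setup() { pinMode(LED_BUILTIN, OUTPUT); }

            void loop() {
                digitalWrite(LED_BUILTIN, HIGH);
                delay(500);
                digitalWrite(LED_BUILTIN, LOW);
                delay(500);
            }
            ```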

            Original Post:

            Arduino has always been a bit more of a “hack-it-together” ecosystem than a “production ready” ecosystem (when Arduino was coming out, I was still in university, and there was a lot of derision that Arduino programming wasn’t “real embedded development”.) There’s certainly room in between, but I haven’t seen too many attempts at creating a platform here in the middle. I think it’s largely because tinkerers are fine with the Arduino platform and production-oriented authors will use/write well-tested, well-engineered libraries. I do think it would be quite interesting to explore the space in the middle, but I’m not sure there’s enough interest out there to create a community around developing/maintaining such a platform. OP obviously has built up their own libraries so that might be a great start.

            1. 1

              I came back to see if anything new had been added to this thread, and for some reason this stuck out to me in your edit, even though I’m pretty sure it wasn’t new:

              PlatformIO is also a thing that exists to solve a similar set of problems.

              Does PlatformIO do anything HAL-like? I quite like it because it lets me use tooling that’s more like what I’m accustomed to with Arduino, and I appreciate that there’s less hidden magic in it. I really like the way it manages getting the right versions of relevant tooling for a particular board, but I felt like (and appreciated that) it didn’t insert itself into the programming interface the same way the Arduino-produced tools do.

              1. 1

                I dug into the docs a bit, and from what I can tell, it seems like you specify the platform you’re targeting in the configuration and can then read/write registers or buses through some Environment abstractions that PlatformIO gives you, but I didn’t dig deep enough to confirm that or to see exactly how it’s done.

                https://docs.platformio.org/en/latest/platforms/atmelavr.html is an example of this with AVR.

                Here’s a link to STM32Cube, ST’s HAL for the STM32 line of µCs: https://www.st.com/en/ecosystems/stm32cube.html
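
                For flavour, here’s roughly what a blink looks like against the STM32Cube HAL (a sketch, assuming an STM32F4/Nucleo-style board where PA5 drives the user LED, and that HAL_Init() and the clocks have already been set up by the usual CubeMX boilerplate):

                ```cpp
                #include "stm32f4xx_hal.h"  // assumes an STM32F4 part and a CubeMX-style project

                void blink_forever(void) {
                    __HAL_RCC_GPIOA_CLK_ENABLE();        // gate the clock to GPIOA before touching it

                    GPIO_InitTypeDef init = {0};
                    init.Pin   = GPIO_PIN_5;             // PA5 is the user LED on many Nucleo boards
                    init.Mode  = GPIO_MODE_OUTPUT_PP;    // push-pull output
                    init.Pull  = GPIO_NOPULL;
                    init.Speed = GPIO_SPEED_FREQ_LOW;
                    HAL_GPIO_Init(GPIOA, &init);

                    for (;;) {
                        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);
                        HAL_Delay(500);                  // relies on SysTick, set up by HAL_Init()
                    }
                }
                ```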

            2. 9

              I didn’t “do” Arduino for the first couple of years I was playing with microcontrollers; it always felt somehow beneath a software engineer of my seniority. I made some cool things, but it was extremely hard work and took a long time.

              Early this year I needed to cook up something simple and decided to try the “easy” path, even though it felt like a cop-out. Since then: a little serial debugger using a monochrome I2C panel for output. A firmware dumper and flasher for a $30 TFT panel controller from AliExpress. An IR remote-controlled USB gamepad to supplement a DDR pad. All this just in the in-between moments of my actual hobbies and work.

              They don’t breed great software engineers — if I’d learned to code this way I’m sure my mental model would take a lot of adjustments — but they sure do let you actually make stuff that works.

              1. 6

                To pile on a tiny bit here, I’m a software person. I’m interested in hardware, and while I’m not a wizard with a soldering iron, I can generally get through-hole components installed and working. I understand enough about hardware not to be lost when someone smart talks about it, to do some minor repairs, and to follow instructions when something I want is either only available as a kit or when only the kit version of a thing fits my budget.

                But my creative expression gets interesting in software, not in hardware.

                The awesome thing about Arduino is it gave me a huge supply of things I could use (with my less-than-half-assed hardware skills) to put together interesting toys that enable my software to interact with physical devices. On a small budget without needing to arrange a group buy because of minimum quantities, even.

                When I wanted to read a couple of temperature sensors and control a heater and a fan based on those readings, there was a shield for that.

                Arduino (and some others that are, in some respects, better, since) put it within reach for me to make software interact with the physical world outside my PC case. Before Arduino came along, I needed to have a product team around just to try out most ideas that I considered interesting.

                It’s not for making products. It lets me play in spaces where products aren’t (yet) reasonable.

              2. 8

                In the Arduino world, everything is done in C++, a language which is almost never used on 8-bit microcontrollers outside of this setting because it adds significant complexity to the toolchain and overhead to the compiled code.

                I don’t buy this. C++ is C with extra features available on the principle that you only pay for what you use. (The exception [sic] being exceptions, which you pay for unless you disable them, which a lot of projects do.)

                The main feature is classes, and those are pretty damn useful; they’re about the only C++ feature Arduino exposes. There is zero overhead to using classes unless you start also using virtual methods.

                The C++ library classes will most definitely bloat your code — templates are known for that — but again, you don’t have to use any of them.
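
                As a minimal sketch of the zero-overhead point (the register names are the ATmega328P’s and the wrapper class is purely illustrative, not an Arduino API): a pin class with no virtual methods compiles, with optimisation enabled, down to the same port writes you’d write by hand.

                ```cpp
                #include <avr/io.h>
                #include <stdint.h>

                // Illustrative wrapper: no virtuals, no heap, no RTTI.
                class OutputPin {
                public:
                    OutputPin(volatile uint8_t& ddr, volatile uint8_t& port, uint8_t bit)
                        : port_(port), mask_(uint8_t(1u << bit)) { ddr |= mask_; }

                    void high() { port_ |= mask_; }            // inlines to a read-modify-write
                    void low()  { port_ &= uint8_t(~mask_); }

                private:
                    volatile uint8_t& port_;
                    uint8_t mask_;
                };

                int main() {
                    OutputPin led(DDRB, PORTB, 5);             // Arduino Uno pin 13 is PB5
                    for (;;) { led.high(); led.low(); }
                }
                ```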

                (Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt cheap and low-power 32-bit SoCs now, what advantage do the old 8-bit ones still have?)

                1. 9

                  (Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt cheap and low-power 32-bit SoCs now, what advantage do the old 8-bit ones still have?)

                  They’re significantly cheaper and easier to design with (and thus less demanding in terms of layout, power supply parameters, fabrication and so on). All of these are extremely significant factors for consumer products, where margins are extremely thin and fabrication batches are large.

                  Edit: as for C++, I’m with the post’s author here – I’ve seen it used on 8-bit MCUs maybe two or three times in the last 15 years, and I could never understand why it was used. If you’re going to use C++ without any of the ++ features except for classes, and even then you still have to be careful not to do whatever you shouldn’t do with classes in C++ this year, you might as well use C.

                  1. 3
                    • RAII is a huge help in ensuring cleanup of resources, like freeing memory.
                    • Utilities like unique_ptr help prevent memory errors.
                    • References (&) aren’t a cure-all for null-pointer bugs, but they do help.
                    • The organizational and naming benefits of classes, function overloading and default parameters are significant IMO. stream->close() vs having to remember IOWriteStreamClose(stream, true, kDefaultIOWriteStreamCloseMode).
                    • As @david_chisnall says, templates can be used (carefully!) to produce super optimized type-safe abstractions, and to move some work to compile-time.
                    • Something I only recently learned is that for (auto x : collection) even works with C arrays, saving you from having to figure out the size of the array in more-or-less fragile ways.
                    • Forward references to functions work inside class declarations.

                    I could probably keep coming up with benefits for another hour if I tried. Any time I’m forced to write in C it’s like being given those blunt scissors they use in kindergarten.
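
                    A tiny self-contained sketch of two of those bullets (the chip-select guard is a made-up example, not from any particular library): RAII releasing a resource at the end of the scope, and range-for over a plain C array.

                    ```cpp
                    #include <cstdio>

                    // Hypothetical SPI chip-select guard: asserted on construction,
                    // released automatically when the scope ends, even on early return.
                    struct ChipSelect {
                        ChipSelect()  { std::puts("CS low");  }
                        ~ChipSelect() { std::puts("CS high"); }
                    };

                    int main() {
                        const int samples[] = {12, 42, 7, 99};

                        ChipSelect cs;              // RAII: impossible to forget the release
                        for (auto s : samples)      // range-for works on C arrays directly
                            std::printf("sample %d\n", s);
                    }                               // ~ChipSelect() runs here
                    ```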

                    1. 2

                      The memory safety/RAII arguments are excellent generic arguments, but there are extremely few scenarios in which embedded firmware running on an 8-bit MCU would be allocating memory in the first place, let alone freeing it! At this level, RAII is usually done by allocating everything statically and releasing resources by catching fire, and not for performance reasons (edit: to be clear, I’ve worked on several projects where no code that malloc-ed memory would pass the linter, let alone get to a code review – where it definitely wouldn’t have passed). Consequently, you also rarely have to figure out the size of an array in “more-or-less fragile ways”, and it’s pretty hard to pass null pointers, too.
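
                      A rough sketch of that “allocate everything statically” style (the UART names are just for illustration): buffers are fixed-size and owned for the whole firmware lifetime, so there is nothing to free and no size to discover at runtime.

                      ```cpp
                      #include <stdint.h>

                      #define RX_BUF_LEN 64u

                      static uint8_t  rx_buf[RX_BUF_LEN];   // lives for the lifetime of the firmware
                      static uint16_t rx_len;

                      // Illustrative ISR-style receive hook: no malloc, no free, no resizing.
                      // Overflow is handled by dropping bytes, not by growing a heap allocation.
                      void uart_rx_byte(uint8_t b)
                      {
                          if (rx_len < RX_BUF_LEN)
                              rx_buf[rx_len++] = b;
                      }
                      ```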

                      The organisational and naming benefits of classes & co. are definitely a good non-generic argument and I’ve definitely seen a lot of embedded code that could benefit from that. However, they also hinge primarily on programmer discipline. Someone who ends up with IOWriteStreamClose(stream, true, kDefaultIOWriteStreamCloseMode) rather than stream_close(stream) is unlikely to end up with stream->close(), either. Also, code that generic is pretty uncommon per se. The kind of code that runs in 8-16 KB of ROM and 1-2 KB of RAM is rarely so general-purpose as to need an abstraction like an IOWriteStream.

                      1. 2

                        I agree that you don’t often allocate memory in a low-end MCU, but RAII is about resources, not just memory. For example, I wrote some C++ code for controlling an LED strip from a Cortex M0 and used RAII to send the start and stop messages, so by construction there was no way for me to send a start message and not send an end message in the same scope.
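
                        That pattern looks roughly like the sketch below (a hedged reconstruction with invented names and APA102-style frames, not the actual code): the guard object sends the strip’s start frame in its constructor and the end frame in its destructor, so the two can never be unpaired within a scope.

                        ```cpp
                        #include <cstddef>
                        #include <cstdint>
                        #include <cstdio>

                        // Stand-in for the real SPI driver: prints bytes instead of clocking them out.
                        static void spi_write(const std::uint8_t* data, std::size_t len) {
                            for (std::size_t i = 0; i < len; ++i) std::printf("%02X ", unsigned(data[i]));
                            std::printf("\n");
                        }

                        class LedFrame {
                        public:
                            LedFrame() {                                  // start frame: 32 zero bits
                                const std::uint8_t start[4] = {0x00, 0x00, 0x00, 0x00};
                                spi_write(start, sizeof start);
                            }
                            ~LedFrame() {                                 // end frame: 32 one bits
                                const std::uint8_t end[4] = {0xFF, 0xFF, 0xFF, 0xFF};
                                spi_write(end, sizeof end);
                            }
                        };

                        void show_red(int n_leds) {
                            LedFrame frame;                               // start frame sent here
                            for (int i = 0; i < n_leds; ++i) {
                                const std::uint8_t led[4] = {0xE1, 0x00, 0x00, 0xFF};  // brightness, B, G, R
                                spi_write(led, sizeof led);
                            }
                        }                                                 // end frame sent here, guaranteed

                        int main() { show_red(3); }
                        ```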

                        1. 1

                          That’s one of the neater things that C++ allows for and I liked it a lot back in my C++ fanboy days (and it’s one of the reasons why I didn’t get why C++ wasn’t more popular for these things 15+ years ago, too). I realise this is more in “personal preferences” land so I hope this doesn’t come across as obtuse (I’ve redrafted this comment 3 times to make sure it doesn’t but you never know…)

                          In my experience, and speaking many years after C++11 happened and I’m no longer as enthusiastic about it, using language features to manage hardware contexts is awesome right up until it’s not. For example, enforcing things like timing constraints in your destructors, so that they do the right thing when they’re automatically called at the end of the current scope no matter what happens inside the scope, is pretty hairy (e.g. some ADC needs to get the “sleep” command at least 50 µs after the last command, unless that command was a one-shot conversion because it ignores commands while it converts, in which case you have to wait for a successful conversion, or a conversion timeout (in which case you have to clear the conversion flag manually) before sending a new command). This is just one example but there are many other pitfalls (communication over bus multiplexers, finalisation that has to be coordinated across several hardware peripherals etc.)

                          As soon as you meet hardware that wasn’t designed so that it’s easy to code against in this particular fashion, there’s often a bigger chance that you’ll screw up code that’s supposed to implicitly do the right thing in case you forget to “release” resources correctly than that you’ll forget to release the resources in the first place. Your destructors end up being 10% releasing resources and 90% examining internal state to figure out how to release them – even though you already “know” everything about that in the scope at the end of which the destructor is implicitly called. It’s bug-prone code that’s difficult to review and test, which is supposed to protect you against things that are quite easily caught both at review and during testing.

                          Also, even when it’s well-intentioned, “implicit behaviour” (as in code that does more things than the statements in the scope you’re examining tell you it does) of any kind is really unpleasant to deal with. It’s hard to review and compare against data sheets/application notes/reference manuals, logic analyser outputs and so on.

                          FWIW, I don’t think this is a language failure as in “C++ sucks”. I’ve long come to my senses and I think it does, but I don’t know of any language that easily gets these things right. General-purpose programming languages are built to coordinate instruction execution on a CPU; I don’t know of any language that lets you say “call the code in this destructor 50 µs after the scope is destroyed”.

                  2. 7

                    While you can of course put a 32-bit SoC on everything, in many cases 8-bitters are simpler to integrate into the hardware design. A very practical point is that many 8-bitters are still available in DIP packages, which makes assembly of smaller runs easier.

                    1. 5

                      Aside: can someone explain why anyone’s still using 8-bit MCUs? There are so many dirt cheap and low-power 32-bit SoCs now, what advantage do the old 8-bit ones still have?

                      They’re dirt cheaper and lower power. 30 cents each isn’t an unreasonable price.

                      1. 3

                        You can get Cortex M0 MCUs for about a dollar, so the price difference isn’t huge. Depending on how many units you’re going to produce, it might be insignificant.

                        It’s probably a question of what you’re used to, but at least for me working with a 32 bit device is a lot easier and quicker. Those development hours saved pay for the fancier MCUs, at least until the number of produced units gets large. Fortunately most of our products are in the thousands of units…

                        1. 9

                          A 3x increase in price is huge if you’re buying lots of them for some product you’re making.

                          1. 4

                            Sure, but how many people buying in bulk are using an Arduino (the original point of comparison)?

                            1. 2

                              I mean, the example they gave was prototyping for a product…

                          2. 6

                            If you’re making a million devices (imagine a phone charger sold at every gas station, corner store, and pharmacy in the civilized world), that $700k could’ve bought a lot of engineer hours, and the extra power consumption adds up with that many devices too.

                          3. 2

                            The license fee for a Cortex M0 is 1¢ per device. Its die area is about the size of a pad on a cheap process, so the combined cost of licensing and fabrication is pretty much as close as you can get to the minimum cost of producing any IC.

                            1. 1

                              The license fee for a Cortex M0 is 1¢ per device.

                              This (ARM licensing cost) is an interesting datapoint I have been trying to get for a while. What’s your source?

                              1. 2

                                A quick look at the Arm web site tells me I’m out of date. This was from Arm’s press release at the launch of the Cortex M0.

                                1. 1

                                  Damn. Figures.

                            2. 1

                              Could you name a couple of “good” 8-bit MCUs? I realized it’s been a while since I looked at them, and it would be interesting to compare my preferred choices to what the 8-bit world has to offer.

                            3. 2

                              you only pay for what you use

                              Unfortunately, many Arduino libraries do use these features – often at significant cost.

                              1. 2

                                I’ve not used Arduino, but I’ve played with C++ for embedded development on a Cortex M0 board with 16 KiB of RAM and had no problem producing binaries that used less than half of this. If you’re writing C++ for an embedded system, the biggest benefits are being able to use templates that provide type-safe abstractions but are all inlined at compile time and end up giving tiny amounts of code. Even outside of the embedded space, we use C++ templates extensively in snmalloc, yet in spite of being highly generic code and using multiple classes to provide the malloc implementation, the fast path compiles down to around 15 x86 instructions.
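
                                A small illustration of the kind of template abstraction being described (a generic pin example, not snmalloc’s code): because the register location and bit are compile-time parameters, the abstraction inlines away to plain read-modify-writes, with no object and no indirection at runtime.

                                ```cpp
                                #include <cstdint>
                                #include <cstdio>

                                // Generic illustration, not snmalloc: the register and bit are
                                // template parameters, so set()/clear() compile to direct writes.
                                template <volatile std::uint32_t* Reg, unsigned Bit>
                                struct FixedPin {
                                    static void set()   { *Reg |=  (1u << Bit); }
                                    static void clear() { *Reg &= ~(1u << Bit); }
                                };

                                // On real hardware Reg would be a peripheral register; here it is a
                                // stand-in variable so the example runs anywhere.
                                volatile std::uint32_t fake_port_register = 0;

                                using StatusLed = FixedPin<&fake_port_register, 5>;

                                int main() {
                                    StatusLed::set();
                                    std::printf("after set:   0x%08X\n", unsigned(fake_port_register));
                                    StatusLed::clear();
                                    std::printf("after clear: 0x%08X\n", unsigned(fake_port_register));
                                }
                                ```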

                              2. 6

                                I’ve got a similar perspective from having done some great prototypes within the Arduino ecosystem. The biggest thing, though, that the author dances around a bit with points 2 and 3 is that the “escape hatch” is brutally bad. If I have a prototype written inside the Arduino ecosystem and want to turn it into a “production ready” firmware build (which may require twiddling microcontroller-specific stuff), I’m going to have a very bad time.

                                PlatformIO bridges this gap a little bit in that I can at least do a reasonable automated CLI-based build, but it still brings along all of the baggage from point 1.

                                I have felt for quite a while that there’s a niche to be filled in between “Arduino handles everything for me” and “Let’s start writing some CMake files and importing CMSIS”. The hardest part of gaining a foothold there will be the momentum the Arduino ecosystem already has. In the last 2-3 years, I’ve started noticing that a fair number of vendors are providing an Arduino SDK as the only SDK they ship with their devices, often with weird hacks baked into the SDK to provide e.g. sleep functionality that still fits into the square hole provided by the user-facing Arduino APIs.

                                1. 5

                                  I always start on Arduino. It’s so easy to go from idea to blinking LED. And it’s easy to switch to a different dev board – even a vastly different platform, going from AVR to ARM M0 to ESP32.

                                  Sometimes I want access to something not exposed through Arduino or the project is complex enough that the Arduino IDE is annoying. Then I switch to the vendor sdk and it’s inevitably a pain.

                                  1. 3

                                    Arduino is for hobbyists, not for production. Actually programming MCUs without abstractions is a PITA. You don’t need to know all that stuff if you just want to blink an LED and read a sensor for your personal project.

                                    1. 2

                                      Marlin (the most popular 3D printer firmware) is moving from Arduino to PlatformIO in order to support 32-bit MCUs, for similar reasons. That said, a lot of 3D printer companies are reluctant to upgrade to something new when they know the old 8-bit code works just fine. Getting started with PlatformIO as a hobbyist who just wants to tweak the firmware’s compile-time settings is also a much steeper climb.