1.  

    For some reason this page doesn’t render in Chrome - I just get the raw HTML source :-(

      1.  

        A little bit overly specific in some places (not all code has a GUI or classes) but overall a great guide to an important and sadly neglected aspect of programming.

        1. 8

          While I think a lot of this advice is fine as far as it goes, this still hews to the model of candidates running a gauntlet of interviews. These have very little in common with real work no matter how much you try to make the questions “work like”.

          If you’re a very large company - a Google or Amazon - it makes sense to use a series of exercises where top performance in the exercises correlates with good job performance, and then try to interview every engineer in the world. Those companies have the scale to spend a lot of effort on interviewing, and cream off people who are not likely to be turkeys. Indeed, at their scale this is probably the most reliable way to do things.

          For the rest of us, we have a different problem - how do we find and attract good people without trying to encounter every engineer, how do we fill the positions we need to fill in a tractable amount of time, and how do we avoid bad people while not filtering out good people (if you’re small you can’t really afford to make a bunch of type I or type II errors). So, the problem to solve for is: how do you figure out what it would be like to work with each candidate, having first filtered them in as possibly having the attributes you need, and then, based on what it would be like to work with them, decide if they still look like the person you need.

          These are some exercises I’ve seen that address this problem relatively directly:

          • Take home exercise: some groups of engineers (e.g. web developers) won’t do them (mostly), while other engineers like data engineers and devops engineers seem to prefer them. These can run unnecessarily long unless the interviewers take the time to time trial it repeatedly internally. This can be done very well by the interviewers, if they actually take time testing and scoping seriously.
          • Ask the engineer how they should be assessed. This has the advantage that the candidate should be at their best, and barring bad administration of the assessment, should be very fair to them. You get information both about how they think from the nature of the exercise and how they carry it out
          • Ask the candidate to explain some code of their own choosing. In my own experience, this is the single best way to figure out if a candidate actually understands what they do, or if they just sort of muddle through things. I’ve seen plenty of candidates who aced all other rounds reveal a fundamental lack of understanding. This also allows the whole team to join in the session. It also allows the whole team to potentially nudge or help out the candidate - this controls for the biases of a single or couple of interviewers

          The one thing I will say is that a single live coding session should occur at some point in the process, as there are a very small number of candidates who will cheat on a take home, and in theory they could prep extensively with a coach for the explanation session.

          1. 2

            Your 3 suggestions are very good. In fact, I think they’d improve things at big tech companies too. There’s no such thing as “a series of exercises where top performance in the exercises correlates with good job performance”, evidently :(

          1. 4

            “At the end of the day, nearly all of us run software on the Linux Kernel. The Linux Kernel is written in C, and has no intention of accepting C++ for good reasons. There are alternative kernels in development that use languages other than C such as Fuschia and MirageOS, but none have yet reached the maturity to run production workloads.”

            The first counter is all the software not running on the Linux kernel. Microsoft, Apple, IBM, Unisys, etc. still exist with their ecosystems. Second, there are two things the author is conflating here with the C justifications, which are too common: the effect of legacy development, and what happens at a later point. For legacy, the software gets written in one language a long time ago, becomes huge over time, and isn’t rewritten for cost/time/breakage reasons. That’s true of the OSes for sure, given their size and heritage. Although the legacy kernels are in C, much of Windows is in C++ and Mac OS X in Objective-C. They intentionally avoided C for those new things.

            Outside those, Burroughs MCP is done in an ALGOL, IBM uses macro-ASMs + PL/S for some of theirs, and OpenVMS has a macro-asm + BLISS in theirs. Those are still sold commercially and in production. For others in the past, there were LISP machines in LISP variants, Modula-2/Oberon systems (with Astrobe still selling it for microcontrollers), Pascal, FreeBASIC, Prime had a Fortran OS, Ada was used from embedded to mainframe OSes, and there are several in Rust now. There’s also the model of doing the systems stuff in a high-level language that depends on a tiny amount of low-level code done in assembly or a low-level language. OS projects were done that way in Haskell and Smalltalk.

            Lots of options. You don’t need C for OSes. The sheer amount of existing OS work, libraries, compiler optimizations, and documentation makes it a good choice. It’s not the only one in widespread use or strictly necessary, though. It’s possible other things will work better for OS developers/users depending on their needs or background.

            1. 2

              I don’t believe the Mac OS X Kernel uses Obj-C. It’s mostly C with the drivers in C++.

              1. 4

                True (source: I used to write kernel extensions for macOS to do filesystem things). NeXTStep/OpenStep had Driver Kit, an ObjC framework for writing device drivers in-kernel, which appeared in early Mac OS X developer previews but not in the released system.

                Specifically IOKit (the current macOS driver framework) is Embedded C++, which is C++ minus some bits like exceptions.

                BTW Cocoa is also a “legacy part”, but it comes from NeXT’s legacy, not Apple’s. IOKit (the C++ driver framework) was new to Mac OS X (it didn’t come from NeXT or Mac OS 8, though clearly it was a thin port of DriverKit) and the xnu kernel was an update of NeXT’s version of CMU Mach 2.5.

                1. 3

                  Yeah, that’s the legacy part. They just keep using what it’s already written in. It was the newer stuff like Cocoa that used Objective-C, IIRC. I should’ve been more clear.

                  1. 1

                    That it has nothing to do with the kernel negates the point. It amplifies the C point, because Darwin is written in C.

                    1. 1

                      It doesn’t at all. I already addressed why the kernel stays in C: someone long ago chose C over the other systems languages, and there’s now so much code that they can’t justify rewriting it. It’s an economic rather than a language effect. Thinking it’s strictly the properties of C is like saying Sun Microsystems wasn’t the cause of the rise of Java, Microsoft wasn’t behind the rise of .NET, and so on. People are still being trained in those languages to add to massive codebases that are too big to rewrite. Same thing that always happens.

                      1. 2

                        big or not (and xnu is biggish), risk is more of a concern than cost. the kernel needs to stay up a few years in between panics and they’ve already got it there for the C code.

                        1. 1

                          poppycock

                  2. 2

                    In addition to all that, I think the systems level domain also has a new growth sector by replacing the kernel with a hypervisor+library combination. Aka, the topic from this talk[1].

                    [1] https://www.youtube.com/watch?v=L-rX1_PRdco

                    1. 2

                      Although I’m still listening to the talk, their robur collective has a great landing page. Lots of good stuff on it. I also like that her team is working on a home router. That’s what I told separation kernel vendors to build, since it’s (a) hugely important, (b) a huge market, and (c) can be done incrementally, starting with virtualized Linux/BSD.

                  1. 12

                    Commodore was spectacular in how well it could snatch defeat from the jaws of victory. The Amiga was the most amazing machine the world had yet seen in 1985, they had possibly the best team of hardware and software engineers in the world, but management just…couldn’t leave it well enough alone.

                    Bizarre decisions like:

                    • The Amiga (later retroactively named the Amiga 1000) had a sidecar expansion port. The Amiga 500 had the same port, but upside down…so that all of the existing peripherals had to be upside down to work. Given how they were designed, it meant that none of them would.
                    • The Amiga 2000 was the first machine that could use the Video Toaster, and the Video Toaster was the killer app for the Amiga. Then they made the Amiga 3000, which could also use the Video Toaster, except that the case was a quarter-inch too short for the Toaster card.
                    • The Amiga 600 had a PCMCIA slot. Except that they rushed to manufacturing using a draft of the PCMCIA spec, rather than waiting for the final specification. The end result was that regular PCMCIA cards often wouldn’t work on the Amiga.
                    • Amiga Unix on the Amiga 3000UX was considered one of the highest-quality SVR4 ports ever. Sun offered to produce the Amiga 3000UX for Commodore as a Sun-branded Unix workstation that could run Amiga software…and Commodore declined.

                    We’d all be using Amigas now if Commodore’s management had literally been anything other than hilariously incompetent, I swear.

                    1. 4

                      Jimmy Maher’s book about the Amiga explores a number of these bizarre decisions and reaches a similar conclusion. The title says it all: The Future Was Here! http://amiga.filfre.net/

                      1. 2

                        Agree with everything except the conclusion, as even less incompetent companies failed, including Sun. Only Apple survived, and even they are now basically producing PCs with their own distro.

                        However, we might have been living in a different future if the Amiga had had an opportunity for a bigger impact. Mine certainly is different, as I went to study mathematics instead of CS because I could not imagine developing software for PCs in the DOS era.

                        1. 1

                          Are you certain that the first 2 issues (upside-down sidecar port & case too short for toaster card) were the fault of management & not engineering?

                        1. 4

                          what happens when disparate applications really do need to know about data in other applications? … Fire off a message saying “CustomerAddressUpdated” and any other application that is concerned can now listen for that message and deal with it as it sees fit.

                          What happens if the message drops?

                          1. 1

                            PubSub should guarantee delivery.

                            1. 3

                              And/or you can make it so that the application maintains the events as part of its service. Then, if there’s an outage, as part of the recovery you can go read the event log and update the data as required. Any solution where event messages aren’t ephemeral will do, I think. I also think “not being able to emit an event message” should also be treated like a fairly critical incident, if you go down that path. I think many things.

                              1. 2

                                How? The application can crash between updating the data and publishing the message.

                                1. 1

                                  Ah, you mean the publishing app - I’d thought you meant the subscriber earlier. Treat messages the way an offline email client treats newly composed email: stick them in a queue to be sent and only remove items from the queue once read receipts have been received for them. This requires messages to be idempotent, of course.
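
                                  Roughly, a minimal sketch of that store-and-forward idea in Node.js; the on-disk queue file and the publish callback are assumptions rather than any particular library’s API:

                                  // Minimal sketch: persist an outgoing message before attempting delivery, and
                                  // only drop it once `publish` resolves (the broker's "read receipt").
                                  const fs = require('fs');
                                  
                                  const QUEUE_FILE = 'outbox.json';   // assumed location of the on-disk queue
                                  
                                  function load() {
                                    try { return JSON.parse(fs.readFileSync(QUEUE_FILE, 'utf8')); }
                                    catch (e) { return []; }          // no queue yet
                                  }
                                  
                                  function save(queue) {
                                    fs.writeFileSync(QUEUE_FILE, JSON.stringify(queue));
                                  }
                                  
                                  function enqueue(message) {         // enqueue first, so a crash cannot lose the message
                                    const queue = load();
                                    queue.push(message);
                                    save(queue);
                                  }
                                  
                                  async function flush(publish) {     // publish: msg -> Promise that resolves on broker ack
                                    const queue = load();
                                    while (queue.length > 0) {
                                      await publish(queue[0]);        // a crash here means a redelivery, hence idempotency
                                      queue.shift();
                                      save(queue);
                                    }
                                  }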

                                  1. 2

                                    That doesn’t address my question, though: there are two actions happening: update the data in the DB and tell people about it. The app can fail between the first and second.

                                    1. 2

                                      The message creation (e.g. a Postgres-backed queue) can be part of the data transaction. Otherwise you need 2PC to guarantee the operation across the two subsystems. https://en.wikipedia.org/wiki/Two-phase_commit_protocol
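
                                      A minimal sketch of that in Node.js with the pg client; the customers and outbox tables are assumptions for illustration, not a prescription:

                                      // Minimal sketch: write the business data and the outgoing event in one
                                      // Postgres transaction, so either both are committed or neither is.
                                      const { Pool } = require('pg');
                                      const pool = new Pool();            // connection settings come from the PG* env vars
                                      
                                      async function updateAddress(customerId, address) {
                                        const client = await pool.connect();
                                        try {
                                          await client.query('BEGIN');
                                          await client.query('UPDATE customers SET address = $1 WHERE id = $2',
                                                             [address, customerId]);
                                          await client.query('INSERT INTO outbox (topic, payload) VALUES ($1, $2)',
                                                             ['CustomerAddressUpdated', JSON.stringify({ customerId, address })]);
                                          await client.query('COMMIT');   // data change and event become visible together
                                        } catch (err) {
                                          await client.query('ROLLBACK');
                                          throw err;
                                        } finally {
                                          client.release();
                                        }
                                      }
                                      
                                      // A separate relay process can then poll the outbox table, publish each row to
                                      // the broker, and delete (or mark) it only after the broker accepts it.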

                              1. 1

                                Hm, my router is on the new list. I’ll check for the https -> http behavior tonight.

                                1. 1

                                  So is mine :( I bought a new router that isn’t on the list. It’s a Motorola router and none of those seem to be listed but I don’t know if that’s because they aren’t vulnerable or because they weren’t tested.

                                  1. 2

                                    Update: mine is not on the list. Similar model number, different manufacturer.

                                    Still, it highlights a problem that I’ve been aware of for some time. Since my DSL router is issued by and managed by my ISP, I can’t tell if it is infected. I know it runs Linux inside (the UI mentions several Linux-specific terms), but I don’t have root on it. I don’t have a shell at all.

                                    I could capture packets on the ethernet ports or the WiFi interface, but I can’t observe data on the WAN interface.

                                    My conclusion thus far is that I need to break into it myself, just to check if anyone else has done so. Should be easy, since it has tons of services (even Samba for the share-usb-storage-device-with-lan feature) but, I’ve got half a dozen other projects queued up…

                                    If I had a magic wand (and I’d already used it to solve all the more pressing problems) I’d use it to convince my ISP to offer a service which records all the traffic my DSL interface receives or creates. The service would have a web interface which allows me to activate/deactivate it. The service would save the packet capture to a file, gzip it for me, and let me download it (after turning off the capture, of course!). The service might be called ‘virtual mirror port’ or something.

                                1. 2

                                  s/Compiled/Parsed but this is ingenious.

                                  1. 1

                                    The first part reminds me of go vet. The 2nd reminds me of a tool a coworker wrote to thank volunteers who helped her with her project. I’d been helping with it for 2 years and really appreciated the heartfelt individualized thank-you emails she’d been sending. Then one day she showed me how she did it: a program that read a list of names & email addresses and sent the thank-you emails; mail merge, basically! I was a little disappointed but also rather impressed.

                                    1. 1

                                      Not a week too soon ;-)

                                      1. 2

                                        Some of the issues listed have already been closed!

                                        1. 0

                                          Feels like a weak argument. I wonder if the author will agree to eat their hat if Waymo gets cars driving the general public around in Phoenix by the end of the year…

                                          1. 2

                                            Bold claims require bold hats being eaten.

                                            1. 1

                                              Italic hats taste better

                                          1. 5

                                            I really liked the writeup, but must ask: why must there be hypergrowth?

                                            1. 1

                                              Sometimes when you hit product-market fit, the demand is so high that there’s no way around it.

                                              1. 1

                                                There is always a way around it, and it’s pretty simple: raise the price till the demand fits your productivity.

                                                1. 1

                                                  Oh, I think your explanation above captures why that’s not always going to be the optimal solution ;-p

                                                  1. 2

                                                    I think your explanation above captures why that’s not always going to be the optimal solution ;-p

                                                    It largely depends on what you want to optimize.

                                                    If you want to optimize the profit of the company:

                                                    1. hypergrowth is a risky all-or-nothing approach: it could lead to a monopoly in the long term, but the probability of failure is much higher
                                                    2. raising the price is a less risky approach that optimizes your profit enough to diversify your investments

                                                    Now the whole goal of economics is to distribute resources in an efficient way.

                                                    Hypergrowth does not achieve that goal, neither when it succeeds in creating a monopoly nor when it fails to. So one might argue that any hypergrowing company should be split, to optimize the efficiency of the system.

                                              2. 0

                                                Because of capitalism at global scale.

                                                It’s a continuous race to reach an overwhelming market share, so that you can destroy or buy any competitor.

                                                You need the hypergrowth because there’s little space left for the second and no place for the third. And if no one is fast enough, the biggest players settle into a strong oligopoly: if you are not in, you are out.

                                                This also requires a strong cultural pressure towards efficiency and productivity as core moral values: the whole article talks about people as if it were talking about chickens on a farm.

                                                Profit is THE value, in itself.
                                                It’s not a measure of the positive social impact of the company, as it was in early economic theories. Even the hypocrisy of looking good is removed.

                                                Note that I liked the article, as a pretty good example of the efficiency of capitalism in the software engineering field.

                                              1. 5

                                                Examples of major changes:

                                                generics?

                                                simplified, improved error handling?

                                                I am glad to see they are considering generics for Go2.

                                                1. 5

                                                  Russ has more background on this from his Gophercon talk: https://blog.golang.org/toward-go2

                                                  The TL;DR for generics is that Go 2 is either going to have generics or is going to make a strong case for why it doesn’t.

                                                  1. 1

                                                    As it should be…

                                                    1. 1

                                                      Glad to hear that generics are very likely on the way from someone on the Go team.

                                                      The impression I got was that generics were not likely to be added without a lot of community push in terms of “Experience Reports”, as mentioned in that article.

                                                      1. 1

                                                        They got those :)

                                                    2. 1

                                                      Wouldn’t generic types change Go’s error handling too? I mean that when you can build a function that returns a Result<Something, Error> type, won’t you use that instead of returning Go1 “tuples” ?

                                                      1. 5

                                                        For a Result type, you either need boxing, or a sum type (or a union, with which you can emulate a sum type), or to pay the memory cost of both the value and the error. It’s not automatic with generics.

                                                        1. 1

                                                          I see, thanks for clarifying! :)

                                                        2. 1

                                                          As I understand it, Go has multiple return values and does not have a tuple type, so I’m not sure how your example would work. There are some open tickets looking at improving the error handling, though.

                                                      1. 4

                                                        …or just use UNIX sockets.

                                                        1. 1

                                                          But that only works for comms b/w procs on the same machine!

                                                          1. 1

                                                            Which is what the article is about (as well as ephemeral ports).

                                                            1. 1

                                                              I thought it was about using WebSockets. Did I miss something?

                                                              1. 6

                                                                No more than the article is about Ruby.

                                                                Ephemeral port exhaustion only happens when using TCP; if you are proxying to localhost, then UNIX or anonymous sockets are a far better option, and they also have less overhead.

                                                                1. 2

                                                                  I was wondering, is there any downside of binding to UNIX sockets instead of regular TCP ones?

                                                                  1. 4

                                                                    Other than it being a host-local-only socket, not really, though portability to Windows might be important to you. Maybe you are fond of running tcpdump to packet-capture the chit-chat between the front and backends, and UNIX sockets would prevent this; though if you are doing that, you are probably just as okay with using strace instead.

                                                                    From a developer perspective, instead of connecting to a TCP port you just connect to a file on your disk; the listener creates that file when binding to the UNIX socket, and nothing else is different. The only confusing gotcha is that you cannot ‘re-bind’ if the UNIX socket file already exists on the filesystem, for example when your code bombed out and was unable to mop up. Two ways to handle this:

                                                                    1. unlink() (delete) any previous stale UNIX socket file before bind()ing (or starting your code); most do this, as do I (a minimal sketch follows this list)
                                                                    2. use abstract UNIX sockets, which work functionally identically but do not create files on the filesystem, so there is no need to unlink. You need to take care though on the naming of the socket, as all the bytes in sun_path contribute to the reference name, not just the bytes up to the NUL termination
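
                                                                    For option 1, a minimal Node.js sketch (the socket path and handler are assumptions; with the systemd socket-activation setup below, systemd creates the socket instead and the service never binds the path itself):

                                                                    // Sketch of option 1: clear any stale socket file left by a crash, then bind to it.
                                                                    const fs = require('fs');
                                                                    const http = require('http');
                                                                    
                                                                    const SOCKET_PATH = '/tmp/myapp.sock';    // assumed path, purely for illustration
                                                                    
                                                                    try {
                                                                      fs.unlinkSync(SOCKET_PATH);             // remove the stale socket file if present
                                                                    } catch (err) {
                                                                      if (err.code !== 'ENOENT') throw err;   // fine if it simply was not there
                                                                    }
                                                                    
                                                                    http.createServer(function (req, res) {
                                                                      res.end('hello over a UNIX socket\n');
                                                                    }).listen(SOCKET_PATH);                   // listen() accepts a path instead of a port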

                                                                    Personally, what I have found works with teams (for an HTTP service) is that in development the backend presents itself as a traditional HTTP server listening over TCP, enabling everyone to just use cURL, their browser directly, or whatever they like. In production, though, a flag is set (well, I just test whether STDIN is a network socket) to go into UNIX socket/FastCGI mode.

                                                                    As JavaScript/Node.js is effectively a lingua franca around here, this is what that looks like:

                                                                    $ cat src/server.js | grep --interesting-bits
                                                                    const http = require('http');
                                                                    const fcgi = require('node-fastcgi');
                                                                    
                                                                    const handler = function(req, res){
                                                                      ...
                                                                    };
                                                                    
                                                                    const server = fcgi.isService()
                                                                      ? fcgi.createServer(handler).listen()
                                                                      : http.createServer(handler).listen(8000);
                                                                    
                                                                    server.on('...', function(){
                                                                      ...
                                                                    });
                                                                    
                                                                    $ cat /etc/systemd/system/sockets.target.wants/myapp.socket 
                                                                    [Unit]
                                                                    Description=MyApp Server Socket
                                                                    
                                                                    [Socket]
                                                                    ListenStream=/run/myapp.sock
                                                                    SocketUser=www-data
                                                                    SocketGroup=www-data
                                                                    SocketMode=0660
                                                                    Accept=false
                                                                    
                                                                    [Install]
                                                                    WantedBy=sockets.target
                                                                    
                                                                    $ cat /etc/systemd/system/myapp.service
                                                                    [Unit]
                                                                    Description=MyApp Server
                                                                    Before=nginx.service
                                                                    
                                                                    [Service]
                                                                    WorkingDirectory=/opt/myorg/myapp
                                                                    ExecStartPre=/bin/sh -c '/usr/bin/touch npm-debug.log && /bin/chown myapp:myapp npm-debug.log'
                                                                    ExecStart=/usr/bin/multiwatch -f 3 -- /usr/bin/nodejs src/server.js
                                                                    User=myapp
                                                                    StandardInput=socket
                                                                    StandardOutput=null
                                                                    #StandardError=null
                                                                    Restart=on-failure
                                                                    ExecReload=/bin/kill -HUP $MAINPID
                                                                    ExecStop=/bin/kill -TERM $MAINPID
                                                                    
                                                                    [Install]
                                                                    WantedBy=multi-user.target
                                                                    

                                                                    The reason for multiwatch in production is you get forking and high-availability reloads. Historically I would have also used runit and spawn-fcgi but systemd has made this no longer necessary.

                                                                  2. 1

                                                                    Agreed.

                                                                2. 1

                                                                  Local load balancing is the motivating example, but I wrote it to highlight the general problem of load balancing a large number of connections across a small number of backends (potentially external machines).

                                                                  UNIX sockets might be a reasonable solution to the particular problem in the post. It’s not something I’ve tried with HAProxy before though, so I’m not sure how practical it would be.

                                                            1. 1

                                                              DQN seems like an outlier.

                                                              1. 1

                                                                “I’m thinking of a larger program of work in which an entire solution has to be designed and delivered, (more or less) complete. Such projects have distinct beginning, middle and end phases, all of which must complete successfully if the project is to succeed.”

                                                                AFAIK, that’s not really a thing to which one applies Agile.

                                                                1. 3

                                                                  Why not? My company just did this for a year-long project, and it has worked out spectacularly well. We are nearing the end of the project, and everyone is really pleased at how it has turned out and how well by-the-book Scrum[1] worked for the process.

                                                                  [1] I prefer Kanban, but doing Scrum by the book is actually pretty good compared to Flaccid Scrum

                                                                  1. 2

                                                                    That’s great to hear!

                                                                1. 10

                                                                  (Warning: not an embedded guy but read embedded articles. I could be really wrong here.)

                                                                  “This big push is causing a vacuum in which companies can’t find enough embedded software engineers. Instead of training new engineers, they are starting to rely on application developers, who have experience with Windows applications or mobile devices, to develop their real-time embedded software.”

                                                                  I don’t know. I was looking at the surveys that Jack Ganssle posts. They seem to indicate that the current embedded developers are expected to just pick up these new skills like they did everything else. They also indicate they are picking up these skills since using C with a USB library or something on Windows/Linux isn’t nearly as hard as low-level C and assembly stuff on custom I/O they’ve been doing. I’m sure there’s many companies that are either new not knowing how to find talent or established wanting to go cheap either of whom may hire non-embedded people trying to get them to do embedded-style work with big, expensive platforms.

                                                                  I think the author is still overselling the draught of engineers, given that all the products constantly coming out indicate there are enough people to make them. Plus, many features the engineers will need are being turned into 3rd-party solutions you can plug in, followed by some integration work. That will help a little.

                                                                  “developers used “simple” 8-bit or 16-bit architectures that a developer could master over the course of several months during a development cycle. Over the past several years, many teams have moved to more complex 32-bit architectures.”

                                                                  The developers used 8-16-bit architectures for low cost, sometimes pennies or a few bucks a chip. They used them for simplicity/reliability. Embedded people tell me some of the ISAs are easy enough that even assembly coding is productive on the kinds of systems they’re used in. Others tell me the proprietary compilers can suck so badly they have to look at the assembly of their C anyway to spot problems. Also, stuff like WCET analysis. The 8-16-bitters also often come with better I/O options or something, per some engineers’ statements. The 32-bit cores are increasing in number, displacing some of the 8-16-bit MCUs’ market share, though. This is happening.

                                                                  However, a huge chunk of embedded industry is cost sensitive. There will be, for quite a while, a market for 8-bitters that can add several dollars to tens of dollars of profit per unit to a company’s line. There will always be a need to program them. If anything, I’m banking on RISC-V (32-bit MCU) or J2 (SuperH) style designs with no royalties being the most likely to kill most of that market, in conjunction with ARM’s MCUs. They’re tiny, cheap, and might replace the MIPS or ARM chips in the portfolios of the main MCU vendors. More likely supplement them. This would especially be true if more vendors were putting the 32-bit MCUs on cutting-edge nodes to make the processor, ROM, and RAM as cheap as the 8-bitters’. We’re already seeing some of that. The final advantage of 8-16-bitters on that note is that they can be cheap on old process nodes that are also cheap to develop SoCs on, do analog better, and do RF well enough. As in, the 8-16-bit market uses that to create a huge variety of SoC-based solutions customized to each market’s needs, since the NRE isn’t as high as the 28-45nm nodes that 32-bit parts might need to target. They’ve been doing that a long time.

                                                                  Note to embedded or hardware people: feel free to correct anything I’m getting wrong. I’ve just been reading up on the industry a lot to understand it better for promoting open or secure hardware.

                                                                  1. 3

                                                                    nit: s/draught/drought

                                                                    1. 2

                                                                      Was just about to post the same thing. I don’t normally do typo corrections, but that one really confused me. :)

                                                                    2. 2

                                                                      Yup. Jack Ganssle is always a good read. I highly recommend that anyone interested in embedded systems subscribe to his Embedded Muse newsletter.

                                                                      Whether the Open Cores will beat ARM? Hmm. Arm has such a stranglehold on the industry that it’s hard to see it happen…. On the other hand, Arm has this vast pile of legacy cruft inside it now, so I don’t know what the longer term will be. (Don’t like your endianness? Toggle that bit and you have another. Want hardware-implemented Java bytecode? Well, there is something sort of like that available, …..)

                                                                      Compilers? It’s hard to beat gcc, and that is no accident. A couple of years ago Arm committed to make gcc as performant as their own compiler. Why? Because the larger software ecosystem around gcc sold more Arms.

                                                                      However, a huge chunk of embedded industry is cost sensitive.

                                                                      We will always be pushed to make things faster, cheaper, mechanically smaller, with longer battery life, ….. If you read some of what Ganssle has been writing about the ultra-low-power stuff, it’s crazy.

                                                                      Conversely, we’re also always being pushed for more functionality; everybody walks around with a smartphone in their pocket.

                                                                      The base expectation these days is smartphone-class size / UI / functionality / price / battery life ….. which is all incredibly hard to achieve if you aren’t shipping at least 100,000 units…

                                                                      So while universities crank out developers who understand machine vision, machine learning, and many other cutting-edge research areas, the question we might want to be asking is, “Where are we going to get the next generation of embedded software engineers?”

                                                                      Same place we always did. The older generation were a rag tag collection of h/w engineers, software guys, old time mainframers, whoever was always ready to learn more, and more, and more…

                                                                      1. 1

                                                                        This week’s Embedded Muse addresses this exact article. Jack seems to agree with my position that the article way overstates things. He says the field will be mixed between the kinds we have now and those very far from the hardware. He makes this point:

                                                                        “Digi-Key current lists 72,000 distinct part numbers for MCUs. 64,000 of those have under 1MB of program memory. 30,000 have 32KB of program memory or less. The demand for small amounts of intelligent electronics, programmed at a low level, is overwhelming. I don’t see that changing for a very long time, if ever.”

                                                                        The architecture, cost, etc might change. There will still be tiny MCU’s/CPU’s for those wanting lowest watts or cost. And they’ll need special types of engineers to program them. :)

                                                                        1. 1

                                                                          Thanks for the inside view. The other thing about Jack is he’s also a nice guy. He always responds to emails about embedded topics or the newsletter, too.