1. 3

    It looked interesting enough: the design appears to be basically Rust-assisted key tenets of MISRA plus isolation of tasks. So I cloned and built it. The binary and an STM32 board are now waiting for, uh, me to find that USB-C to USB-A dongle.

    1. 9

      Two remarks:

      • As the blog post points out, there are large companies with massive codebases in scripting-league languages (Python, PHP, Ruby, etc.; Javascript!) out there. But a surprising number of these companies are investing millions in trying to (1) implement some static typing on top of the language for better maintainability (performance is not cited as the concern usually), or (2) faster implementation techniques than the mainstream language implementation. (Companies seem to have some success doing (1), more than (2); because optimizing a dynamic language to go faster is surprisingly difficult.) This could be taken positively as in the blog post, “there are tools to make your Python codebase more maintainable / faster anyway”, but also negatively: “companies following this advice are now stuck in a very expensive hole they dug for themselves”.

      • “Computation is cheap, developers are expensive” is an argument, but I get uneasy thinking about the environmental costs of order-of-magnitude-slower-than-necessary programs running in datacenters right now. (I’m not suggesting that they are all Python programs; I’m sure there are tons of slow-as-hell Java or C++ or whatever programs running in the cloud.) I wish that we would collectively settle on tools that are productive and energy-efficient, and try to think a bit more about the environment than “not at all” as in the linked post.

      1. 5

         We once did an estimate: if one of our embedded products consumed 3 W more per unit, we’d have burned nearly 500 MWh of extra energy over the deployed units’ then-expected lifetime.

        Inefficient code in prod is irresponsible, and unlike crypto mining it is not shamed enough. You might think your slow script that just scratches your itch is no big deal, but before you know it it’s cloned and running on ten thousand instances.
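        That back-of-the-envelope math is easy to sanity-check. A sketch, where the fleet size and lifetime are my own assumed numbers (the comment gives only the 3 W delta and the ~500 MWh total):

        ```c
        #include <stdio.h>

        int main(void) {
            /* Assumed numbers: 4000 deployed units, 5-year lifetime.
               Only the 3 W per-unit delta comes from the comment above. */
            double extra_watts = 3.0;
            double units = 4000.0;
            double lifetime_hours = 5.0 * 8760.0; /* hours in 5 years */

            double extra_mwh = extra_watts * units * lifetime_hours / 1e6;
            printf("Extra energy: %.1f MWh\n", extra_mwh); /* ~525.6 MWh */
            return 0;
        }
        ```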

        1. 5

          “Computation is cheap, developers are expensive” is an argument, but I get uneasy thinking about the environmental costs of order-of-magnitude-slower-than-necessary programs running in datacenters right now.

          Not to mention the poor users who have to wait ten times longer for a command to finish. Their time is also expensive, and they usually outnumber the developers.

          1. 2

            “companies following this advice are now stuck in a very expensive hole they dug for themselves”.

            One could argue that this is “a good problem to have.” I mean, it didn’t become a problem for Facebook or Dropbox or Stripe or Shopify until they were already wildly successful, right?

            1. 4

              There is a strong selection bias here as we hear much less about which technical issues plagued less-successful organisations. It would be dangerous to consider any choice of a large company to be a good choice based on similar reasoning.

          1. 2

            Pushing out my first ever watchOS app.

            1. 2

              Cracked open Xcode to write a small app for isometric hold training. Useful in precision sports like target pistol/rifle and possibly archery. It’s quite possible something like that already exists, but it is almost easier to write your own than to sift through the App Store!

              Swift has changed quite a bit since the last time I touched it: a quality I find unsettling in a programming language. Nonetheless the iOS version is almost finished now, and a watchOS version is perhaps possible if I can convince my wife to let me requisition her device.

              1. 13

                My daughter was born last Friday. I was fortunate enough to arrange three weeks of parental leave where I work, and that’s naturally what I’m at this weekend too.

                Interestingly enough, I found that dabbling in my pet projects between the chores and the little one’s feedings is quite doable. It feels like the permanent tiredness and slight sleep deprivation somehow make it easier to focus? Or perhaps it’s just less coffee throughout the day.

                1. 5

                  Congrats

                  1. 2

                    Thanks!

                  2. 4

                    Congratulations :)

                    1. 2

                      Thank you!

                    2. 2

                      Congratulations! Glad you’re able to find time for tech as well.

                      1. 2

                        Thanks! I was a bit afraid that the rest of my personal life would be completely derailed, but it’s going well.

                      2. 2

                        Congrats and best of luck. 🎉 Mine are both teenagers but I still remember those early days.

                        1. 1

                          Thanks! Our older is 17 now, so our recollections are quite dim :)

                      1. 5

                        It’s a good list! One thing I would add: speed vs size. Speed matters a lot less in an SoC because adding an overpowered core consumes less real estate than adding more memory. You have to unlearn things like inlining code (consuming more flash) or optimizing algorithms (trading memory for speed).

                        As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier. Kinda like painting figurines or maintaining a garden, there’s joy in the act of writing and debugging the code, which I had lost in the world of ruby/java. (It probably helps that my earliest coding was on an Apple II, which is way more constrained than any modern embedded system.)
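                        The unlearning goes right down to compiler hints. A minimal sketch of nudging GCC/Clang to keep one shared out-of-line copy of a helper (flash-friendly) instead of letting it inline into every caller (speed-friendly); the checksum routine is just a made-up example:

                        ```c
                        #include <stdint.h>
                        #include <stdio.h>

                        /* noinline keeps a single copy of this helper in flash;
                           dropping the attribute (or compiling with -O3 instead
                           of -Os) trades size back for speed. */
                        __attribute__((noinline))
                        static uint32_t checksum(const uint8_t *p, uint32_t n) {
                            uint32_t sum = 0;
                            while (n--)
                                sum += *p++;
                            return sum;
                        }

                        int main(void) {
                            const uint8_t buf[] = {1, 2, 3, 4};
                            /* Every call site shares the one out-of-line copy. */
                            printf("%u\n", (unsigned)checksum(buf, 4)); /* prints 10 */
                            return 0;
                        }
                        ```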

                        1. 3

                          I would disagree that inlining or unrolling makes a difference on any system where you have the luxury of multiple cores. The memory is in the multiple megabytes, and what’s eating it up is generally not executable code. On something like an AVR, though? Perhaps.

                          As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier.

                          Get into distributed embedded systems for the best of both worlds! :)

                          1. 1

                            Yeah, on most of these SoCs, we’re not even talking about one megabyte of RAM, and you’re lucky to get that in flash (which must be able to hold at least 2 copies of the app) either. It really makes your priorities shift! :)

                            1. 1

                              Oh, you’re talking about SiPs. Alright then. On a usual SoC + DRAM spin it simply makes no sense cost-wise to go multicore and memory-starved.

                        1. 6

                          If you enjoy this kind of interview, I would wholeheartedly recommend Peter Seibel’s Coders at Work. Some really in-depth conversations.

                          1. 2

                            “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

                            1. 2

                              Looking at the “Motivating Example” section and its conclusion, it’s pleasing to see that there has indeed been some progress since 1979.

                              1. 1

                                Shooting a magnum pistol match tomorrow.

                                Also going to have a check on my Frankenspectrum that hasn’t been powered on for over 20 years.

                                1. 2

                                  We’re hit again by supply chain issues, so banging my head on the desk I suppose.

                                  1. 4

                                    There’s so much wrong with this article I don’t know where to start.

                                    “lisp-1 vs lisp-2”? One of those things lispers will forever make ado about.

                                    I guess this depends on who you talk to–on the whole, for lispers, the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans. Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrassed about elisp being a lisp-2. If you look at new lisps that have been created this century, the only lisp-2 you’ll find is LFE.

                                    Not a Important Language Issue […] For another example, consider today’s PHP language. Linguistically, it is one of the most badly designed language, with many inconsistencies, WITH NO NAMESPACE MECHANISM, yet, it is so widely used that it is in fact one of the top 5 most used languages.

                                    You can use this same argument to justify classifying literally any language issue as unimportant. This argument is so full of holes I’m honestly kind of annoyed at myself at wasting time refuting it.

                                    Now, as i mentioned before, this (single/multi)-value-space issue, with respect to human animal’s computing activities, or with respect to the set of computer design decisions, is one of the trivial, having almost no practical impact.

                                    Anyone who has tried to use higher-order functions in emacs lisp will tell you this is nonsense. Having one namespace for “real data” and another namespace for “functions” means that any time you try to use a function as data you’re forced to deal with this mismatch that has no reason to exist.

                                    I could go on but I won’t because if I were to find all the mistakes in this article I’d be here all day.
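                                    For readers who haven’t hit the mismatch themselves, a small Emacs Lisp illustration (`double` is a made-up example function):

                                    ```elisp
                                    ;; In a Lisp-2 like Emacs Lisp, functions and "real data" live in
                                    ;; separate namespaces, so using a function as data needs #' and
                                    ;; calling a function held in a variable needs funcall:
                                    (defun double (n) (* 2 n))

                                    (mapcar #'double '(1 2 3))   ;; => (2 4 6) -- #' reaches the function cell
                                    (let ((f #'double))
                                      (funcall f 21))            ;; => 42 -- f is a variable, so funcall

                                    ;; In a Lisp-1 like Scheme, (map double '(1 2 3)) and (f 21) suffice.
                                    ```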

                                    1. 9

                                      I guess this depends on who you talk to–on the whole for lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

                                      This only is doing a lot of work here, given that CL is where the majority of practice happens in the (admittedly tiny) Lisp world.

                                      1. 4

                                        I know anecdote is not data but I know far more people who work at Clojure shops than I do Common Lisp shops. How would we quantify “majority of practice?”

                                        1. 2

                                          It’s more to do with whether you qualify Clojure as a dialect of Java or a dialect of Lisp.

                                          Clojure proclaims itself a dialect of Lisp while maintaining largely Java semantics.

                                          1. 2

                                            CL programmers are so predictable with their tedious purity tests. I wish they’d move on past their grudges.

                                            1. 3

                                              Dude you literally wrote a purity rant upthread.

                                              1. 2

                                                Arguing about technical merits is different from regurgitating the same tired old textbook No True Scotsman.

                                                1. 3

                                                  Look, (like everyone else) I wrote a couple of Scheme interpreters. I worked on porting a JVM when Sun was still around. I wrote a JVM-targeting “Lisp-like” language compiler and was even paid for doing it. I look at Clojure and immediately see all the same warts and know precisely why they are unavoidable. I realize some people look at these things and see Lisp lineage, but I can’t help seeing some sort of Kotlin with parens.

                                                  And it’s not just me, really: half of the people who sat on RnRS were also on X3J13, and apparently no one had a split personality. So no need to be hostile about the technical preferences of others. When you talk to your peers, it helps to build a more complicated theory of mind than “they are with me or they are wrong/malicious”.

                                                  1. 2

                                                    Sure, you can have whatever preferences you want. But if you go around unilaterally redefining terms like “lisp” and expecting everyone to be OK with it, well, that’s not going to work out so well.

                                                    1. 2

                                                      If you hang around long enough you hear people calling just about anything “Lisp-like”: Forth, Python, Javascript, Smalltalk, you name it. Clojure is a rather major departure from lisps in both syntax and semantics, so this is not a super unusual point.

                                      2. 6

                                        on the whole for lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

                                        That folks who use a Lisp-1 prefer a Lisp-1 (to the extent that non-Common Lisp, non-Emacs Lisp Lisp-like languages such as Scheme or Clojure can fairly be termed ‘Lisps’ in the first place) is hardly news, though, is it? ‘On the whole, for pet owners the only people who don’t consider leashes to be a mistake are the hardcore dog owners.’

                                        Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrassed about elisp being a lisp-2.

                                        Is that actually true? If so, what skill level are these users?

                                        For my own part, my biggest problem with Emacs is that it was not written in Common Lisp. And I think that Lisp-N (because Common Lisp has more than just two namespaces, and users can easily add more) is, indeed, preferable to Lisp-1.

                                        1. 4

                                          Is that actually true? If so, what skill level are these users?

                                          This is based on my experience of participating in the #emacs channel since 2005 or so. The only exceptions have been people coming to elisp from CL. This has held true across all skill levels I’ve seen including maintainers of popular, widely-used packages.

                                        2. 4

                                          I dunno. I think the article is a bit awkward but I think the author is absolutely right: in practice, to the language user, it doesn’t really make a difference.

                                          I am a full-time user of a lisp-1. When I use it, I appreciate the lack of sharp-quotes and the like when it’s time to use higher-order functions or call variables as functions. The same language has non-hygienic macros, which Dick Gabriel rather famously claimed more or less require a separate function namespace, and I have almost never found my macro usage to be hampered.

                                          At the same time, I was for three years a professional user of Elixir, a language with both syntactic macros and separate namespaces. I found it mildly convenient that I could declare a variable without worrying about shadowing a function, and never found the syntax for function references or for invoking variables as funs to be particularly burdensome at all.

                                          To the user, it really doesn’t have to matter one way or the other.

                                        1. 2

                                          Not really, even though I do program a lot.

                                          1. 21

                                            Old Soviet systems mostly used an encoding named KOI8-R. … That encoding is, to put it politely, mildly insane: it was designed so that stripping the 8th bit from it leaves you with a somewhat readable ASCII transliteration of the Russian alphabet, so Russian letters don’t come in their usual order.

                                            That is both horrifying, and fiendishly clever. 🤯
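                                            A quick demo of the trick: lowercase KOI8-R Cyrillic lives in the 0xC0–0xDF range, arranged so that clearing the 8th bit lands each letter on a phonetically similar ASCII letter.

                                            ```c
                                            #include <stdio.h>

                                            int main(void) {
                                                /* "привет" (hello) in KOI8-R bytes. */
                                                const unsigned char koi8[] = {0xD0, 0xD2, 0xC9, 0xD7, 0xC5, 0xD4};

                                                /* Strip the 8th bit: each Cyrillic letter falls onto its
                                                   Latin transliteration (uppercase, since the lowercase
                                                   KOI8 rows mirror the uppercase ASCII rows). */
                                                for (unsigned i = 0; i < sizeof koi8; i++)
                                                    putchar(koi8[i] & 0x7F);
                                                putchar('\n'); /* prints PRIVET */
                                                return 0;
                                            }
                                            ```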

                                            1. 10

                                              The OG though was KOI-7, a 7-bit encoding which mapped onto ASCII in a similar fashion. Its huge advantage was legibility across mixed Latin/Cyrillic devices on a teletype network, which was a big deal for the Soviet merchant fleet at the time.

                                              1. 1

                                                This sounds clever, but all I ever actually saw was an unreadable mess of characters misinterpreted as Latin-1 (CP-1252).

                                              1. 18

                                                I’m on staycation, so continuing what I was doing during the week: writing letters to political prisoners in my home country. There are 586 at the last count.

                                                Never been much of an essayist. It takes me most of the day (while mostly doing other things) to pick one name from the list, formulate what I want to tell them, and actually write and edit the letter. One complication is that the letters are censored. It is not advisable to write directly about the prisoner’s case, and at the same time an impersonal text would come across as shallow. If you have to convey certain strong feelings, you have to get creative with indirect references. This is simple but rather emotionally draining work, very much because you get to know their circumstances but are not really able to help other than with a word of support.

                                                1. 2

                                                  Oh wow, that sounds like a challenge. What do you hope to achieve? Giving them hope and support?

                                                  1. 7

                                                    Yes, it’s not possible to achieve much beyond letting them know their sacrifice is felt and appreciated. Also the conditions tend to be rough. Belarusian prisons are not particularly comfy to begin with, but political prisoners have to wear yellow patches and face additional hardships. There’s very little information coming in unfiltered, and together with the grim everyday routine it is almost a form of sensory deprivation. So letters from outside are very welcome.

                                                    The people imprisoned are from all walks of life, for the most part not politicians: workers, students (some as young as 16), teachers. As it happens, high-profile cases get most of people’s mail, but it is equally important for the others. Not that I’d make a dent here; right now the prisoner population’s growth outpaces my writing.

                                                    1. 2

                                                      Oh wow. That is tragic.

                                                1. 3

                                                  Have to agree with one of the criticisms: it is a trivial observation. But a well-chosen title can make that profound :)

                                                  1. 8

                                                    Relieved to see none of our devices are vulnerable, by virtue of having less than 256 MB of memory. There is such a thing as too much memory!

                                                    1. 4

                                                      Bill Gates was right!

                                                      1. 7

                                                        Gates himself has strenuously denied making the comment. In a newspaper column that he wrote in the mid-1990s, Gates responded to a student’s question about the quote: “I’ve said some stupid things and some wrong things, but not that. No one involved in computers would ever say that a certain amount of memory is enough for all time.” Later in the column, he added, “I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There’s never a citation; the quotation just floats like a rumor, repeated again and again.”

                                                        https://www.computerworld.com/article/2534312/the--640k--quote-won-t-go-away----but-did-gates-really-say-it-.html

                                                        1. 4

                                                          The first time I heard this quote it was ‘64 K ought to be enough for anyone’. It was in reference to the fact that Microsoft BASIC had some fixed limits that made it impossible for it to support more than 64 KiB of RAM (some types had to be 16 bits, irrespective of the architecture that you ported it to, and on the 8086 / 8088 this limit still held because the 16-bit assumption was so ingrained in the codebase that removing it would have been close to a full rewrite). Some years later, I heard the 640 K version, which didn’t really make sense to me because that was an Intel-imposed limitation, not a Microsoft-imposed one.

                                                          Bill Gates has always denied the 640 K version and I’ve always wondered if he’s so vocal about denying that he said ‘640 K ought to be enough for anyone’ to obscure the fact that he actually said ‘64 K ought to be enough for anyone’.

                                                          1. 4

                                                            Doesn’t matter; still funny.

                                                      1. 1

                                                        VLAs are interesting in a few more ways. For one, they are among the few (or even the only?) features to go from being part of the language in one revision to an optional annex in another (C11), hinting at some of the tug of war between stakeholders within the committee itself.

                                                        VLAs are allocated on stack - and this is the source of the most of the problems

                                                        Does this have to be the case? I have not seen any implementation to the contrary, but what are the blockers to having an implementation-defined control in the compiler for mapping the allocation/deallocation to functions of my choosing, yet retaining the syntactic convenience and the distinction from fixed-size automatics and malloc’d dynamic storage?

                                                        I can see cleanup across longjmps being an awkward edge case to cover; are there any others?

                                                        1. 2

                                                          Not quite the same, but related: _malloca; it allocates small sizes on the stack and large ones on the heap. Its memory needs to be manually freed, though, so it’s about performance, not correctness.

                                                          cleanup across longjmps being an awkward edge case to cover

                                                          RAII-style patterns are problematic in general in C because there’s no mechanism for stack unwinding. There’s a working (controversial) proposal to add stack unwinding; presumably that could be implemented to integrate with your heap VLAs, but there’s no rescuing longjmp.

                                                          Another fun wrinkle: tcc VLAs are incompatible with signal handlers.

                                                          1. 1

                                                            Using malloc for VLAs seems like an interesting idea. There are a lot of weird cases (such as goto and longjmp), but AFAIK compilers already generally handle those because it’s required by C++’s destructors and GNU C’s __attribute__((cleanup)).

                                                            I wonder if it would be correct in all situations to rewrite:

                                                            int array[x];
                                                            ...
                                                            something(sizeof(array));
                                                            

                                                            into:

                                                            size_t __vla_len_1 = x;
                                                            int *array __attribute__((cleanup(free))) = malloc(sizeof(*array) * __vla_len_1);
                                                            ...
                                                            something(sizeof(*array) * __vla_len_1);
                                                            

                                                            You’d probably need a few more rewrite rules, but I don’t immediately see any huge show-stopping issues.

                                                            EDIT: I checked, and it seems like neither C++ destructors nor __attribute__((cleanup)) works with longjmp.

                                                            1. 6

                                                              I’ve written C macros in the past that use __attribute__((cleanup)) and a small fixed-size on-stack allocation and give me a pointer to either the stack allocation (if it’s big enough) or to a heap allocation. They expand to something like this:

                                                              static void clean_heap_buffer(void **buf)
                                                              {
                                                                free(*buf);
                                                                *buf = NULL;
                                                              }
                                                              
                                                              ...
                                                              
                                                              T stack_buf[16] = {0};
                                                              __attribute__((cleanup(clean_heap_buffer))) T *heap_buf = NULL;
                                                              T *buf = stack_buf;
                                                              if (size > 16)
                                                              {
                                                                heap_buf = calloc(size, sizeof(T));
                                                                buf = heap_buf;
                                                              }
                                                              

                                                              You can now just use buf and everything is fine. If the function is inlined and the compiler can prove that size <= 16 is always true then the malloc paths are optimised away. Note that you can’t use free as the argument to the cleanup attribute because the cleanup function takes a pointer to the on-stack value, so you need a dereference. Your version will pass the address of a stack variable to free.

                                                              Whether destructors / cleanup work with longjmp depends on the implementation. The simple implementation saves some registers in setjmp and just restores them in longjmp. The Itanium version records the stack and instruction pointer registers and then uses DWARF unwind info to unwind the stack. Both modes are supported by GCC and Clang on most architectures but because they’re ABI-incompatible the former is used by default.

                                                              At some point after writing this kind of macro, you realise that you should just use a language that you don’t have to fight to do this kind of thing. C++ makes this trivial. LLVM has a SmallVector class that does this in a much cleaner way (including allowing dynamic resizing). It looks to consumers like a std::vector but contains a small (specified by template parameter) in-object buffer. If the number of elements you store fits into that buffer, it never allocates on the heap. If you push another element, it does a heap allocation and copies everything.

                                                            2. 1

                                                              How useful is that when you don’t have the standard library linked?

                                                              1. 1

                                                                Not very, unless you write your own malloc/free I suppose. But in which situations do you have so much stack space available that you can just dynamically allocate it safely, but don’t have a libc?

                                                                1. 1

                                                                  This is common in bare metal applications and in OS implementation, a major use case for C where other languages aren’t making a dent. The systems are not necessarily tiny at all, just that you don’t have the standard library.

                                                                  Stack allocation is still a problem there, of course, due to runtime non-determinism. My point is that making a core language feature depend on the standard library is not a great idea practically, and a bit of circular thinking.

                                                            1. 16

                                                              For me it’s mu4e in Emacs. The speed of mailutils, convenient keybindings and sane composition defaults you don’t have to fight to submit patches.

                                                              1. 3

                                                                Another vote here for mu4e. It helps me focus on getting through my inbox to have it outside of my browser and be able to use even more keyboard shortcuts than the gmail interface.

                                                                1. 3

                                                                  I also use mu4e. I haven’t found another email client that offers the same speed of execution and of user input. It connects with my password manager with a single line of configuration: (auth-source-pass-enable) which is builtin to Emacs. I also have the ability to define custom bookmarks to, with a single keystroke, show me all my inboxes, just my flagged emails, etc.

                                                                  The big feature for me though is contexts. For each email account I have, I define a :match-func function. I actually used a macro to create the functions to match on the account’s given Maildir. A large part of the mu4e workflow is marking messages to delete/flag/move/etc and then executing those marks (similar to dired). When I realized the contexts automatically reassign for each message you mark in “real time”, I was pleasantly surprised. This means, for example, if there are a bunch of emails in a row from potentially different accounts, I can just spam the d key to mark them for deletion, then x to actually delete, and they will all go to their respective trash folders, not just the trash folder of the context you selected when you launched mu4e.
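                                                                  The contexts setup can be sketched like this (account names, addresses, and Maildir paths here are made-up placeholders):

                                                                  ```elisp
                                                                  ;; A minimal sketch of two mu4e contexts; the :match-func
                                                                  ;; dispatches per message on its Maildir prefix.
                                                                  (setq mu4e-contexts
                                                                        (list
                                                                         (make-mu4e-context
                                                                          :name "personal"
                                                                          :match-func (lambda (msg)
                                                                                        (when msg
                                                                                          (string-prefix-p "/personal"
                                                                                                           (mu4e-message-field msg :maildir))))
                                                                          :vars '((user-mail-address . "me@example.org")
                                                                                  (mu4e-trash-folder . "/personal/Trash")))
                                                                         (make-mu4e-context
                                                                          :name "work"
                                                                          :match-func (lambda (msg)
                                                                                        (when msg
                                                                                          (string-prefix-p "/work"
                                                                                                           (mu4e-message-field msg :maildir))))
                                                                          :vars '((user-mail-address . "me@example.com")
                                                                                  (mu4e-trash-folder . "/work/Trash")))))
                                                                  ```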

                                                                  1. 2

                                                                    Yet another vote for mu4e. Been using it for a few years and it’s great. A bonus is that it integrates especially well with orgmode; e.g. it’s trivial to link to emails from within orgmode TODOs, which is exceptionally helpful when a lot of TODOs come in via email :)

                                                                    1. 1

                                                                      I used to use mu4e, but I could never get the moving parts of mu, mbsync and Office365 to play nice together

                                                                      1. 1

                                                                        Same! Would love to hear from anyone with an Emacs-Office365 workflow they’re happy with to be honest.

                                                                        1. 1

                                                                          I’m using Gnus/nnimap now, which works reliably, if sometimes a wee bit slow due to O365 throttling

                                                                        2. 1

                                                                          I use it primarily with office365/exchange via offlineimap.

                                                                      1. 3

                                                                        People interested in this may also enjoy Dan Gelbart’s short presentation on flexures and their uses and construction in his (albeit rather smart) home workshop:

                                                                        https://www.youtube.com/watch?v=PaypcVFPs48

                                                                        This is part of a larger series on making prototypes, all of which are quite interesting.

                                                                        1. 1

                                                                          Oh? I thought Dan runs a lab (and a company making sintering machines).

                                                                          1. 1

                                                                            All his vids in that series are in his very well provisioned home lab. He’s taken expensive jig boring machines in lieu of performance bonuses in the past. He’s living the dream!