1. 13

    I found this to be a lovely 30-minute read on C’s motivation, history, design, and surprising success. I marked down over 50 highlights in Instapaper.

    If you haven’t written C in a while, you should give it a try once more! Some tips: use a modern build of clang for great compiler error messages; use vim (or emacs/vscode) to be reminded C “just works” in editors; use a simple Makefile for build/test ergonomics.

    In writing loops and laying out data in contiguous arrays and structures, remind yourself that C is “just” functions, data atoms, pointers, arrays, structs, and control flow (plus the preprocessor!)

    Marvel at the timeless utility of printf, a five-decade-old debugging Swiss army knife. Remember that to use it, you need to #include <stdio.h>. As Ritchie laments here, C regrettably did not include support for namespaces and modules beyond a global namespace for functions and a crude textual include system.
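    Here’s a minimal sketch putting those pieces together (the struct and the values are just illustrative): a struct, a contiguous array, a loop over it, and printf for inspection.

     #include <stdio.h>

     /* A "data atom" laid out contiguously in an array. */
     struct point { int x, y; };

     int main(void)
     {
         struct point pts[3] = {{0, 0}, {1, 2}, {3, 4}};

         /* Walk the array by index; a pointer walk works just as well. */
         for (int i = 0; i < 3; i++)
             printf("pts[%d] = (%d, %d)\n", i, pts[i].x, pts[i].y);

         return 0;
     }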

    Refresh your memory on the common header files that are part of the standard and needed for doing common ops with strings, I/O, or dynamic heap allocation. You can get a nice refresher on those in your browser here:

    https://man.cs50.io/

    Overall, you’ll probably regain some appreciation for the essence of programming, which C represents not due to an intricate programming language design with an extensive library, but instead due to a minimalist language design not too far from assembly, in which you simply must build your own library, or rely directly on OS system calls and facilities, to do anything useful. There is something to be said for languages so small they can truly fit in your head, especially when they are not just small, but also fast, powerful, stable, ubiquitous, and, perhaps, as timeless as anything in computing can be.

    1. 5

      I don’t think that a lack of namespaces is really something to lament. _ is prettier than ::. For all the handwringing you see about them, I’ve literally never seen a symbol clash in C ever.

      1. 5

        I love C and it continues to be my first language of choice for many tasks, but namespaces are the first thing I’d add to the language if I could. Once programs get beyond a certain size, you really need a setting between “visible to one file” and “visible EVERYWHERE”. (You can get some of the same effect by breaking your code up into libraries, but even then the visibility controls are either external to the language or non-portable compiler extensions!)

        And for the record, I’d prefer an overload of the dot operator for namespaces. Or maybe a single-colon operator – some languages had that before the double-colon became ubiquitous.

        1. 2

          I tend to agree that this isn’t a huge issue in practice, especially since so very many large and well-organized C programs have been written (e.g. CPython, redis, nginx, etc.), and the conventions different teams use aren’t too far apart from one another. As you noted, they generally just group related functions together into files and name them using a common function namespace prefix, like ns_. But, clashes are possible, and it has meant C is used much more as a starting point for bespoke and self-contained programs (again, CPython, redis, and nginx are great examples), rather than as a programming environment to wire together many different libraries, as is common in Python, or even Go.

          As dmr describes it in the OP, this is just a “smaller infelicity”.

          Many smaller infelicities exist in the language and its description besides those discussed above, of course. There are also general criticisms to be lodged that transcend detailed points. Chief among these is that the language and its generally-expected environment provide little help for writing very large systems. The naming structure provides only two main levels, ‘external’ (visible everywhere) and ‘internal’ (within a single procedure). An intermediate level of visibility (within a single file of data and procedures) is weakly tied to the language definition. Thus, there is little direct support for modularization, and project designers are forced to create their own conventions.
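          To make that weakly-supported intermediate level concrete, here’s a minimal sketch of the usual convention (the file name and the foo_ prefix are just illustrative): static for file-scope visibility, a prefix on everything external.

           /* foo.c -- a hypothetical "module" using the prefix convention */

           /* Internal helper: 'static' limits visibility to this file. */
           static int foo_validate(int x)
           {
               return x >= 0;
           }

           /* External entry point: visible everywhere, so it carries the
              foo_ prefix to avoid clashing with other libraries. */
           int foo_process(int x)
           {
               return foo_validate(x) ? x * 2 : -1;
           }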

          1. 2

            I don’t really think that namespaces are the reason people don’t use C for gluing together lots of other C programs and libraries. I think people don’t do that in C because things like Python and Bash are a million times more suitable for it in a million different ways, only one of which is namespaces.

            Large systems don’t need to all be linked together with one big ld call. Large systems should be made up of small systems interacting over standardised IPC mechanisms, each of which of course have their own independent namespaces.

            There’s also the convention we see of lots of tiny files, which is probably not actually necessary today. It made more sense in the days of centralised version control, when global file locking in very old version control systems made merging changes from multiple people difficult or impossible, and one person working on a file meant nobody else could. But today, most modules should probably be one file. Why not?

            OpenBSD drivers are usually a single .c file, for example, and they recommend that people porting drivers from other BSDs merge all the files for that driver into one. I actually find this easier to understand: it’s easier for me to navigate one file than a load of files.

        2. 4

          If you haven’t written C in a while, you should give it a try once more! Some tips: use a modern build of clang for great compiler error messages; use vim (or emacs/vscode) to be reminded C “just works” in editors; use a simple Makefile for build/test ergonomics.

          I am going through the Writing An Interpreter In Go book but in C (which is totally new to me, coming from a JavaScript background) and it’s been the most fun I’ve had in years. I’m actually starting to get quite fond of the language and the tooling around it (like gdb and valgrind).

          1. 2

            I recommend you take a look at http://craftinginterpreters.com as well, if you want something similar for C. The book is in two parts: the first part a very simple AST-walking interpreter written in Java, the second part a more complex interpreter that compiles the language to bytecode and has a VM, closures, GC, and other more complicated features, written in C. If you’ve already read Writing An Interpreter In Go you can probably skip the Java part and just go straight to the C part.

            1. 3

              Thanks, I will (after I’m done with this). I actually really liked that the book is in Go but my implementation is in C, as it made it a bit more exciting for me to think about how I would structure things in C and see what the tradeoffs are versus doing it in Go. Otherwise I’d be tempted to skip entire chapters and just re-use the author’s code, which obviously doesn’t help if my goal is to learn how it’s actually done.

          2. 4

            so small they can truly fit in your head

            Very true. One thing I’ve noticed, going to C from Rust and especially C++, is how little time I spend now looking at language docs, fighting the language or compiler itself, or looking at code and wondering, “WTF does this syntax actually mean?”

            There’s no perfect language though. I do pine sometimes for some of the fancier language features, particularly closures and things which allow you to express concepts directly in code, like for(auto i : container_type) {...} or .map(|x| { ...}).

            1. 1

              One thing I’ve noticed, going to C from Rust and especially C++, is how little time I spend now looking at language docs, fighting the language or compiler itself, or looking at code and wondering, “WTF does this syntax actually mean?”

              It’s also really nice being able to type:

              man $SOME_FUNCTION
              

              to get the documentation for any function in the standard library (and many others not in the standard library). I do a lot of my development on the train (without access to the internet) and man pages are my best friend.


              On the topic of “wtf does this syntax actually mean” I do think C has some pain points. East const vs west const is still a point of confusion for many, and C’s function pointer syntax will melt your brain if you stare at it too long.

              At one point I wrote a C backend for a compiler I was working on and needed to really understand how declarations worked. I found that this article does a really good job explaining some of the syntax insanity.
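              The classic brain-melter is the standard library’s own declaration of signal, which both takes and returns a function pointer; a typedef tames it considerably (signal_alt is just an illustrative name):

               /* signal takes an int and a pointer to a function taking an int
                  and returning void; it returns that same kind of function
                  pointer. Read it from the inside out. */
               void (*signal(int sig, void (*func)(int)))(int);

               /* The same shape, made readable with a typedef: */
               typedef void (*handler_t)(int);
               handler_t signal_alt(int sig, handler_t func);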

            2. 4

              If anyone is looking to give modern C a try I would recommend reading How to C. It’s a short article that bridges the gap between C from K&R Second Edition and C in $CURRENT_YEAR. The article doesn’t cover the more nuanced details of “good” C programming, but I think that K&R + How to C is the best option for people who are learning the C language for the first time.

              1. 2

                Awesome recommendation! As someone who is picking up C again for fun and some tasks after a decade-long hiatus (spent focused on other higher-level languages), this is super useful for me. I have been re-reading K&R 2nd Ed and looking for something akin to what you shared in “How to C”.

                I also found these two StackOverflow answers helpful. One, on the various C standards:

                https://stackoverflow.com/questions/17206568/what-is-the-difference-between-c-c99-ansi-c-and-gnu-c/17209532#17209532

                The other, on a (modern) set of useful reference books:

                https://stackoverflow.com/questions/562303/the-definitive-c-book-guide-and-list/562377#562377

            1. 3

              Awesome!

              Reminds me of something I did a while back:

               #include <stddef.h>
               #include <stdlib.h>

               typedef struct REF REF;
               struct REF
               {
                  int rc;                 /* reference count */
                  void (*free)(void *p);  /* destructor, called with the whole REF block */
                  char p[];               /* payload (flexible array member) */
               };

               /* Allocate n payload bytes behind a hidden REF header and
                  return a pointer to the payload, not the header. */
               void *
               make(size_t n, void (*freefunc)(void *p))
               {
                  REF *r = calloc(1, sizeof(REF) + n);
                  if (!r)
                     return NULL;
                  r->rc = 1;
                  r->free = freefunc ? freefunc : free;
                  return r->p;
               }

               void
               retain(void *v)
               {
                  /* Step back from the payload to the hidden header. */
                  REF *r = (REF *)(((char *)v) - offsetof(REF, p));
                  r->rc++;
               }

               void
               release(void *v)
               {
                  REF *r = (REF *)(((char *)v) - offsetof(REF, p));
                  if (--r->rc <= 0)
                     r->free(r);  /* default destructor is plain free() */
               }
              

              Basically make returns a “normal” pointer and we abuse offsetof to access a phantom containing structure that holds the reference count.
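              For anyone who wants to try it, a minimal usage sketch building on the code above (the point struct is just illustrative):

               typedef struct { double x, y; } point;

               void example(void)
               {
                  point *p = make(sizeof(point), NULL);  /* rc == 1 */
                  if (!p)
                     return;
                  retain(p);    /* rc == 2 */
                  release(p);   /* rc == 1 */
                  release(p);   /* rc == 0: the hidden REF block is freed */
               }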

              1. 2

                I think sds does something similar for C strings. It’s a pretty neat trick!

              1. 10

                Sadly, this feels like C++ will stay bound to the part of the industry that just wants to live on legacy forever. So we’ll probably need to sacrifice C++ to them and use something new if we want a modern C++.

                1. 7

                  There are only two kinds of languages: the ones people complain about and the ones nobody uses.

                  -Bjarne Stroustrup

                  The commitment to backwards compatibility is one of the major reasons why C++ has gained such high adoption in industry. C++ is certainly not my favorite language, but being able to compile and run code from two or three decades ago without modification is massively important for infrastructure development, and few languages have demonstrated the same compatibility guarantees seen in C++.

                  1. 4

                    That Bjarne quote is annoying. It is obviously true (almost tautological), but it completely sidesteps whether the complaints are valid or not.

                    Backwards compatibility with problems of the past was already a hard constraint for C++98, and now there’s 20 more years of C++ additions to be backwards-compatible with.

                  2. 6

                    d and rust are both solid options for a modern c++ if you don’t need the backward compatibility. as others have noted, legacy support is one of the major selling points of c++; there is no reason for them to give that up.

                    1. 4

                      Explain please. Why do you think that? I only see new tools to use but my projects are my own.

                      1. 6

                        It’s about them not breaking the API and instead just deprecating std::regex etc. They’re not breaking it because apparently the industry will fork the language otherwise. So we’re stuck with some bad decisions more or less forever.

                        1. 2

                          because apparently the industry will fork it otherwise

                          Don’t they already fork it? Almost every C++ project seems to have some reimplementation of part of the standard library.

                          1. 1

                            There’s a really big difference between forking the standard (whatwg-style) and avoiding a few of the features of the standard library in favor of your own alternative implementations.

                            1. 2

                              I very much doubt you’d see an industry fork. The companies with sufficient interest in C++ and the massive resources required to develop compilers are probably the ones pushing C++ forward.

                              What you would be more likely to see is some companies that just stop upgrading and stick with the latest release which doesn’t break compatibility.

                              If you did see any non-standard developments to said release, I expect they would be minor and not widely distributed. Those who are resistant to change are unlikely to be making big changes, and until relatively recently C++ has had very little in the way of a community that might coordinate a fork.

                      2. 4

                        Legacy code is code that’s worth keeping running. A significant part of C++’s value is in compatibility with that code.

                        Causing an incompatibility that would be at the kind of distance from C++ that Python 3 was from Python 2 just isn’t worth it. If compatibility constraints are loosened more, Rust already exists.

                        1. 3

                          Legacy code is code that’s worth keeping running.

                          Sure. The question is whether we have to keep punishing newly written code and new programmers (who weren’t even alive when C++’s poor decisions were made) with it.

                          1. 2

                            A language called “C++” but incompatible with the C++ that existing code bases are written in wouldn’t solve problems for new programmers working on those code bases. That is, you can’t change a language spec and fundamentally alter design decisions that existing code already was built on.

                            Constraints on newly-written code depend on how the new code intermingles with old code. New code interleaved tightly into existing C++ code is constrained by existing C++. Code that doesn’t interact with C++ at all, or interacts with it through a sufficiently identifiable interface, doesn’t have to be in a language called “C++”.

                        2. 3

                          Agreed. I believe that this kind of “modern C++” is Rust; there just has to be a way to keep C++ experts and their design mentality away from core development. Otherwise Rust will end up like C++.

                          1. 3

                            I’d disagree with a characterization of Rust as a modernized C++. I believe there are things in Rust that the people building C++ would love to have (epochs, for example, are a hot topic in that thread), but I don’t think it’s the language they would build if they could get out the chisel and compatibility-break away not only the ABI but maybe even syntactic decisions and more. Despite the lack of commitment so far to even measly ABI breaks, some of what’s seemingly in the pipeline would really transform a lot of the day-to-day of working with C++, so maybe all you’d need to end up with a “modern C++” is sort of a, uh, “C+”. My personal choice for trimming to create a C+? Kill preprocessor directives with fire!

                            1. 1

                              I could have said more precisely:

                              Rust is what C++ developers need. It’s not necessarily what they want.

                            2. 1

                              …there just has to be a way to keep C++ experts and their design mentality away from core development. Otherwise Rust will end up like C++.

                              This comment makes me a bit sad. I understand the point that @soc is making, but I don’t think that the Rust community should ever be built on a foundation of keeping people out. The C++ community certainly struggles with a culture of complexity, but there is a lot that the C++ community can bring to the Rust community and vice versa.

                              1. 2

                                That and a lot of Rust’s core team are C++ experts already.

                                1. 1

                                  Every language is built around a set of values that result in a self-selection process for potential adopters:

                                  • The language has bad documentation? Keeps out people who think documentation is important. (See Scala.)
                                  • The language keeps adding features? Keeps out people who think that adding more features does not improve the language. (See C++, C#, JavaScript, Typescript, Swift, Rust, …)
                                  • Etc. etc.

                                  For instance, point number 2 – I have decided that Rust 1.13 is roughly the max language size I’m willing to deal with when writing libraries.

                                  I have subsequently skipped all newer Rust versions and the features that were added in those versions. I can’t really “un-adopt” Rust, but I think I’m pretty far removed from usage that Rust devs would consider “mainstream”.

                            1. 3

                              For a commercial project I would personally choose:

                              • Embedded => C (w/ GCC)
                              • High Performance Computing => C++14 or C++17 (w/ GCC or Clang)
                              • Web:
                                • Front-end => TypeScript
                                • Back-end => TypeScript (w/ node) or Go

                              For a non-commercial project I would use language(s) that bring me joy:

                              • C99 (w/ Clang)
                              • Racket
                              • R5RS scheme
                              • Bash
                              1. 3

                                 Last week, I was able to take C++ code that I wrote 18 years ago and incorporate it into a new project. This is precisely why I consider C++ to be one of the greatest languages and Bjarne Stroustrup one of the best, if not the best, language designers in the world.

                                 It’s even better than C, because C++ is improving at a faster rate while making sure the code you wrote 18 years ago still works. People severely underestimate how hard and impressive this is.

                                This is why I trust C++ with my new code because I trust that it will work 20 years from now.

                                1. 4

                                  I totally agree with your comment; I just wanted to point out that the evolution of C++ is managed by a committee, not just Bjarne Stroustrup. Bjarne was the initial language designer, but we should make sure we acknowledge the hard work and effort of everyone involved in the evolution of C++ (or really any community-driven technology).

                                  :-)

                                  1. 2

                                    Yes definitely! I don’t want to diminish the herculean efforts of the committee. Just pointing out that Bjarne could have given up long ago or created other languages like so many language designers do. He is now just a member of the committee but I think it’s fair to say his opinions still carry a lot of weight. The committee embodies a lot of the values he has always advocated for.

                                    1. You pay for only what you use.
                                    2. Don’t break old code.
                                    3. Make things as simple as possible but not simpler.

                                    If you keep rubbing a stone, eventually it will shine.

                                  2. 2

                                    I too would trust C++ in this fashion. What I don’t trust is all the meat to write C++ code without all the CVEs.

                                    1. 1

                                      Curious, which OS and browser are you using?

                                      1. 2

                                         Those that I distrust the least. The paucity of my available options has no impact on the quality of code being written around the world; the causal chain goes in the other direction, if at all.

                                        1. 1

                                           Sure it does! Have you wondered WHY there are so few options? There are social and economic reasons. In other words, security requires investment that no government or corporation has an interest in making. They WANT leaky software.

                                          1. 1

                                            Yes, they do. And they also explicitly want everybody to throw it out and replace it constantly, which is the opposite of what we’re talking about.

                                            1. 1

                                              I guess, who is “they” in your response?

                                  1. 18

                                     In contrast, my old C projects from 5-8 years ago still compile and run, with one particular example only needing a small modification.

                                     My Go programs from ~7 years ago still compile and run, with one example needing a small non-language-related change.

                                    I don’t have any Rust programs that have been untouched for that long, but if you checkout 0.1.2 of ripgrep (released ~3.5 years ago), then it compiles just fine after running cargo update -p rustc-serialize.

                                    I can see how time safety could be an issue with some languages, but I think both Rust and Go are at a point where one can be fairly confident that they will be around ten years from now.

                                    In any case, this strikes me as a primitivist argument. If time safety is more important than memory safety, then that rules out every possible solution to memory safety that involves a new language.

                                    I mean, I don’t get why this isn’t an obvious case of risk profiles. At least, it would be better phrased that way than bombastically claiming time safety as some kind of trump card over some very important kinds of progress in programming languages.

                                    1. 9

                                      It should also be noted that (from what I understand) the Rust and Go toolchains provide a much cleaner path for upgrading code.

                                       In my personal projects I use some C libraries that were written before I was born. The libraries compile and run just fine, but if I wanted to port them to a modern (post-C99) dialect of C it would take a lot of work, and there would surely be some mistakes that slip past the compiler.

                                      Meanwhile Rust and Go have strong tooling built into the language infrastructure that will automate a majority of the work in porting to newer language specifications. In a way, software developed in these languages becomes more robust against the test of time, because the burden for upgrading a codebase does not fall solely on the programmer.

                                    1. 1

                                       I have a script git-wipu that does a git commit --amend without changing the commit message and then force pushes it. I use it all the time when I’m on my own work-in-progress branch and I want to “accumulate” a single commit in small chunks. It’s like my git equivalent of instinctively saving a file every ten seconds.
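                                       The whole thing can be a two-liner; a sketch of what such a script might look like (the actual git-wipu may differ):

                                        git commit --amend --no-edit   # reuse the previous commit message
                                        git push --force               # overwrite the remote WIP branch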

                                      I also use the following aliases:

                                      alias shelve='git stash'
                                      alias unshelve='git stash apply'
                                      

                                      because I can never remember git stash on the few occasions that I need it.

                                      1. 2

                                         I did something similar with lots of sq! {SHA} commits until I found out about git commit --fixup, which lets you do something similar, and then git rebase -i --autosquash it afterwards, which is a bit safer with repos others are working on - there’s some more detail on https://www.jvt.me/posts/2019/01/10/git-commit-fixup/ if that helps!

                                        (Originally posted at https://www.jvt.me/mf2/2020/02/s9zg3/ and hand syndicated)
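                                         Roughly, that workflow looks like this (<SHA> stands for the commit being fixed up):

                                          git commit --fixup=<SHA>            # records a fixup! commit for <SHA>
                                          git rebase -i --autosquash <SHA>~1  # reorders and squashes it back in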

                                        1. 2

                                          I didn’t even know that the --fixup & --autosquash flags existed.

                                          Thanks for the link; I’ll definitely look at incorporating these into my workflow!

                                          :-)

                                      1. 3

                                        I’m not skillful enough at CSS to just look at the code and tell, so, is it responsive?

                                        1. 4

                                          There is an accompanying post that uses it. It looked good and read well in all the testing I did.

                                          1. 2

                                            thanks!

                                        1. 2

                                          What browsers support rem units?

                                          What browsers have you tested this with?

                                          1. 5

                                             All modern browsers have supported rem units for a while now.

                                            1. 3

                                              I see a lot of red in this graph:

                                              https://caniuse.com/rem

                                              Global usage 97.95% + 0.08% = 98.03%

                                              That means nearly 2% of global users – 1 out of 50 – would see some kind of layout irregularities if using this style.

                                              1. 3

                                                While I disagree with your conclusion, I’d like to [meta] call this out as a high quality comment. You’ve done the work to support your argument.

                                                Of the affected 2% of users, nearly all are using IE 8 in 2020. They are very well practiced at using websites with layout irregularities.

                                                1. 1

                                                  What is the threshold at which we can start using a feature? Do these layout irregularities prevent users viewing the content or using the website?

                                                  1. 1

                                                     Honestly, some features I probably would not use for anything. I personally still believe in the Hypertext Dream, where every client is reasonably accommodated, without serving up different content to each one, if possible.

                                                     If a unit causes everything to display in 1-pixel font in IE3, as happened with a different unit, I’m not going to use it, because why should anyone be denied the use of an old browser? Maybe they have an emotional attachment to it because they grew up with it, maybe they just appreciate its appearance, or maybe they’re just testing.

                                                    The amount of testing and tweaking needed to realize this dream may seem overwhelming at first, but it’s doable. And you already have all the old JavaScript books to guide you.

                                                    1. 2

                                                       Note the number is actually 99.85%, and 1.91% “untracked” browsers (it’s displayed confusingly). I also suspect that at least some portion of the older browsers are actually bots that just have an old UA header. So effectively, it’s very close to 100%.

                                                       I’m generally pretty receptive to the idea of supporting old browsers, but being backwards-compatible all the way to IE3/1996 strikes me as unfeasible for many projects, not least because for much of the 90s and early 00s pretty much all browsers were horribly buggy and full of ad-hoc features and unspecified behaviour.

                                                      We don’t write C programs that still work on 4.2BSD, so why should we write websites that still work on IE3?

                                                      1. 1

                                                        TL;DR: It’s not feasible for many, but it’s feasible for some. There’s value in it, mainly learning the lessons of the last 25 years of the web, being forced to design light and functional, and just plain old nostalgia, a force to be reckoned with, because geeking out with old software is fun.

                                                        You’re right, it’s probably not feasible for many projects. I wanted to know just how unfeasible it is. I remembered that browsers I’d used in the past (starting around NN3 time) were not very picky about whatever HTML they supported, as long as you didn’t pile on too much of it. I also remembered browsers back to IE4 being pretty performant at DOM manipulations, as long as you stuck to syntax they understand.

                                                        So I started with the most likely candidates I’d end up using: I have a couple of older iPads I’ve stopped upgrading at iOS 8 and iOS 9, and a Windows XP machine with IE6 and Office 2003. Why should my own website be inaccessible to me from these devices?

                                                         Well, each browser has its beauty, in my opinion, and I think it’s a shame to just throw them out and never use them again. In part, I was inspired by a digital-art exhibit at the MFA. There was a lot of modern stuff, with WiFi and interactivity, etc. There was also a room with a couple of beige boxes set up with Windows 95 and Netscape 3, which were set to display some web-art from the 90s. You were free to come over and browse around. Of course, it made sense to use period-accurate browsers.

                                                        They may not be much in the rendering department, but the beautiful UIs are a whole other story. Have a look at some screenshots, and tell me you don’t feel something when you get to a browser you recognize. Also, my websites tend to attract a technical crowd, and many people ask me about Lynx, so I tested with that as well. Well, once I made it work with Lynx, I found that it worked reasonably well with just about any browser I threw at it. Without JavaScript, of course. Each browser required a few small tweaks, but overall, HTML was living up to its original intent.

                                                        With JS enabled, older browsers would get very confused about things like === and foo = function() { }. JavaScript compatibility has been one of the biggest chunks of it so far, and I am still working on it. I have gotten rid of all instances of “new” syntax and restricted it to an external .js file which is loaded selectively, based on feature checks. In-page JS has had to be stripped of all > characters, because apparently, Mosaic considers that the end of an HTML comment.

                                                         And so on… It may not be feasible for many projects, but it is certainly feasible for mine. And I do wonder how Microsoft and Google don’t spend a bit of effort on it. Actually, google.com is one of the more accessible places on the Web. For all their push to adopt https everywhere, and drop http, which people seemed to have assumed was implied, at least they still allow older browsers to access their search, and if you add -inurl:https to your query, you might get some results you can visit in, say, IE4. Microsoft’s website, and msn.com, on the other hand, undoubtedly still visited by millions of IE6es and IE4s, and probably a handful of earlier versions, throw errors or don’t load.

                                                         Why doesn’t a company the size of Google spend a couple hundred grand on making their website load in every browser? I sometimes wonder, and the conclusion I’ve come to is that they don’t give a shit. They don’t care about the thinnest, longest tip of the long tail. They don’t care about pride in the quality of the product. And they don’t care about the Web itself; it’s just a platform they’re forced to deal with. But I do care, because I love browsers, and I love the Web.

                                                        I believe in compatibility, and I believe in the robustness principle. I think every browser is worth it, and that there’s value in studying them and using them, rather than forgetting history. We keep talking about diversity in browsing software, yet we ignore all the existing diversity. There are literally hundreds, if not thousands, of different browsers out there. And if they have a website they can be used on, maybe people will start giving them a try again.

                                                        I also wish to write a website which can last another 25 years of the Web. I think that designing a website which, theoretically, could have been working for the previous 25 years brings me closer to that goal.

                                                         Edit: I guess there’s another personal motivation for me. In my computing life, I’ve used many devices, and have many times been frustrated with “your browser is no longer supported”, when, as a web dev, I could not see a good reason not to support it. Often, these computers are in places like food banks, libraries, or someone’s home. There are also vision-impaired users out there, mobility-impaired users, and people like myself who prefer all-keyboard. For a while, without my own phone, I accessed my site through New York City’s “LinkNYC” kiosks via Google Translate, but that loophole was eventually closed. (What a loss that the entire city is peppered with web-capable devices, and none of them allow browsing the web, except for the kiosk website.) Every browser I make my site compatible with, on average, improves the possibility that a reader who was turned away at every other site finds mine welcoming, and that would be an honor.

                                              2. 2

                                                https://www.caniuse.com/#feat=rem

                                                All current versions including IE.

                                              1. 1

                                                Avoid package level state

                                                Seek to be explicit, reduce coupling, and spooky action at a distance by providing the dependencies a type needs as fields on that type rather than using package variables.

                                                 I’ve actually started doing the opposite of this in a bunch of cases. I discovered that a lot of my classes/modules were basically being used as singletons, so instead of binding functionality to types and requiring instantiation of the type, I just keep everything as globals internal to the module and then expose a public interface that encapsulates that module-level state. If something does require multiple instantiations I can always break it out into a struct/class later. A more detailed explanation of this concept is discussed in Brian Will’s Object-Oriented Programming is Good*. This form of module-oriented programming doesn’t work in every case, but I have personally found it useful.
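                                                 In C terms, the shape is something like this (counter is a hypothetical module; the static global is the module-level state, and the non-static functions are the public interface):

                                                  /* counter.c -- hypothetical module with internal state */
                                                  static long count;   /* invisible outside this file */

                                                  void counter_increment(void) { count++; }
                                                  long counter_value(void)     { return count; }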

                                                1. 10

                                                  I wonder how much security review maintainers actually do. Reviews are difficult and incredibly boring. I have reviewed a bunch of crates for cargo-crev, and it was very tempting to gloss over the code and conclude “LGTM”.

                                                  Especially in traditional distros that bundle tons of C code, a backdoor doesn’t have to look like if (user == "hax0r") RUN_BACKDOOR(). It could be something super subtle like an integer overflow when calculating lengths of buffers. I don’t expect a volunteer maintainer looking at a diff to spot something like that.
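                                                   For a concrete sketch of how subtle that can be (all names here are illustrative): an attacker-controlled count makes the size computation wrap, the allocation comes out tiny, and the loop then writes far past it.

                                                    #include <stdint.h>
                                                    #include <stdlib.h>

                                                    /* If count is attacker-controlled, count * sizeof(int) can wrap
                                                       to a small value; malloc then returns a tiny buffer and the
                                                       loop below writes past its end -- a heap overflow. */
                                                    int *make_table(size_t count)
                                                    {
                                                       int *t = malloc(count * sizeof(int));   /* may wrap! */
                                                       if (!t)
                                                          return NULL;
                                                       for (size_t i = 0; i < count; i++)
                                                          t[i] = 0;
                                                       return t;
                                                    }

                                                    /* The fix is one easily-missed comparison: */
                                                    int *make_table_checked(size_t count)
                                                    {
                                                       if (count > SIZE_MAX / sizeof(int))     /* would wrap */
                                                          return NULL;
                                                       int *t = malloc(count * sizeof(int));
                                                       if (!t)
                                                          return NULL;
                                                       for (size_t i = 0; i < count; i++)
                                                          t[i] = 0;
                                                       return t;
                                                    }

                                                   (On conforming implementations, calloc(count, sizeof(int)) performs this overflow check for you, which is one reason to prefer it for array allocations.)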

                                                  1. 4

                                                    a backdoor doesn’t have to look like if (user == “hax0r”) RUN_BACKDOOR()

                                                    Reminded me of this attempted backdoor: https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/

                                                    1. 3

                                                       I assume that as long as we keep insisting on designing complex software in languages that can introduce vulnerabilities, such software will contain such vulnerabilities (introduced intentionally or unintentionally).

                                                       I think integer overflow is a great example. In C & C++, unsigned integers wrap on overflow and signed integers exhibit UB on overflow. Rust could have fixed this, but overflow is unchecked by default in release builds for both signed and unsigned integers! Zig doubles down on the UB by making signed and unsigned integer overflow both undefined unless you use the overflow-specific operators! How is it that Rust doesn’t handle overflow safely by default?! (edit: My information on Zig was inaccurate and false, it actually does have checks built into the type-system (error system?) of the language!)

                                                      Are the performance gains really worth letting every addition operation be a potential source of uncaught bugs and vulnerabilities? I certainly don’t think so.

                                                      1. 6

                                                        Rust did fix it. In release builds, overflow is not UB. The current situation is that overflow panics in debug mode but wraps in release mode. However, it is possible for implementations to panic on overflow in release mode. See: https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md

                                                         Obviously, there is a reason for this nuanced stance: overflow checks presumably inhibit performance or other optimizations in the surrounding code.

                                                        1. 2

                                                           To clarify what I was saying: I consider overflow and UB to both be unacceptable behavior by my standard of safety. Saying that it is an error to have integer overflow or underflow, and then not enforcing that error by default for all projects, feels similar to how C has an assert statement that is only enabled for debug builds (well, really just when NDEBUG isn’t defined). So far the only language that I have seen which can perform these error checks at compile time (without using something like Result) is ATS, but that language is a bit beyond my abilities.

                                                           If I remember correctly, some measurements were taken and it was found that always enabling arithmetic checks in the Rust compiler led to something like a 5% performance decrease overall. The Rust team decided that this hit was not worth it, especially since Rust is aiming for performance parity with C++. I respect the team’s decision, but it does not align with the ideals that I would strive for in a safety-first language (although Rust’s primary goal is memory safety, not everything-under-the-sun safety).

                                                          1. 4

                                                            That’s fine to consider it unacceptable, but it sounded like you thought overflow was UB in Rust based on your comment. And even then, you might consider it unacceptable, but surely the lack of UB is an improvement. Of course, you can opt into checked arithmetic, but it’s quite a bit more verbose.

                                                            But yes, it seems like you understand the issue and the trade off here. I guess I’m just perplexed by your original comment where you act surprised at how Rust arrived at its current state. From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference. (Or rather, at least that was the hope as I remember it at the time the RFC was written.)

                                                            1. 2

                                                              Oh no I actually am glad that at least Rust has it defined. It is more like Ada in that way, where it is something that you can build static analysis tools around instead of the UB-optimization-wild-west situation like in C/C++.

                                                              From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference.

                                                               Yes I believe that was in the original RFC, or at least it was discussed a bunch in the GitHub comments, but “once technology erased the performance difference” is different than “right now in shipping code after the language’s 1.0 release”. I would say it is less of a surprise that I feel and more a grumpy frustration - one of the reasons I picked up (or at least tried to pick up RIP) Rust in the first place was because I wanted a more fault-tolerant systems programming language as my daily driver (I am primarily a C developer). But I remember being severely disappointed when I learned that overflow checks were disabled by default in release builds, because that ends up being only marginally better than everyone using -fwrapv in GCC/Clang. I like having the option to enable the checks myself, but I just wish it was universal, because that would eliminate a whole class of errors from the mental model of a program (by default).

                                                              :(

                                                        2. 3

                                                           With the cost, it depends. Even though overflow checks are cheap themselves, they have a cost of preventing autovectorization and other optimizations that combine or reorder operations, because abort/exception is an observable side effect. Branches of the checks are trivially predictable, but they crowd other branches out of branch predictors, reducing their overall efficiency.

                                                          OTOH overflows aren’t that dangerous in Rust. Rust doesn’t deal with bare malloc + memcpy. Array access is checked, so you may get a panic or a higher-level domain-specific problem, but not the usual buffer overflow. Rust doesn’t implicitly cast to a 32-bit int anywhere, so most of the time you have to overflow 64-bit values.

                                                          1. 3

                                                            How is it that none of these languages handle overflow safely by default?!

                                                             The fact that both signed and unsigned integer overflow is (detectable) UB in Zig actually makes Zig’s --release-safe build mode safer than Rust’s --release with regards to integer overflow. (Note that Rust is much safer however with regards to memory aliasing)

                                                            1. 1

                                                              I stand corrected. I just tested this out and it appears that Zig forces me to check an error!

                                                              I’ll cross out the inaccuracy above…and I think I’m going to give Zig a more rigorous look… I’m quite stunned and impressed actually. Sorry for getting grumpy about something I was wrong about.

                                                            2. 2

                                                               I don’t think so either. I did a benchmark of overflow detection in expressions and it wasn’t that much of a time overhead, but it would bloat the code a bit, as you have to not only check after each instruction, but also only use instructions that set the overflow bit.

                                                          1. 6

                                                             I don’t understand the issue people have with dependencies per se.

                                                             As in, I think the whole fear of dependencies comes from the time of bad language package managers & repositories (e.g. pypi&pip, or whatever the heck people used in the dark days when pypi&pip was considered a good tool).

                                                             I think the other part comes from understanding dependencies in the C++/C sense, where every time I added a library I hadn’t used before I’d have to spend 20 minutes figuring out what’s the best way to include it and how to make it compile.

                                                             Also, fat binaries should just become a thing. The memory and storage issues that made fat binaries a bad option are just not a thing anymore; where they still are, it’s on niche systems where you can just use distros specifically made for that.

                                                             As in, things like openssl should be dynamically linked, I agree with that. However, the category of things which should be dynamically linked is tiny, far outweighed by the 99% of libraries which could just be static. Never has one uttered the words “Oh gee, I’m sure glad this package dynamically links to lboos, otherwise it might cost me an extra 5Mb of disk space and leave me with the old version that is 5.6% slower on this specific benchmark”.

                                                             I still hold to the belief that containers have basically become a thing due to a phobia of fat binaries, and the fact that most people don’t seem to even be aware that fat binaries would fix most or probably all of their dependency-related issues.

                                                             I’d go as far as saying that things such as databases should be included in the binaries (unless there’s a reason for the database to be up and running when the program itself is shut down).

                                                            1. 2

                                                              As in, I think the whole fear of dependencies comes from the time of bad language package manager & repository (e.g. pypi&pip, or whatever the heck people used in the dark days when pypi&pip was considered a good tool).

                                                              Counterpoint: At $WORK we use Rust for a Modbus command line tool. The entire application is less than two hundred lines of code and has three dependencies (modbus, clap, and byteorder). When transitive dependencies are brought in those three dependencies become ~20. Building a release version of the application in GitLab CI takes fifteen minutes. Building an application with four times the LOC in C using only system dependencies takes seconds. I don’t fear dependencies because Cargo is a bad language package manager (it’s probably the best I’ve ever used). I fear dependencies because every single dependency could lead to dramatic compilation-time-bloat.

                                                              1. 3

                                                                Fifteen minutes?!? Is that really all compilation time? How long does it take locally? ripgrep has ~70 dependencies and a release build compiles in under a minute for me. Maybe two minutes when I use my laptop from 2012.

                                                                1. 2

                                                                   It’s compilation time plus pull-in-dependencies-from-cargo time. It may also include downloading the Rust toolchain, but I’m no longer at my work machine so I don’t actually have the .gitlab-ci.yml or Dockerfile in front of me to check that.

                                                                  I did a comparison in this comment running builds of both programs on my local machine. The C program took 0m0.137s locally, but takes an order of magnitude longer in the CI job. The Rust program took 0m36.862s locally, so presumably there is a similar scale factor for the Rust application’s CI job.

                                                                2. 2

                                                                   That sounds unrealistically long; how long does it take you to build locally if all the caches are in place and you just modify a few files?

                                                                   If it takes 15 minutes to build from scratch, then that’s irrelevant in and of itself, because you don’t need to run your CI build like a madman… Unless you can’t test locally, but then the problem is that you can’t test locally and you are using your CI as a dev testing tool.

                                                                  1. 1

                                                                    Building the two examples above locally from scratch:

                                                                    Rust tool (3 primary dependencies):

                                                                    $ time cargo build --release
                                                                    # build output fluff...
                                                                    # ...
                                                                    # ...
                                                                    real	0m36.862s
                                                                    user	0m59.874s
                                                                    sys	0m1.642s
                                                                    
                                                                    $ cloc src/
                                                                           2 text files.
                                                                           2 unique files.
                                                                           0 files ignored.
                                                                    
                                                                    github.com/AlDanial/cloc v 1.74  T=0.01 s (317.3 files/s, 26812.3 lines/s)
                                                                    -------------------------------------------------------------------------------
                                                                    Language                     files          blank        comment           code
                                                                    -------------------------------------------------------------------------------
                                                                    Rust                             1             13              0            112
                                                                    YAML                             1              0              0             44
                                                                    -------------------------------------------------------------------------------
                                                                    SUM:                             2             13              0            156
                                                                    -------------------------------------------------------------------------------
                                                                    

                                                                    C tool (0 primary dependencies):

                                                                    $ time make clean all
                                                                    # build output fluff...
                                                                    # ...
                                                                    # ...
                                                                    real	0m0.137s
                                                                    user	0m0.094s
                                                                    sys	0m0.044s
                                                                    
                                                                    $ cloc src/
                                                                           4 text files.
                                                                           4 unique files.
                                                                           0 files ignored.
                                                                    
                                                                    github.com/AlDanial/cloc v 1.74  T=0.01 s (397.6 files/s, 91740.2 lines/s)
                                                                    -------------------------------------------------------------------------------
                                                                    Language                     files          blank        comment           code
                                                                    -------------------------------------------------------------------------------
                                                                    C                                3            171            109            614
                                                                    C/C++ Header                     1             11              0             18
                                                                    -------------------------------------------------------------------------------
                                                                    SUM:                             4            182            109            632
                                                                    -------------------------------------------------------------------------------
                                                                    

                                                                    With changes made to one file, both the Rust and C applications rebuild in less than 0.2 sec. I timed it at 0m0.205s real for the Rust application and 0m0.132s real for the C application, but both of those are low enough that I would consider their perceived compile time to be basically zero.


                                                                    If it takes 15 minutes to build from scratch, then that’s irrelevant in and of itself, because you don’t need to run your CI build like a madman.

                                                                    It is relevant to me. If a CI job needs to run every time a change is getting merged into our development branch then I care a lot about how long it takes to run CI with those changes in place. I don’t want to have to wait 15 min to fully context switch to another task.

                                                                    I do not mean to sound ungrateful for the many quality open source libraries that allow us to write software without reinventing the wheel for every project; I just think that we should acknowledge that a dependency-bloat problem exists in modern software culture. If we do decide that the trade-offs involved in taking on that bloat end up being ultimately worth it then great, but I don’t want to pretend that the problem isn’t there.


                                                                    Edit: Please don’t take this unscientific test as a comparison of the C and Rust programming languages. These applications do completely different things and were merely chosen because they had small code bases (< 1000 lines of code) and happened to show a difference between production applications that use zero and a non-zero number of dependencies outside of each language’s standard library.

                                                                    1. 2

                                                                      Hmh, fair enough, the way I see it time spent in CI is basically nothing off my back. As in, I expect CI to always pass since I test locally, the few situations where it doesn’t are the exception.

                                                                      If for some reason that’s not the case with your project though, you could get around it by having an OS image for your CI machine that just includes the cached Rust libraries… I mean, that’s why C compilation is fast to some extent, because you link against already-compiled libraries… It just so happens that all standard OSes come with those precompiled libraries.

                                                                      The simple solution for this though is just having a dedicated build or build+test machine, and everything gets cached there and stays there.

                                                                      There’s a small overhead either way, but that overhead is unrelated to fat vs. dynamically linked binaries or Rust vs. C.

                                                                      1. 1

                                                                        Ya, on the topic of Rust vs C: I added an edit at the end of my comment above to make it clear that the test is not meant to be a language comparison.


                                                                        If for some reason that’s not the case with your project though, you could get around it by having an OS image for your CI machine that just includes the cached Rust libraries… I mean, that’s why C compilation is fast to some extent, because you link against already-compiled libraries… It just so happens that all standard OSes come with those precompiled libraries.

                                                                        You kind of hit the nail on the head. For C/C++ it’s a lot easier because those libs come compiled and pre-packaged in most distros (basically what the article mentioned). And for the case at $WORK that often ends up being a major win. If we can apt-get everything we would ever need in 30 seconds and not have to worry about caching or keeping machines up to date then that provides a substantial benefit to my team. I think you should always use the right tool for the job, and that tool is definitely not always going to be C or C++, but it certainly is nice not needing to worry about caching dependencies on a dedicated machine.
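
                                                                        To make the “already-compiled libraries” point concrete, here is a minimal C sketch; the file name is invented, and it assumes a distro-packaged zlib development package (e.g. zlib1g-dev on Debian) is installed. The build is near-instant because libz itself is never recompiled; you compile a few lines and link:

                                                                        /* zver.c - print the version of the system zlib. */
                                                                        #include <stdio.h>
                                                                        #include <zlib.h>

                                                                        int main(void) {
                                                                            /* zlibVersion() is part of zlib's stable public API. */
                                                                            printf("linked against zlib %s\n", zlibVersion());
                                                                            return 0;
                                                                        }

                                                                        $ cc zver.c -lz -o zver    # links the prebuilt libz; nothing to rebuild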

                                                              1. 4

                                                                We need to stop writing drivers in such ad-hoc and low-level ways. This is the second large effort to reverse-engineer Mali, after Lima, and the primary artifacts produced by these projects are piles of C. Long-term maintainability can only come from higher-level descriptions of hardware capabilities.

                                                                I’m a little frustrated at the tone of the article. I was required to write this sort of drivel when I was applying for grants, but by and large, the story of how I wrote a GPU driver is simple: I was angry because there wasn’t a production-quality driver, so I integrated many chunks of code and documentation from existing hard-working-but-exhausted hackers to make an iterative improvement. The ingredients here must have been quite similar. The deliberate sidelining of the Lima effort, in particular, seems rather rude; Panfrost doesn’t mark the first time that people have been upset with this ARM vendor refusing to release datasheets, and I think that most folks in the GPU-driver-authorship world are well aware of how the continual downplaying of Lima is part of the downward spiral that led to their project imploding.

                                                                I don’t think I ever hid that r300, r500, and indeed radeonhd, from the same author as Lima, were all big influences on my work, and that honest acknowledgement of past work is the only way to avoid losing contributors in the future.

                                                                1. 15

                                                                  I’m a little frustrated at the tone of the article. I was required to write this sort of drivel

                                                                  Is it not possible that the tone is a true, unforced reflection of the author’s enthusiasm? That’s how I read it. Maybe that’s just naive of me.

                                                                  1. 9

                                                                    Long-term maintainability can only come from higher-level descriptions of hardware capabilities.

                                                                    Is there any source you can provide indicating that this would actually work? From my understanding, creating meaningful abstractions over hardware is an extraordinarily tough problem to solve. For example, device trees work as a general-purpose description of hardware, but still need a lot of kernel-space driver fluff to get anything talking correctly. What kind of higher-level description do you think would work in this space?

                                                                    FYI: I have zero experience in graphics driver land so I don’t actually know anything about this domain. ¯\_(ツ)_/¯

                                                                    1. 1

                                                                      I read it not as “assembling a driver from common code components and interfaces in c”, but as “write a high-level description of the hardware and api, from which implementations in C or Rust or whatever can be generated”.

                                                                      But maybe we’re both reading it wrong :)

                                                                    2. 4

                                                                      Isn’t the primary artefact produced a set of instructions for driving a GPU? I would think that comes first, before the piles of C.

                                                                      Long-term maintainability can only come from higher-level descriptions of hardware capabilities.

                                                                      This seems like an extraordinary claim. “Can only come from” is a very strong statement.

                                                                      1. 4

                                                                        GPU drivers are not required to have a single shape. Indeed, they usually have whatever shape is big and complex enough to fit their demanded API. The high-level understanding of GPUs is what allowed the entire Linux GPU driver tree to be refactored around a unified memory manager, and what allowed the VGA arbiter to be implemented. High-level descriptions, in particular datasheets, are already extremely valuable pieces of information which are essential for understanding what a driver is doing. At the same time, the modern GPU driver contains shader compilers, and those compilers are oriented around declarative APIs which deal with hardware features using high-level descriptions of capabilities.

                                                                        Let me show you some C. This module does PCI ID analysis and looks up capabilities in a table, but it’s done imperatively. This module does very basic Gallium-to-r300 translation for state constants, but rather than a table or a relation, it is disgustingly open-coded. (I am allowed to insult my own code from a decade ago.) I won’t lie, Panfrost has tables, but this is, to me, only the slightest of signs of progress.
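
                                                                        To illustrate the table-driven style being argued for, here is a minimal C sketch; the device IDs and capability fields are invented for illustration. The point is that capabilities live in declarative data, and the lookup is one generic loop instead of open-coded branches per device:

                                                                        #include <stddef.h>
                                                                        #include <stdint.h>

                                                                        /* Hypothetical per-device capabilities, kept as data. */
                                                                        struct gpu_caps {
                                                                            uint16_t pci_id;    /* invented device IDs */
                                                                            unsigned num_pipes;
                                                                            int      has_hiz;
                                                                        };

                                                                        static const struct gpu_caps caps_table[] = {
                                                                            { 0x4144, 1, 0 },
                                                                            { 0x4E44, 2, 1 },
                                                                        };

                                                                        /* One generic lookup replaces a chain of per-device if/else blocks. */
                                                                        static const struct gpu_caps *lookup_caps(uint16_t pci_id) {
                                                                            for (size_t i = 0; i < sizeof caps_table / sizeof caps_table[0]; i++)
                                                                                if (caps_table[i].pci_id == pci_id)
                                                                                    return &caps_table[i];
                                                                            return NULL; /* unknown device */
                                                                        }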

                                                                        1. 3

                                                                          Ah, I see what you mean.

                                                                          Higher-level description means genericity, which can lead to bloated code that tries to deal with the future while impairing the present. Keeping the proper balance between a high-enough description and a low-enough, efficient implementation is a challenge.

                                                                          Helping the maintenance effort by lending hands to refactor with a fresh mindset is my naive view of how to fight this, but I know this is falling prey to the rewrite fallacy.

                                                                    1. 14

                                                                      An important point that’s getting downplayed here is that this isn’t just a Firefox feature. It’s a library that you can pull into any Cargo-based project and that provides header files and bindings to C. We’ve already seen how this library-ification of “Firefox” features can help other projects, both when librsvg adopted Mozilla’s CSS implementation and in the nearly ubiquitous* use of Servo-owned libraries like url.

                                                                      If you like using apps that interoperate with the web without being implemented as browser apps, then this should be really good news in general. It gives you access to web standards without having to write your app in JavaScript and without having to implement it all yourself.

                                                                      * Across the ecosystem of Rust applications, of course. Firefox is about the only C++ application I know of that directly depends on rust-url.
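
                                                                      For a sense of what consuming such a library looks like from the C side, here is a hypothetical sketch; the type and function names below are invented for illustration and are not the actual API of any Servo library, though cbindgen-style bindings commonly follow this opaque-pointer create/query/free shape:

                                                                      #include <stdio.h>

                                                                      /* Hypothetical C ABI exported by a Rust library; these declarations
                                                                       * are invented for illustration and would normally come from a
                                                                       * generated header. */
                                                                      typedef struct rust_url rust_url_t;
                                                                      extern rust_url_t *rust_url_parse(const char *input);
                                                                      extern const char *rust_url_host(const rust_url_t *url);
                                                                      extern void rust_url_free(rust_url_t *url);

                                                                      int main(void) {
                                                                          rust_url_t *u = rust_url_parse("https://example.com/path");
                                                                          if (u) {
                                                                              printf("host: %s\n", rust_url_host(u));
                                                                              rust_url_free(u); /* ownership returns to the Rust allocator */
                                                                          }
                                                                          return 0;
                                                                      }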

                                                                      1. -2

                                                                        Of course that also means that now anything that depends on librsvg now depends on Rust, which is a massive and complicated dependency that is difficult to package and maintain, and isn’t very portable. There are platforms that major GNU+Linux distros support that LLVM doesn’t, and as a result librsvg is now being held back to the last non-Rust version on those platforms. As much as I like Rust as a language, it’s a bad citizen when it comes to integrating well into the rest of the ecosystem. If Rust people want what they seem to want, which is to replace C with Rust as the go-to systems programming language, they need to play nicely with the rest of the world.

                                                                        LLVM has been a blessing and a curse for Rust, I think. On one hand, it’s made developing Rust itself much easier. But it’s also been a major contributing factor towards Rust’s biggest problems: low portability, abysmal compile times, and a bit too much abstraction (because you can layer a lot of abstractions on top of each other in Rust without significantly hurting runtime performance).

                                                                        1. 9

                                                                          This is a pretty uncharitable reading. Indeed, we work together with all package maintainers that actually get in touch with us and want help. We have very good experiences with the Debian folks, for example.

                                                                          LLVM is a major block in this, but there’s only so much we can do - work on new backends is underway, which may ease the pain. When Rust started, GCC was also not in a state where you would want to write a frontend for it the way you could for LLVM. I would love to see a rustc version backed by GCC or cranelift come to a state that makes writing codegen backends easier (this is an explicit goal of the project).

                                                                          “Bad citizen” implies that we don’t appreciate those problems, but we all have limited hands and improvements are gradual. Indeed, Rust has frequently been the driver behind new LLVM backends and has driven LLVM to support targets outside of the Apple/Google/mobile spectrum that LLVM is traditionally aimed at. It’s not like people who can write a good-quality LLVM backend are easy to find and to motivate to do that in their free time. A lot of backends need vendor support to be properly implemented. We actively speak to those vendors, but hey, the week has a limited number of hours, and it’s not like vendor negotiation is something you want to do in your free time either.

                                                                          librsvg did weigh these pros and cons and decided that the modularity and the ability to use third-party packages are worth making the jump. These moves are important, because no one will put effort behind porting a programming language without some form of pain. Projects like this become the motivation.

                                                                          1. 0

                                                                            This is a pretty uncharitable reading. Indeed, we work together with all package maintainers that actually get in touch with us and want help. We have very good experiences with the Debian folks, for example.

                                                                            I never said or implied that Rust’s developers or users have anything but good intentions. Bad citizen doesn’t mean that you don’t appreciate the problems; it means that the problems exist.

                                                                            Like it or not, Rust doesn’t play well with system package management. Firefox, for example, requires the latest stable Rust to be compiled, which means that systems either need to upgrade their Rust regularly or not update Firefox regularly. Neither is a good option. Upgrading Rust regularly means having to test and verify every Rust program in a distro’s repositories every 6 weeks, which becomes a bigger and bigger effort as more and more Rust packages land in those repositories. What happens if one of them no longer works with newer stable Rusts? It’s not like Rust is committed to 100% backwards compatibility. And not upgrading Firefox means missing out on critical security fixes.

                                                                            Rust needs a proper distinction between its package manager and its build system. Conflating them both into a Cargo-shaped amorphous blob is harmful.

                                                                            LLVM is a major block in this, but there’s only so much we can do

                                                                            Just don’t depend on LLVM in the first place. Most compiled languages have their own backends. It’s not like Rust has saved any real effort in the long term anyway, as they’re having to reimplement a whole lot of optimisation at the MIR level to avoid generating so much LLVM IR.

                                                                            These moves are important, because no one will put effort behind porting a programming language without some form of pain. Projects like this become the motivation.

                                                                            That’s just arrogant imo.

                                                                            1. 3

                                                                              Just don’t depend on LLVM in the first place.

                                                                              Because Rust would definitely support more CPU architectures if they had to build out the entire backend themselves. Just like DMD and the Go reference implementation, both of which lack support for architectures like m68k (the one that caused so much drama at Debian to begin with) and embedded targets like AVR and PIC.

                                                                              I’d be more interested in a compile-to-C version, or a GCC version. Those might actually solve the problem.

                                                                          2. 1

                                                                            @milesrout, could you point to some examples backing up your comment about librsvg being held back to the last non-Rust version on those platforms, and perhaps some commentary/posts from package maintainers on the subject? Not agreeing or disagreeing here; I just want to get some insight from package maintainers before I form an opinion.


                                                                            On the topic of compile times, I agree that LLVM is a blessing and a curse. Clang is my preferred C/C++ compiler, but I have noticed that compiling with -O2 or -O3 on projects even as small as ~10 kloc takes substantial time when compared to a compiler such as TCC. Sure, the generated machine code is much more performant, but I do not think the run-time performance is always worth the build-time cost (for my use cases at least).

                                                                            I haven’t written enough Rust to know how this translates over to rustc, but I imagine that a lot of the same slowness in optimizing C++ template instantiations would appear with Rust generics.

                                                                            1. 3

                                                                              FYI (I’m familiar with the internals there) what you want for quick compiles is to not run -O2 or -O3.

                                                                              Clang (and LLVM) emphasise speed very much, but do include some very expensive analysis and transformation code, and some even more expensive sanity checks (I’ve seen one weeklong compile). If you want quick compiles, don’t run the passes that do the slow work. TCC doesn’t run such extra slow things. If you want fast output (or various other things) rather than quick compiles, then do run the appropriate extra passes.

                                                                              If you want both, you’re out of luck, because doing over 30 kinds of analysis isn’t quick.

                                                                              1. 1

                                                                                Hey @arnt, thanks for the reply.

                                                                                I understand that -O2 and -O3 trade off compilation speed for additional optimization passes. Perhaps TCC was actually a bad example, because it doesn’t have options for higher levels of optimization.

                                                                                My issue is more that the actual time to perform such additional passes is extraordinarily high, period, and that if one wants to make meaningful changes to performance-sensitive code, the rebuild process leads to a less-than-stellar user experience. The LLVM team has done amazing work that has led to absolutely astounding performance in modern C/C++ code. There are much smarter people than I who have evaluated the trade-offs involved in the higher optimization levels, and I trust their judgement. It is just that, for me, the huge bump in compilation time from -O0 or -O1 to -O2, -O3, or -Ofast is painful, and I wish there were an easier path to getting middle-of-the-road performance for release builds with good compilation times.
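
                                                                                One low-tech way to find that middle ground for a given codebase is to time the same translation unit at each level (a sketch; the file name is illustrative, and clang’s -O1 is roughly the middle setting being wished for):

                                                                                $ time clang -O0 -c hot_path.c
                                                                                $ time clang -O1 -c hot_path.c
                                                                                $ time clang -O2 -c hot_path.c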

                                                                                1. 2

                                                                                  You have performance-sensitive code in .h files, so very many files need to be recompiled? BTDT and I feel your pain.

                                                                                  In my own compiler (not C++) I’m taking some care to minimise the problem by working on a more fine-grained level, to the degree that is possible, which is nonzero but…

                                                                                  One of the major LLVM teams uses a shared build farm: A file is generally only recompiled by one person. The rest of the team will just get the compiled output, because the shared cache maps the input file to the correct output. This makes a lot of sense to me — large programs are typically maintained by large teams, so the solution matches the problem quite well.
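
                                                                                  A single-machine cousin of that shared cache is ccache, which memoizes object files keyed on the preprocessed input, so unchanged files are never truly recompiled (a sketch; it assumes ccache is installed and the Makefile respects $(CC)):

                                                                                  $ make CC="ccache clang"                  # first build populates the cache
                                                                                  $ make clean && make CC="ccache clang"    # recompiles now mostly hit the cache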

                                                                        1. 10

                                                                          2D C99 game engine work, around 2200 LOC now. I made the animation loop system generic last night, working from its prototype, so I’m tacking on image rendering and should have inefficient-but-working looping animations done by the end of the weekend (no texture atlases yet). I think that’s the starting point for actually trying to put together some Atari- or NES-style demo games. I have this idea in my head that I’m going to have cancellable animations in-game, but I don’t want to go down that rabbit hole right now, even though I have a prototype of that working.

                                                                          1. 1

                                                                            Is it going to be open source / do you have a link to a public repository for it? This sounds like a cool project and I would love to check it out.

                                                                            1. 1
                                                                          1. 5

                                                                            I would highly recommend uFlash for anyone who prefers a command line development environment over the provided editors on the micro:bit website.
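
                                                                            For anyone who hasn’t tried it, the basic flow is a single command; the script name is illustrative, and it assumes a micro:bit is attached over USB:

                                                                            $ pip install uflash
                                                                            $ uflash hello.py    # embeds the script in a MicroPython .hex and copies it to the device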

                                                                            1. 3

                                                                              I’m trying for the 3rd time to learn Rust with the Klabnik book and exercism.io’s exercises, and well, I really suck at it. Is it me, or is this language really more difficult than it should be? My main languages are C, golang, php, python usually… I’m starting to believe I’m too old for learning.

                                                                              1. 2

                                                                                Is it me, or is this language really more difficult than it should be?

                                                                                Whether it’s more difficult than it should be is hard to answer. Some other attempts at new languages aim for easier-to-use concepts while ensuring the same safety, but right now only Rust can bring you both safety and performance.

                                                                                Difficult? Yes, it is, especially if you learn Rust in the evenings while you spend your days on easier languages. But I think it’s worth it. If it’s any consolation: most developers found it really hard to learn until, progressively, it started to fall into place.

                                                                                Suggestion: maybe start using it without bothering to formally learn it? That’s what I did, and I found it easier to learn by practicing and being corrected by the compiler (more usually the borrow checker).

                                                                                1. 2

                                                                                  I had that same feeling of “Is it me, or is this language really more difficult than it should be?” when I tried to learn Rust a few years ago. I see why a lot of people enjoy the language, but it wasn’t my cup of tea.

                                                                                  1. 2

                                                                                    Unfortunately it seems to take most people at least one bounce off of it, sometimes several, as you have experienced… Rust’s type system borrows a lot of ideas from OCaml/F#, which are a pretty big change in the way of doing things from the languages you’ve listed, so that might be one level of adjustment to get used to already. Then, between the type system, borrow checker, traits, iterators, generics, and so on, there’s a lot of stuff going on in Rust, and most of it gets hidden by the type inference so that you don’t have to constantly bump into it (provided you already know how it works).

                                                                                    What worked for me was two things: first, start by writing a program way simpler than you think you can handle; for me it was just Conway’s Life. Second, get in one of the various chat rooms of one kind or another, find the beginners channel, and ask for help as much as possible. A lot of Rust’s complexity is optional, but things like the standard library use a lot of it, so it’s common for new people to feel like they have to overdesign things.

                                                                                    1. 1

                                                                                      I also tried, and “bounced” (even though I know many languages, and was very good at C++ some time ago). I tried the “start using it and learn as I go” approach, but got overwhelmed by nuance and felt kinda betrayed when trying to rely on error-message hints, which occasionally turned out not to work or to contradict themselves (“You are doing A; maybe try B instead?”, then after doing B: “You are doing B; maybe try A instead?”). However, I hope to revisit it in the future, maybe more than once. I can’t count how many times I bounced off learning vim, but when I finally had to, I went on to love it. So, keeping fingers crossed for you!!!

                                                                                      Also, one tutorial I found recently that I want to read (maybe you will also find it interesting) is about writing a roguelike game in Rust.

                                                                                      1. 1

                                                                                        I’ve been learning Rust too, with the main book, for about a month and a half, and it is a doozy. I have been doing every example in every chapter and tweaking it to make sure I understand. That has helped. I started a project in Rust, but I am realizing the value of working through the whole book, so I put that on hold for now.

                                                                                      1. 2

                                                                                        Working on a C documentation generator and accompanying “making-of” blog post(s). The code for part 1 is complete. The WIP blog post for part one is nearly complete and I hope to publish today or tomorrow. The rest of the week will probably be spent working on the code for part 2.

                                                                                        1. 7

                                                                                          I find it curious that dotfiles are among the things listed by the author that don’t scale. I’ll quote Steve Losh because he says it better than I ever could:

                                                                                          I can count on my balls how many times I’ve sat down to program at someone else’s computer in the last five years. It just never happens.

                                                                                          1. 3

                                                                                            I’ve done it. Either because it’s just Peer Programming, or because of a client’s Misplaced Paranoia.

                                                                                            1. 2

                                                                                              I’m equally surprised to see static blog generators there. Sure, some generators have slow build times for large sites… and many others don’t. If anything they scale better than CMSes, because you only build the blog every so often, yet it needs no maintenance and can be served to a large crowd of visitors per minute from free or dirt-cheap hosting.

                                                                                              I generally agree with the idea that we should be solving problems for everyone whenever possible, but some things just can’t have universal “good” defaults. Highly domain-specific example: MuseScore has no default shortcut for “toggle concert pitch”. For people writing for woodwinds, having one is a real time saver. Everyone else usually has no idea what on earth “concert pitch” is. People writing different kinds of music can benefit a lot from simpler shortcuts for their common tasks. I bet the same goes for many other applications: if the default shortcut for a thing you do every minute is Ctrl-Alt-Meta-Escape-Super-F14, you should rather change it and add it to your dotfiles than put up with it or argue with people whose needs are different that they should cater to yours.

                                                                                              1. 1

                                                                                                I’m equally surprised to see static blog generators there.

                                                                                                This surprised me too, until I read a comment where the author explained their rationale:

                                                                                                If the problem you’re solving is “I want to have a website to post my articles on”, then I think the solution should probably not involve git, local builds from the terminal, or CNAME configs to get a custom domain.

                                                                                              2. 1

                                                                                                Agreed. I mean, it’s also not that I’m a clueless fool when I work with other people’s computers. It may take a bit longer, but I don’t see the problem.

                                                                                                This is not really like handing your hammer to another person on a construction site. This is more like having to put on their shoes and trousers, because the hammer is not the problem.

                                                                                                1. 1

                                                                                                  Ya, that seemed odd to me. I have a dotfiles directory where I store configurations for the software I use most often. There has never been a time during machine setup or server configuration when running setup-dotfiles.sh has not given me the exact environment I like, customizations and all. It’s not like Vim is software that introduces a lot of breaking changes.
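
                                                                                                  The core of such a script can be just a handful of symlinks. A minimal sketch (the file list is illustrative):

                                                                                                  $ cat setup-dotfiles.sh
                                                                                                  #!/bin/sh
                                                                                                  # Symlink each tracked dotfile from this repo into $HOME.
                                                                                                  for f in vimrc bashrc gitconfig; do
                                                                                                      ln -sf "$PWD/$f" "$HOME/.$f"
                                                                                                  done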

                                                                                                  1. 1

                                                                                                    I’ve had to jump on a cow-orker’s workstation to help diagnose a problem, and man, is it painful when nothing works like I expect it to. And the customizations I have aren’t that many (in fact, I tend to remove default settings in bash), but I’ve been using said settings for over 20 years now.

                                                                                                    The problem I see with the author’s approach is either fighting for change (what if they reject it?) or just living with the ever-changing set of defaults (which in my experience destroys any hope of a good long-term workflow developing).

                                                                                                  1. 18

                                                                                                    I feel resource constraints are gone now. People built apps to send text back and forth that use 1.5 GB of memory with a constantly active CPU.

                                                                                                    I hate it.

                                                                                                    1. 8

                                                                                                      I do not think resource constraints are gone, but I do believe there is a growing separation between software that is resource-aware and software that is not. Right now my system monitor is reporting the memory usage of the five programs I am using (at the moment) as:

                                                                                                      • chromium-browser : 431.7 MB
                                                                                                        • Approximately 30 tabs open, most of which are inactive.
                                                                                                      • firefox : 352.3 MB
                                                                                                        • One tab open, active.
                                                                                                      • spotify : 324.6 MB
                                                                                                      • mate-terminal : 45 MB
                                                                                                      • vim : 12.1 MB

                                                                                                      Comparing Vim to Firefox is like comparing apples to oranges, and I think it would be ridiculous to say that Firefox requiring ~30x as much memory makes it ~30x worse an application than Vim. But I do feel that the divide between these two is much greater than the divide between a web browser and a text editor was twenty years ago. I don’t know whether that is a good thing or a bad thing overall, or if it really even matters in the grand scheme of things, but it does feel different.
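
                                                                                                      (For anyone wanting to reproduce this kind of snapshot from a terminal: GNU ps can sort processes by resident set size, reported in kilobytes.)

                                                                                                      $ ps axo rss,comm --sort=-rss | head -n 6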

                                                                                                      1. 9

                                                                                                        The Firefox vs Vim comparison might be a bit unfair, given how people have basically made the browser an operating system, so the browser needs to provide more or less everything. But then compare Spotify (a media player, only capable of playing one song at a time, plus playlists and some browsable database) with a local-library player or an MPD client, and you realize what a mess it is. But of course the times when you could build your own FOSS clone of ICQ or Spotify are mostly gone.

                                                                                                        1. 6

                                                                                                          I think your comparison of Spotify vs $OTHER_MEDIA_PLAYERS is much better than my Firefox vs Vim comparison.


                                                                                                          I understand where you are coming from when you say that browsers have basically been made into an operating system, but I do think it is important to note that current browsers offer significantly less functionality than the OSs I use on a daily basis, at the cost of equal or greater memory usage. It doesn’t seem like you are trying to hold a particularly strong position that BROWSER == OS, but if such a position were strongly asserted, I would contend that a noticeable gap between functionality and memory footprint exists. That’s just my opinion though.

                                                                                                          1. 1

                                                                                                            I think modern browsers offer functionality roughly comparable to minimalistic OSes, except resources are very virtual and sandboxed.

                                                                                                          2. 0

                                                                                                            So, we’d compare Firefox to Windows or Linux when they’re idle. Back when I checked that, the Ubuntu builds were around the same size in memory. So, that fits.

                                                                                                        2. 6

                                                                                                          I just posted something in a web editor and it has ~500ms input lag (rough estimate; certainly far too much to be comfortable). Resource constraints are far from gone, people just accept stuff like this for some reason 🤷‍♂️

                                                                                                          1. 4

                                                                                                            Agreed. What has increased every year is our tolerance for broken stuff (and requiring a ludicrous amount of resources is one such example).

                                                                                                            The only thing that changed is that people managed to stack the pile of shit that is software development even higher. I can’t wait until that tower comes tumbling down.

                                                                                                            Recommended talk.

                                                                                                            1. 2

                                                                                                              Software is immutable. Why must the tower crumble?