Pretty much matches my thoughts on why people should learn C, even if they don’t use it. And by learn, I mean complete at least one non-trivial project, not just a few exercises.
In many languages, a loop that concatenates (sums) a sequence of integers looks a lot like one that concatenates a sequence of strings. The run time performance is not similar. This is obvious in C.
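A minimal C sketch of that contrast (helper names invented here, and the output buffer is assumed to be big enough): the integer loop does constant work per element, while the string version, written the “same” way, re-scans the whole result on every iteration.

    #include <string.h>

    long sum_ints(const int *xs, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += xs[i];          /* constant work per element: O(n) overall */
        return total;
    }

    /* strcat has to walk to the end of out on every iteration,
       so this loop is O(n^2) in the total length. */
    void concat_strings(char *out, const char *const *xs, size_t n) {
        out[0] = '\0';
        for (size_t i = 0; i < n; i++)
            strcat(out, xs[i]);
    }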
I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.
There’s a place for portable assembler that’s a step above worrying about whether the destination is the left or right operand and what goes in the delay slot and omg so many trees. And whether it’s arm or Intel or power, all computer architectures work in similar fashion, so it makes sense to abstract that. But we shouldn’t abstract away how they work into something different.
“I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.”
Rust helped me understand how computers work much better than C (I’ve learned both C and C++ at Uni and had to implement some larger projects in them). It just makes explicit a lot of the semantics that you need to know anyway in C. It keeps strings as painful as they are :). The thing that’s really lacking - and I agree on that fully - is any kind of structured material covering all those details. If you really want to mess around with a computer on a low level - and not write a tutorial on how to mess around with a computer on a low level with your chosen language - C is still the best choice and will remain so.
Most arguments about C, though, at a closer look, end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#. I agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it out to be.
I really agree with the idea that Rust makes explicit a long catalog of things that one can do in C, but should not*.
“Most arguments about C, though, at a closer look, end up as ‘it’s there, it’s everywhere’. The same argument can be made about Java or C#.”
The missed point here is that important parts of the Java or C# toolchains or runtimes are written in C or C++; the argument rests on C (for the time being still “the portable assembly language”) being foundational, not just ubiquitous. C is almost always present, right above the bottom of the (technological) stack.
*Unless interacting with hardware / doing things that are inherently unsafe.
On further reflection, I think I missed @tedu’s point, which is something more like, “C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C.”
Which would probably be true in the absence of aggressive UB optimizations. But, if horses were courses…
“C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C if we assume it was compiled with -O0”
Here’s an example of something I ran into the other day that’s easy to do in C but unnecessarily hard to do in Rust: adding parent pointers to a tree data structure. In C I’d just add a field and update it everywhere. In Rust I’d have to switch all my existing left/right pointers to add Rc<T>.
I’m willing to buy that a binary tree implementation in Rust encodes properties it would be desirable to encode in the C version as well. But once you start with a correct binary tree Rust doesn’t seem to prevent any errors I’d be likely to make when adding a parent pointer, for all the pain it puts me through. There’s a lot of value in borrow-checking, but I think you’re understating the cost, the level of extra bondage and discipline involved.
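For concreteness, a hedged C sketch of the “just add a field” version described above (struct and function names invented here):

    struct node {
        int key;
        struct node *left, *right;
        struct node *parent;         /* the new field; back-edges are fine in C */
    };

    /* Every place that links a child keeps the new field up to date. */
    static void set_left(struct node *p, struct node *child) {
        p->left = child;
        if (child)
            child->parent = p;
    }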
“Most arguments about C, though, at a closer look, end up as ‘it’s there, it’s everywhere’. The same argument can be made about Java or C#. I agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it out to be.”
It is everywhere, though: what do most people think runs microcontroller/AVR/PIC/etc. chips? Generally it is a bunch of bodged C and assembly.
The argument here is a bit different: you can avoid Java (I have zero interaction with it), but you can’t realistically avoid dealing with C.
“The argument here is a bit different: you can avoid Java (I have zero interaction with it), but you can’t realistically avoid dealing with C.”
I can totally do that. Lots of new high-performance software (Kafka, Elasticsearch, Hadoop and similar) is written in Java, so if you are in that space, you can realistically avoid dealing with C. You will certainly avoid C if you do anything in the web space.
You will certainly run C software, but that doesn’t mean you need to know it.
Ah, I see the disconnect: you’re using “avoid” in reference to having to program in it; I’m using it in reference to using it. In which case we’re both correct but talking past each other in meaning.
Rust is pretty great, no arguments there, but I think at least one of the author’s points is that C as a language has remained INCREDIBLY stable over decades.
Can you honestly say that immersing students in Rust will still have the same level of applicability 10 years from now?
Strong disagreement to part of your point. C in the 90s was very different to C in the early 00’s which is very different to C now. The standards may not have changed that much (though eg C89, C99 etc were things, which did change things), but the way compilers work has massively changed, eg in aggressive UB handling, meaning that the effective meaning of C code has completely changed, and continues to be in wild flux. C as a language is amazingly unstable, in part due to the language specifications being amazingly underspecified, and in part due to so many common things being UB.
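One stock illustration of that kind of flux, not taken from the thread (function name invented): code whose “obvious” compilation changed once optimizers began exploiting undefined behaviour.

    /* Signed overflow is undefined behaviour, so a modern GCC or Clang at -O2
       may fold this whole test to 0, where older compilers produced the
       "obvious" wrapping comparison that returns 1 when x is INT_MAX. */
    int will_wrap(int x) {
        return x + 1 < x;
    }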
You’re quite right. I learned C back in the K&R days, and even the transition to ANSI was readily noticeable, if not earth moving.
My point is that even having learned K&R, the amount of time it takes for me to come up to speed is relatively trivial. I would argue that this is nothing compared to say even the rate of change in idiomatic usage of the Java language over time. I learned Java back before generics and autoboxing, to say nothing of more recent Java 8 enhancements, and the Java landscape is a VERY different place now than it was when I ‘lived’ there.
I would disagree that the time to come up to speed is trivial in comparison - you are comparing apples and oranges.
The time to superficially come up to speed is trivial in comparison, but the time to actually learn how to not write heisenbugs into your code that you did not use to have to worry about - well, unless you are extensively fuzzing and characterising your code, you don’t actually know that you have even come up to speed.
They probably don’t work the same, unless you have extensively characterised and fuzzed them, or unless you have done binary diffs of the executables and they are identical - and the latter I would not believe, as there have been other differences in what output compilers produce over the years that should show up.
That is, my meaning is not that the changes will result in code that produced tetris now producing space invaders. They will result in code that produced tetris now producing tetris with additional weird heisenbugs that can be used for eg arbitrary code execution.
Edited to add: Also, the part about the C standard being underspecified is what means that C programs do not have an inherent meaning - their meaning differs massively depending on which compiler, which architecture etc. For example, how some cases of bit shifts are handled differs completely between x86 and ARM, and has historically differed between compilers on x86.
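A small, hedged example of the shift point; what you actually observe depends on the compiler emitting a bare shift instruction.

    #include <stdint.h>

    uint32_t shl(uint32_t x, unsigned n) {
        return x << n;               /* undefined when n >= 32 */
    }

    /* shl(1, 32): if a bare shift instruction is emitted, x86 masks the count
       to 5 bits so you may get 1, while 32-bit ARM uses the low byte of the
       count so you may get 0 - same source, different behaviour. */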
Does GCC even fully implement C99 yet? Many people still compile their C with -std=c89…
Most people’s continuing use of c89 is either cargo culting or the need to support very old compilers. GCC and Clang have both had adequate support of c11 for years now. If you’re working with an old microcontroller you might be stuck having to use their patched gcc 3.x (a very bad time, speaking from experience) but aside from this sort of situation using c99 or c11 is perfectly reasonable.
Most people are indeed in an environment where they can use either c99 or c11, but since c11 is not quite a superset of c99, c89 seems to have retained a certain role in C programmers' mental models as the core of C. The real working core today could in practice be a bit bigger, essentially c89 plus those features that were not in c89, but are in both of c99 and c11. But that starts getting more complex to think about! So if you want your stuff to compile on both c99 and c11 compilers, just sticking to c89 is one solution, and probably the simplest one if you already knew c89.
I personally wrote mostly c99 in the early 2000s, but one of the specific features I used most, variable-length arrays [1], was taken back out of c11! (Well, demoted to an optional feature.)
[1] Perhaps better called runtime-length arrays. They aren’t variable in the sense of resizable, just with size not specified at compile-time; usually it’s known instead at function-entry time.
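A tiny illustration of such an array in C99 (the feature an implementation may omit in C11 by defining __STDC_NO_VLA__):

    #include <stdio.h>

    void print_squares(int n) {
        int squares[n];              /* length chosen at run time, fixed at function entry */
        for (int i = 0; i < n; i++)
            squares[i] = i * i;
        for (int i = 0; i < n; i++)
            printf("%d ", squares[i]);
        printf("\n");
    }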
What does fully implement mean? c99 without appendices? yes, c99 with appendices? no.
Regardless, clang is your best bet for something beyond c89 that works.
That wasn’t my argument. I was opposing the argument that C is the only language usable for that.
I’m quite aware that Rust is currently too young to teach it to people as a usable skill on the job for the future.
I agree with flaviusb though, idiomatic C has had incredible changes over the last few years.
That’s kinda what I’ve been designing. At a high level but without the low-level chops (so far) to make it real. So since I can’t do, I teach :)
My Basic-like language Mu tries to follow DJB’s dictum to first make it safe before worrying about making it fast. Instead of pointer arithmetic I treat array and struct operations as distinct operations, which lets them be bounds-checked at source. Arrays always carry their lengths. Allocations always carry a refcount, so use-after-free is impossible. Reclaiming memory always clears it. You can’t ever convert a non-address to an address. All of these have overhead, particularly since everything’s interpreted so far.
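A rough C sketch, not Mu and not how Mu is implemented (names invented), of the “arrays always carry their lengths” idea: every access goes through a checked helper instead of raw pointer arithmetic.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        size_t len;
        int data[];                  /* length is stored right next to the elements */
    } int_array;

    int checked_get(const int_array *a, size_t i) {
        if (i >= a->len) {           /* bounds check on every access */
            fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, a->len);
            exit(1);
        }
        return a->data[i];
    }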
While it’s safer than C it’s also less expressive. A function is a list of labels or statements. Each statement can have only one operation. You don’t have to worry about where the destination goes, though, because there’s an arrow:
x:num <- add y:num, z:num
You don’t have to mess with push or pop. Functions can be called with the same syntax as primitive operations.
It also supports tagged unions or sum types, generics, function overloading, literate programming (labels make great places to insert code), delimited continuations (stack operations are a great fit for assembly-like syntax). All these high-level features turn out not to require infix syntax or recursive expressions.
I’m working slowly to make it real, but I have slightly different priorities, so if someone wants to focus on the programming language angle here these ideas are freely available to steal.
“In many languages, a loop that concatenates (sums) a sequence of integers looks a lot like one that concatenates a sequence of strings. The run time performance is not similar. This is obvious in C.”
I’d expect ropes to be more popular in high-level languages, to be frank. Java’s StringBuilder is a painful reminder that some supposedly “high-level” language designers still think in C.
As far as I know “str += postfix” in a loop is going to be O(n^2) in c#, ruby, Python, JavaScript, and lua. What languages are you thinking of?
V8 uses ropes to represent javascript strings almost all the time (IIRC there are exceptions for e.g. very large strings).
I’m not thinking of any specific language, I’m thinking of a specific data structure: https://en.wikipedia.org/wiki/Rope_(data_structure)
Oh, you mean people should use ropes? In my experience, c# devs use StringBuilder and everyone else uses some spelling of Array.join(). I knew about, but still didn’t use, ropes when I wrote c++ code because what I really wanted was ostringstream.
In C++, using std::ostringstream is understandable, but how come high-level language pretenders like C# and Java force you to use StringBuilders?
Because for most development purposes the interface used when building up strings is more important than the performance. And StringBuilders are a common and well understood interface for building up strings incrementally.
What I’m saying is that StringBuilder is too low-level an interface. I want to be able to concatenate lots of strings normally, and let the implementation take care of doing it efficiently.
In theory you’re right, because StringBuilder is less widely known, less intuitive, harder to use, more verbose, etc.
In practice, high level abstractions with “magic” optimizations can cause problems if you’re really depending on the optimizations to work. E.g. you switch implementations, or make an innocuous change that causes the optimizer to bail out. It turns out even some high level languages aren’t really committed to their ideology.
I don’t want “magic optimizations” - far from it. I want a language where the cost of each operation is a part of the language specification, so that I don’t need to rely on implementation specifics to know that my programs are efficient. For instance, if the language specification says “string concatenation is O(log n)”, then naïve implementations of strings are automatically non-conforming, so I can assume it won’t happen.
The problem is that most strings are small – in fact, most strings would fit in a char* – and not concatenated all that often. In the common case, ropes are an incredible amount of overhead for a very small, common data structure.
Then you can use a hybrid implementation, where small enough strings are implemented as arrays of bytes storing the characters contiguously, and large strings are implemented as ropes. In most high-level languages, this doesn’t impose any extra overhead, since every object already has at least one word of metadata for storing dynamic type info and GC flags.
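A hypothetical sketch of that hybrid layout (type and field names invented here): the tag shares the metadata word, small strings stay contiguous, and concatenating large strings just builds a rope node.

    #include <stddef.h>

    enum str_kind { STR_FLAT, STR_ROPE };

    struct str {
        enum str_kind kind;          /* shares the metadata word the object already carries */
        size_t len;
        union {
            const char *flat;                            /* small: contiguous bytes */
            struct { struct str *left, *right; } rope;   /* large: concatenation node */
        } u;
    };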
I have many issues with the core of the argument.
“It is still one of the most commonly used languages outside of the Bay Area web/mobile startup echo chamber;”
Given that argument, you should use Java or C#. It’s really weak. Calling web/mobile an “echo chamber” is uncalled for, especially as mobile developers make the tradeoff between interpreted languages (such as JS) and a superset of C (Objective-C) all the time.
Also, it’s the market the largest companies of all time are in. How is this an echo chamber?
“C’s influence can be seen in many modern languages;”
Sure, but so does Smalltalk, ML and others. Why should C be the one I learn? I started programming with ML, Java and Ruby and am very happy with that path.
“C helps you think like a computer; and,”
C, first of all, makes you think like C. When it comes to thinking like a computer, my assembler courses helped me much more.
“Most tools for writing software are written in C (or C++)”
Is that so? Eclipse, IntelliJ and similar are not. A huge amount of such tooling is written in C#. I’d call [citation needed].
Also, tooling for software is a fringe business, why must I learn that?
“C, first of all, makes you think like C. When it comes to thinking like a computer, my assembler courses helped me much more.”
This is a key point people keep missing when they tell people to learn how C works to be closer to the machine. The machine runs assembly language with some architecture-specific models. Unlike a portable ASM, maybe an intermediate language for a compiler, the C language makes specific decisions about how things will be implemented that have to be put on top of the assembly. So, you’re really learning C’s model instead of assembly’s model. Learning assembly’s would be better if the goal is producing better assembly from HLL code. Plus, there’s macro assemblers. :)
I actually felt like I got a better grasp of “how to think like a computer” by learning Forth than by learning C, because at least Forth doesn’t pretend that there are data types that aren’t numeric.
Thank you, this one is a pet-peeve of mine!
Hmm… Partly the pre-eminence of C has altered computers to be a lot more C-like over the years. Modern CPU’s are presenting a swan-like, smooth-sailing, C-like view… while paddling madly underneath, handling cache misses, pipeline stalls, page faults, ….
Stack-based architectures allow for more compact instruction representation, which when coupled with a stack-based language like Forth allows much more efficient use of caches. Stack CPUs also tend to be smaller (owing to reduced complexity of instruction decoding and register access) which lets you fit many more cores on a die. This, in turn, allows for better whole-system performance even if individual cores are slower than they could be, since an individual core generally spends much of its time on memory access (more than half the time under many modern programs). GPUs use a similar tactic, but they combat core complexity with SIMT which makes them inherently harder to program than independent threads.
Also, 16-bit x86 isn’t very well-suited to C. Segmented memory interacts poorly with C’s idea of how a pointer should act, and there aren’t enough registers. You could view x64’s additional registers as a concession to C-like languages, and it helped: the x64 calling conventions are way nicer than in x86.
For a more “out-there” look at what processors not tailored to C look like, check out reduction machine architectures. They offer implicit parallelism that a traditional CPU really can’t.
“and to what other language, however fanciful, could they present a more pleasant interface, while remaining as performant as they are?”
Modula-3 or a subset of it with annotations to help compiler and macros for occasional zero-cost abstractions. That would be more consistent than C, compile faster, easier to read, easier to maintain, and run fast in production. The regular Modula-3 already had those properties except the compiler hints and macros should help in those few situations where C’s unsafety gives it benefits. Or just tell optimizer to go all out when it sees UNSAFE keyword.
“i think a reasonable case could be made that the “ease” of running C on modern processors has more to do with the effort put in by an entire generation of researchers to figure out how to compile C to architectures like modern ones. now we just take all that understanding for granted.”
C was popular in OS’s and performance-critical applications. Both CPU’s and compilers were optimized to make C apps run faster. That kept the CPU’s and compilers competitive in market. It’s that simple.
https://en.wikipedia.org/wiki/Timeline_of_computing
Don’t forget at the time C came out there was a wild Zoo of diversity in CPU designs, far wider than there is now…
I believe there was a fair degree of influence from the design of the PDP-11 on C and the success of both influenced subsequent CPU designs.
It’s a self-reinforcing process. New processors are designed to go faster on benchmarks with a representative instruction mix. Since C is so prevalent, a lot of that instruction mix is the output of a C compiler. Even when it isn’t (Java JIT code should also be prevalent, for example), it’s the output of a compiler trying to be fast on a processor that’s fast at running C.
The days when you could design a new instruction set that isn’t great at representing C code and have much commercial success are long gone.
Most of this just says C was a popular implementation language following UNIX’s fame. So, many things are written in C. So, knowing C will help you understand how they work. People doing cutting-edge work who pick C are usually clear that they pick it for compiler and talent availability, not technical superiority for the problem at hand. Now for the No’s. You don’t need to know it for systems programming, as plenty of better languages exist for that. It barely contributed anything to good programming languages compared to ALGOL, Simula, Modula, or LISP. Many key designs pushing OS’s forward were not in C: Burroughs ALGOL, IBM’s PL family, MULTICS in PL/I & BCPL (UNIX started as a MULTICS subset), VMS in BLISS, LISP machines in LISP, Smalltalk-80 in Smalltalk-80, Oberon systems in Oberon dialects, SPIN in Modula-3, JX in Java, and recently unikernels in OCaml.
So, C is a necessary evil if you want to understand or extend legacy systems whose authors preferred C. It’s also trendy in that people keep it mostly like it is while defending any problems it has. That’s a social thing. Like COBOL and PL/I are with the mainframe apps for companies using them. It’s not the best language for systems programming along a lot of metrics. One doing clean-slate work on a platform can use better languages to do better in the long-term. Short-term benefits matter, though, so such projects often compile to C or include C FFI to benefit from legacy stuff.
There’s a growing trend of hating C out there these days. In part, it’s because so many CVE’s are created because of sloppy C programming. Recently, a coworker said the following: “nobody should write anything in C anymore. it’s a DSL for CVEs.”
It does seem as though there are a disproportionate number of CVEs attributed to C. But if you consider the track record of projects like OpenBSD, or anything DJB touches, it’s easy to see that this doesn’t have to be the case. The reasons that these projects are successful have more to do with the fact that they think about things like privilege separation and security posture. Every program, written in any language, is more secure when the people writing it understand that, and are constantly considering it.
“But if you consider the track record of projects like OpenBSD, or anything DJB touches, it’s easy to see that this doesn’t have to be the case. ”
The argument still applies when I consider their track record. The empirical studies the military did in the 90’s showed the Ada programmers were both more productive and introduced fewer defects than the C programmers. Same with C++ programmers, but not as much. Turbo Pascal and Oberon users had amazing productivity with fewer safety issues than C coders, due to rapid compilation & a type system that reduced debugging. SPARK went further by using a clean language + tooling to prove absence of common errors. iMatix used DSL’s in tools like the Xitami web server to auto-generate portable, high-performance, safe C from high-level specs. Galois is doing the same with the Ivory and Tower languages. Recently, COGENT achieved a similar level of assurance as seL4 for the specs and implementation of a whole filesystem, done by two people who weren’t formal-methods experts, in a functional language. It performed well, took a fraction of the time of seL4, currently extracts to C in order to use a certified C compiler, and can be extended with something like SPARK for added guarantees.
We’re not hating C because of CVE’s. We’re hating C because it’s inferior on many objective metrics to alternative designs that were implemented from back in its time to the current one. The OpenBSD team or DJB using such technology would’ve gotten more assurance at a faster pace with easier long-term maintenance. All due to better language design. All that mental energy working around C’s deficiencies would’ve instead been invested into new developments of actual products that came with mathematical proof of correctness of certain features or limitation of damage. We think that, in the presence of better tools, it’s irrational to use C unless you have a very good reason. Also note that better tools can be made to generate C or use a C FFI if a compiler or ecosystem is trying to force C on you. Then, the C parts get gradually rewritten themselves over time. There’s precedent for both in Ada and, recently, Rust.
“The reasons that these projects are successful has more to do with the fact that they think about things like privilege separation and security posture. Every program, written in any language, is more secure when the people writing it understand that”
It’s orthogonal. The high-assurance security field came up with the components as far back as the late 70’s. The MULTICS evaluation by co-inventors of INFOSEC, Karger and Schell, showed many issues that kept recurring in INFOSEC, including the choice of a programming language that leads to lower defects. They were applied in GEMSOS (mid 1980’s), whose properties Schell describes in the second link, including specifically avoiding C (it used Pascal w/ call-by-value internally) to ensure every state was computable. It also had secure multiprocessing with covert channel suppression and fine-grained POLA. Not sure if OpenBSD has all three even today. Available data indicates no A1-class kernel was ever penetrated during NSA’s 2-5 years of pentesting for certification of each, or the 20+ years of field use for Boeing SNS or GEMSOS. So, people wanting real security should probably follow those principles that led to it in practice, like using safer methods, architecture, and languages. ;)
https://www.acsac.org/2002/papers/classic-multics.pdf
https://www.fbcinc.com/e/nice/ncec/presentations/2015/Schell.pdf
I appreciate your comment. There are a lot of important points in there. I wrote, and then rewrote, my original comment a few times. I don’t disagree that there are other options out there that we should be using, but despite all of that, hardly anyone is. The operating system in everyone’s pocket, and on most everyone’s desk, and in racks at data centers almost everywhere, is written in C and/or C++. The utilities, and network servers, and scripting languages that people are utilizing on top of those operating systems are also largely written in C. This may be due to Worse is Better, and it may be an idiotic stance as humans, but it is reality.
How do you propose we move to a better reality? And, what’s different about your proposition that will make people actually listen and start adopting safer tools? Until that enormous revolution starts, we’re all but stuck supporting trillions of lines of C / C++ code. Sure, we can start replacing it with Rust, or Ada, or any of the other suitable but safe languages out there. But, something tells me it’s not going to be at all trivial to get that going and to the scale we need, otherwise, I think we’d have seen a rise in the number of programmers using Oberon, or Modula-2 on a daily basis starting in the late 80s and continuing til the present.
Fortunately, Go and Rust do exist and are starting to gain a bit of traction. There’s even an operating system being written in Rust. But, this shouldn’t be taken as “wide-spread adoption.” This should probably be taken as “Bay Area web/mobile startup echo chamber” (to use the original author’s words) has some new shiny toys to play with. But, maybe in time, languages like Rust will take over the landscape everywhere and we’ll slowly phase out our heavy reliance on C.
Good points. The legacy and clean-slate problems are best kept as two separate things. The reason is that early work on UCLA Secure Unix, etc. showed the UNIX architecture was inherently impossible to secure. Too complex, too many covert channels, etc. The stuff from Schell’s era plus the MILS designs later took a simple approach best illustrated with the open Nizza paper:
https://os.inf.tu-dresden.de/papers_ps/nizza.pdf
This architecture starts with a tiny component in kernel mode that is easy to verify, enforces key properties, and can’t be bypassed. There are extra components, best in user-mode, for specific stuff like init, secure storage of secrets, I/O, even scheduling in recent designs. These are privileged and designed to be shared (MLS, MILS). On top of this are both VM’s for legacy apps and stand-alone apps running right on the kernel. Frameworks like Camkes or ZeroMQ can facilitate easy communication. The idea is that you put what you can of the security-critical stuff outside the VM’s on top of the trusted kernel. RTOS’s like INTEGRITY-178B went further with Ada and [tiny] Java runtimes for those apps. Nizza demonstrator isolated the GUI, email signatures, and VPN’s crypto mechanism. Genode.org applies this for a whole desktop although proprietary products got finished first with INTEGRITY PC, LynxSecure, VxWorks MILS, and Sirrix TrustedDesktop.
So, that’s the proven method that mostly preserves legacy. What if you want the kernel safe without rewriting all the C? That’s where projects like Softbound + CETS and SAFEcode come in. The first gives you complete memory safety for C with less of a performance hit than naive versions of that. SAFEcode aims at something similar but has already been used on Linux and FreeBSD. Criswell et al in SVA-OS add OS interfaces that enforce safety for things like memory management and DMA manipulation. So, the interface + implementation code are safer. This comes with a performance hit but looks reasonable for the level of protection it offers. You basically buy a CPU with a bit more speed and cache. Better to spend $50-100 on that than an antivirus suite anyway. ;)
Next approach, the best but most costly, is to solve underlying problem: both languages and hardware itself make software insecure by default. The leading one in security is probably CHERI that modifies MIPS, C compiler, and FreeBSD into capability-secure platform. Important for our discussion, it lets you use legacy code under MMU protection side-by-side with fine-grained POLA for components you modify. Two other models. One is the hardware version of Softbound+CETS, Watchdog, that knocks out most performance penalty. Another is series of CPU’s modified with crypto between CPU & memory unit that protects confidentiality and integrity of pages from software and RAM attacks. Let’s just say they can do memory safety, too, as part of their function. They’re the strongest of security measures since everything outside SOC boundary is untrusted.
These are collectively the strong approaches to dealing with the C situation without costly rewrites. They’re what the money and time should be poured into. Every improvement in speed or security will benefit every tool using them automatically. Some already run OS’s, esp FreeBSD, right now. Robust implementation of both the compilers and CPU’s would get the most done. The compilers (or just passes), kernels, trusted components, middleware, whatever could be done in better language of choice so long as it can call or be called from C functions. Result would be small TCB whose components are stronger than monoliths alone. Lower chance of being bypassed as well since it’s the tactical shit like re-ordering memory or blocking one vector that get bypassed the most. The OS’s currently rely on combinations of those. Well, combining strong security with automated obfuscation/diversification was in my proposal to counter nation-states. So, can still use those OpenBSD tricks if they want to spend extra time. :)
I’m not DJB. I’ve never sold myself to anyone as DJB. But it does no good for me to tell my employers that. They won’t understand what that implies. But if I code in Rust rather than C, it means my employers won’t find out the hard way, that I am not DJB.
Look, DJB is human. I presume you are too. This stuff can be learned. DJB wrote a very interesting, and easy to follow paper about qmail’s security practices. If you are disciplined, you can write safe and secure C. It’s not impossible to do so.
It isn’t hard to answer the question, “why shouldn’t I just use Rust?” You probably should! Rust makes it harder to shoot yourself in the foot, since it has guarantees on memory safety, etc, which is a common attack vector for code written in C. But, even if you use Rust, you’re not strictly safe. It’s still possible to have exploitable security concerns, they just don’t involve buffer overruns, which are all too common.
No buffer overruns. No race conditions. Still a chance for integer overruns/underruns, and of course errors in my own logic.
Now, I have battle scars from writing lots of C code. To some degree, you need experience in C to understand why it’s worth climbing the Rust learning curve. But how much?
To clarify, Rust purports to prevent data races, not race conditions. Notably, safe Rust code can still deadlock. (We do try to take steps to avoid deadlock too. For example, the standard library provides channels and hard-to-misuse mutexes. But there are no guarantees there about deadlock. You can still lock/unlock mutexes in the wrong order. :-))
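The “wrong order” failure mode is language-agnostic; here it is sketched with C pthreads rather than Rust, purely as an illustration (names invented): each worker holds one lock while waiting for the other’s.

    #include <pthread.h>

    static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&a);
        pthread_mutex_lock(&b);      /* blocks forever if worker2 already holds b */
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return NULL;
    }

    static void *worker2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&b);
        pthread_mutex_lock(&a);      /* blocks forever if worker1 already holds a */
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return NULL;
    }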
I have to wonder just how important it is for a Rust developer to speed up his program that he would risk deadlocks by going to mutexes when the Rust-ly abstractions will do the same job, albeit with some more waiting around.
For my use cases, the benefit to ditching Python/Java and going down to native code is well worth some unneeded clone() calls and syncs here and there.
“I have to wonder just how important it is for a Rust developer to speed up his program that he would risk deadlocks by going to mutexes when the Rust-ly abstractions will do the same job, albeit with some more waiting around.”
Hmm, could you elaborate a bit more on this? A Mutex does have a bit of nice abstraction around it (where the destructor is responsible for unlocking the mutex once its guard has gone out of scope, for example).
Are the other abstractions you’re referring to channels? I’m not sure those are necessarily slower than mutexes, certainly, the channels in the crossbeam crate don’t use locks at all IIRC.
“For my use cases, the benefit to ditching Python/Java and going down to native code is well worth some unneeded clone() calls and syncs here and there.”
Of course we’re in agreement. :-) I’ve just seen a lot people say “Rust prevents race conditions” and I like to make sure to nip that in the bud. (Neither Java nor Python prevent race conditions either.)
Not really. Too much of a neophyte at the moment. But I have enough experience with concurrent programming that I prefer to start out by mapping out my tasks and the communication patterns between them based on the patterns in the ZMQ manual. And then I write the code using whatever will implement those patterns in the language I have to use. In Rust, this winds up just being Channels.
“I have to wonder just how important it is for a Rust developer to speed up his program that he would risk deadlocks by going to mutexes when the Rust-ly abstractions will do the same job, albeit with some more waiting around.”
I routinely use mutexes in Golang, which has a safer channel primitive, because the cost of channels is very high in performance critical code. I can’t speak for Rust, since I don’t have enough experience with it, but that’s one potential reason.
There’s a whole hierarchy of things to learn if you want to understand the full stack behind a computer:
Binary arithmetic.
Digital circuits.
Digital gates.
Sequential logic.
ALUs and CPUs.
Then on the software side come:
Instruction sets.
Assembly language.
C.
OOP languages.
Languages running VMs, e.g. Java, Python, C#.
Domain specific languages and applications.
Full stack applications running in the cloud.
Arguably, for any one of these levels, if you want to understand your abilities and limitations, you need some experience one level below. And for example, my lack of experience programming in assembly language does limit my versatility in doing anything in an embedded context. But demanding that someone spend inordinate amounts of time at C can quite easily turn into hazing, IMO.
Pretty much matches my thoughts on why people should learn C, even if they don’t use it. And by learn, I mean complete at least one non trivial project, not just a few exercises.
In many languages, a loop that concatenates (sums) a sequence of integers looks a lot like one that concatenates a sequence of strings. The run time performance is not similar. This is obvious in C.
I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.
There’s a place for portable assembler that’s a step above worrying about whether the destination is the left or right operand and what goes in the delay slot and omg so many trees. And whether it’s arm or Intel or power, all computer architectures work in similar fashion, so it makes sense to abstract that. But we shouldn’t abstract away how they work into something different.
Rust helped me understand how computers work much better than C (I’ve learned both C and C++ at Uni and had to implement some larger projects in it). It just adds a lot more explicit semantics to it which you need to know in C. It keeps strings as painful as they are :). The thing that’s really lacking - and I agree on that fully - is any kind of structured material covering all those details. If you really want to mess around with computer on a low level - and not write a tutorial on how to mess around with a computer on a low level with your chosen language - C is still the best choice and will remain so.
Most arguments about C seem, though, at a closer look, end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#. A agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it to be.
I really agree with the idea that Rust makes explicit a long catalog of things that one can do in C, but should not*.
The missed point here is that important parts of the Java or C# toolchains or runtimes are written in C or C++; the argument rests on C (for the time being still “the portable assembly language”) being foundational, not just ubiquitous. C is almost always present, right above the bottom of the (technological) stack.
*Unless interacting with hardware / doing things that are inherently
unsafe
.On further reflection, I think I missed @tedu’s point, which is something more like, “C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C.”
Which would probably be true in the absence of aggressive UB optimizations. But, if horses were courses…
“C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C if we assume it was compiled with -O0”
Here’s an example of something I ran into the other day that’s easy to do in C but unnecessarily hard to do in Rust: adding parent pointers to a tree data structure. In C I’d just add a field and update it everywhere. In Rust I’d have to switch all my existing left/right pointers to add
Rc<T>
.I’m willing to buy that a binary tree implementation in Rust encodes properties it would be desirable to encode in the C version as well. But once you start with a correct binary tree Rust doesn’t seem to prevent any errors I’d be likely to make when adding a parent pointer, for all the pain it puts me through. There’s a lot of value in borrow-checking, but I think you’re understating the cost, the level of extra bondage and discipline involved.
It is everywhere though, what do most people think runs microcontrollers/avr/pic/etc… chips? Generally it is a bunch of bodged C and assembly.
The argument here is a bit different, you can avoid java (I have zero interaction with it), but you can’t realistically avoid dealing with C.
I can totally do that. Lots of new high-performance software (Kafka, Elasticsearch, Hadoop and similar) is written in Java, so if you are in that space, you can realistically avoid dealing with C. You will certainly avoid C if you do anything in the web space.
You will certainly run C software, but that doesn’t mean you need to know it.
Ah I see the disconnect, you’re using avoid in reference to having to program in. I’m using it in reference to use. In which case we’re both correct but talking past each other in meaning.
Rust is pretty great, no arguments there, but I think at least one of the author’s points is that C as a language has remained INCREDIBLY stable over decades.
Can you honestly say that immersing students in Rust will still have the same level of applicability 10 years from now?
Strong disagreement to part of your point. C in the 90s was very different to C in the early 00’s which is very different to C now. The standards may not have changed that much (though eg C89, C99 etc were things, which did change things), but the way compilers work has massively changed, eg in aggressive UB handling, meaning that the effective meaning of C code has completely changed, and continues to be in wild flux. C as a language is amazingly unstable, in part due to the language specifications being amazingly underspecified, and in part due to so many common things being UB.
You’re quite right. I learned C back in the K&R days, and even the transition to ANSI was readily noticeable, if not earth moving.
My point is that even having learned K&R, the amount of time it takes for me to come up to speed is relatively trivial. I would argue that this is nothing compared to say even the rate of change in idiomatic usage of the Java language over time. I learned Java back before generics and autoboxing, to say nothing of more recent Java 8 enhancements, and the Java landscape is a VERY different place now than it was when I ‘lived’ there.
I would disagree that the time to come up to speed is trivial in comparison - you are comparing apples and oranges.
The time to superficially come up to speed is trivial in comparison, but the time to actually learn how to not write heisenbugs into your code that you did not used to have to worry about - well, unless you are extensively fuzzing and characterising your code, you don’t actually know that you even have come up to speed.
[Comment removed by author]
They probably don’t work the same, unless you have extensively characterised and fuzzed them, or unless you have done binary diffs of the executables and they are identical - and the latter I would not believe, as there have been other differences in what output compilers produce over the years that should show up.
That is, my meaning is not that the changes will result in code that produced tetris now producing space invaders. They will result in code that produced tetris now producing tetris with additional weird heisenbugs that can be used for eg arbitrary code execution.
Edited to add: Also, the part about the C standard being underspecified is what means that C programs do not have an inherent meaning - their meaning differs massively depending on which compiler, which architecture etc. For example, how some cases of bit shifts are handled differs completely between x86 and ARM, and has historically differed between compilers on x86.
Does GCC even fully implement C99 yet? Many people still compile their C with –std=c89…
Most people’s continuing use of c89 is either cargo culting or the need to support very old compilers. GCC and Clang have both had adequate support of c11 for years now. If you’re working with an old microcontroller you might be stuck having to use their patched gcc 3.x (a very bad time, speaking from experience) but aside from this sort of situation using c99 or c11 is perfectly reasonable.
Most people are indeed in an environment where they can use either c99 or c11, but since c11 is not quite a superset of c99, c89 seems to have retained a certain role in C programmers' mental models as the core of C. The real working core today could in practice be a bit bigger, essentially c89 plus those features that were not in c89, but are in both of c99 and c11. But that starts getting more complex to think about! So if you want your stuff to compile on both c99 and c11 compilers, just sticking to c89 is one solution, and probably the simplest one if you already knew c89.
I personally wrote mostly c99 in the early 2000s, but one of the specific features I used most, variable-length arrays [1], was taken back out of c11! (Well, demoted to an optional feature.)
[1] Perhaps better called runtime-length arrays. They aren’t variable in the sense of resizable, just with size not specified at compile-time; usually it’s known instead at function-entry time.
What does fully implement mean? c99 without appendices? yes, c99 with appendices? no.
Regardless clang is your best bet for something beyond c89 that works.
That wasn’t my argument. I was opposing the argument that C is the only language usable for that.
I’m quite aware that Rust is currently too young to teach it people as a usable skill on the job for the future.
I agree with flaviusb though, idiomatic C has had incredible changes over the last few years.
That’s kinda what I’ve been designing. At a high level but without the low-level chops (so far) to make it real. So since I can’t do, I teach :)
My Basic-like language Mu tries to follow DJB’s dictum to first make it safe before worrying about making it fast. Instead of pointer arithmetic I treat array and struct operations as distinct operations, which lets them be bounds-checked at source. Arrays always carry their lengths. Allocations always carry a refcount, so use-after-free is impossible. Reclaiming memory always clears it. You can’t ever convert a non-address to an address. All of these have overhead, particularly since everything’s interpreted so far.
While it’s safer than C it’s also less expressive. A function is a list of labels or statements. Each statement can have only one operation. You don’t have to worry about where the destination goes, though, because there’s an arrow:
You don’t have to mess with
push
orpop
. Functions can be called with the same syntax as primitive operations.It also supports tagged unions or sum types, generics, function overloading, literate programming (labels make great places to insert code), delimited continuations (stack operations are a great fit for assembly-like syntax). All these high-level features turn out not to require infix syntax or recursive expressions.
I’m working slowly to make it real, but I have slightly different priorities, so if someone wants to focus on the programming language angle here these ideas are freely available to steal.
[Comment removed by author]
I’d expect ropes to be more popular in high-level languages, to be frank. Java’s
StringBuilder
is a painful reminder that some supposedly “high-level” language designers still think in C.As far as I know “str += postfix” in a loop is going to be O(n^2) in c#, ruby, Python, JavaScript, and lua. What languages are you thinking of?
V8 uses ropes to represent javascript strings almost all the time (IIRC there are exceptions for e.g. very large strings).
I’m not thinking of any specific language, I’m thinking of a specific data structure: https://en.wikipedia.org/wiki/Rope_(data_structure)
Oh, you mean people should use ropes? In my experience, c# devs use StringBuilder and everyone else uses some spelling of Array.join(). I knew about, but still didn’t use, ropes when I wrote c++ code because what I really wanted was ostringstream.
In C++, using
std::ostringstream
is understandable, but how come high-level language pretenders like C# and Java force you to useStringBuilder
s?Because for most development purposes the interface used when building up strings is more important than the performance. And StringBuilders are a common and well understood interface for building up strings incrementally.
What I’m saying is that
StringBuilder
is too low-level an interface. I want to be able to concatenate lots of strings normally, and let the implementation take care of doing it efficiently.In theory you’re right, because StringBuilder is less widely known, less intuitive, harder to use, more verbose, etc.
In practice, high level abstractions with “magic” optimizations can cause problems if you’re really depending on the optimizations to work. E.g. you switch implementations, or make an innocuous change that causes the optimizer to bail out. It turns out even some high level languages aren’t really committed to their ideology.
I don’t want “magic optimizations” - far from it. I want a language where the cost of each operation is a part of the language specification, so that I don’t need to rely on implementation specifics to know that my programs are efficient. For instance, if the language specification says “string concatenation is
O(log n)
”, then naïve implementations of strings are automatically non-conforming, so I can assume it won’t happen.The problem is that most strings are small – in fact, most strings would fit in a char* – and not concatenated all that often. In the common case, ropes are an incredible amount of overhead for a very small, common data structure.
Then you can use a hybrid implementation, where small enough strings are implemented as arrays of bytes storing the characters contiguously, and large strings are implemented as ropes. In most high-level languages, this doesn’t impose any extra overhead, since every object already has at least one word of metadata for storing dynamic type info and GC flags.
I have many issues with the core of the argument.
Given that argument, you should use Java or C#. It’s really weak. Calling web/mobile an “echo chamber” is uncalled for, especially as mobile developer make the tradeoff between interpreted languages (such as JS) and a superset of C (Objective-C) all the time.
Also, it’s the market the largest companies of all time are in. How is this an echo chamber?
Sure, but so does Smalltalk, ML and others. Why should C be the one I learn? I started programming with ML, Java and Ruby and am very happy with that path.
C, first of all, makes you think like C. When it comes to thinking like a computer, my assembler courses helped me much more.
Is that so? Eclipse, IntelliJ and similar are not. A huge amount of such tooling is written in C#. I’d call [citation needed].
Also, tooling for software is a fringe business, why must I learn that?
This is a key point people keep missing when they tell people to learn how C works to be closer to the machine. The machine runs assembly language with some architecture-specific models. Unlike a portable ASM, maybe an intermediate language for a compiler, the C language makes specific decisions about how things will be implemented that have to be put on top of the assembly. So, you’re really learning C’s model instead of assembly’s model. Learning assembly’s would be better if the goal is producing better assembly from HLL code. Plus, there’s macro assemblers. :)
I actually felt like I got a better grasp of “how to think like a computer” by learning Forth than by learning C, because at least Forth doesn’t pretend that there are data types that aren’t numeric.
Thank you, this one is a pet-peeve of mine!
Hmm…
Partly the pre-eminence of C has altered computers to be a lot more C like over the years.
Modern CPU’s are presenting a swan like smooth sailing C - like view….
While paddling madly underneath handling cache misses, pipeline stalls, page faults, ….
[Comment removed by author]
Stack-based architectures allow for more compact instruction representation, which when coupled with a stack-based language like Forth allows much more efficient use of caches. Stack CPUs also tend to be smaller (owing to reduced complexity of instruction decoding and register access) which lets you fit many more cores on a die. This, in turn, allows for better whole-system performance even if individual cores are slower than they could be, since an individual core generally spends much of its time on memory access (more than half the time under many modern programs). GPUs use a similar tactic, but they combat core complexity with SIMT which makes them inherently harder to program than independent threads.
Also, 16-bit x86 isn’t very well-suited to C. Segmented memory interacts poorly with C’s idea of how a pointer should act, and there aren’t enough registers. You could view x64’s additional registers as a concession to C-like languages, and it helped: the x64 calling conventions are way nicer than in x86.
For a more “out-there” look at what processors not tailored to C look like, check out reduction machine architectures. They offer implicit parallelism that a traditional CPU really can’t.
Modula-3 or a subset of it with annotations to help compiler and macros for occasional zero-cost abstractions. That would be more consistent than C, compile faster, easier to read, easier to maintain, and run fast in production. The regular Modula-3 already had those properties except the compiler hints and macros should help in those few situations where C’s unsafety gives it benefits. Or just tell optimizer to go all out when it sees UNSAFE keyword.
“i think a reasonable case could be made that the “ease” of running C on modern processors has more to do with the effort put in by an entire generation of researchers to figure out how to compile C to architectures like modern ones. now we just take all that understanding for granted.”
C was popular in OS’s and performance-critical applications. Both CPU’s and compilers were optimized to make C apps run faster. That kept the CPU’s and compilers competitive in market. It’s that simple.
https://en.wikipedia.org/wiki/Timeline_of_computing
Don’t forget at the time C came out there was a wild Zoo of diversity in CPU designs, far wider than there is now…
I believe there was a fair degree of influence from the design of the PDP-11 on C and the success of both influenced subsequent CPU designs.
It’s a self-reinforcing process. New processors are designed to go faster on benchmarks with a representative instruction mix. Since C is so prevalent, a lot of that instruction mix is the output of a C compiler. Even when it isn’t (Java JIT code should also be prevalent, for example), it’s the output of a compiler trying to be fast on a processor that’s fast at running C.
The days when you could design a new instruction set that isn’t great at representing C code and have much commercial success are long gone.
Most of this just says C was a popular implementation language following UNIX’s fame. So, many things are written in C. So, knowing C will help you understand how they work. People doing cutting-edge work that pick C are usually clear that they pick it for compiler and talent availability, not technical superiority for problem at hand. Now for the No’s. You don’t need to know it for systems programming as plenty of better languages exist for that. It barely contributed anything to good programming languages compared to ALGOL, Simula, Modula, or LISP. Many key designs pushing OS’s forward were not in C: Burrough’s ALGOL, IBM’s PL family, MULTICS in PL/0 & BCPL (UNIX started as MULTICS subset), VMS in BLISS, LISP machines in LISP, Smalltalk-80 in Smalltalk-80, Oberon systems in Oberon dialects, Spin in Modula-3, JX in Java, and recently unikernels in Ocaml.
So, C is a necessary evil if you want to understand or extend legacy systems whose authors preferred C. It’s also trendy in that people keep it mostly like it is while defending any problems it has. That’s a social thing. Like COBOL and PL/I are with the mainframe apps for companies using them. It’s not the best language for systems programming along a lot of metrics. One doing clean-slate work on a platform can use better languages to do better in the long-term. Short-term benefits matter, though, so such projects often compile to C or include C FFI to benefit from legacy stuff.
There’s a growing trend of hating C out there these days. In part, it’s because so many CVE’s are created because of sloppy C programming. Recently, a coworker said the following: “nobody should write anything in C anymore. it’s a DSL for CVEs.”
It does seem as though there are a disproportional number of CVEs attributed to C. But if you consider the track record of projects like OpenBSD, or anything DJB touches, it’s easy to see that this doesn’t have to be the case. The reasons that these projects are successful has more to do with the fact that they think about things like privilege separation and security posture. Every program, written in any language, is more secure when the people writing it understand that, and are constantly considering it.
“But if you consider the track record of projects like OpenBSD, or anything DJB touches, it’s easy to see that this doesn’t have to be the case. ”
The argument still applies when I consider their track record. The empirical studies military did in 90’s showed the Ada programmers were both more productive and introducing less defects than the C programmers. Same with C++ programmers but not as much. Turbo Pascal and Oberon users had amazing productivity with fewer, safety issues than C coders due to rapid compilation & type-system reducing debugging. SPARK went further by using a clean-language + tooling to prove absence of common errors. iMatix used DSL’s in tools like Xitami web server to auto-generate portable, high-performance, safe C from high-level specs. Galois is doing same with Ivory and Tower languages. Recently, COGENT did a similar level of assurance as seL4 for specs and implementation of a whole filesystem by two, non-formal experts in a functional language. It performed well, took a fraction of time of seL4, extracts to C currently to use certified compiler for C, and can be extended so something like SPARK for added guarantees.
We’re not hating C because of CVE’s. We’re hating C because it’s inferior on many objective metrics to alternative designs that were implemented from back in its time to the current one. The OpenBSD team or DJB using such technology would’ve gotten more assurance at a faster pace with easier long-term maintenance. All due to better, language design. All that mental energy working around C’s deficiencies would’ve instead been invested into new developments of actual products that came with mathematical proof of correctness of certain features or limitation of damage. We think that, in presence of better tools, it’s irrational to use C unless you have a very good reason. Also note that better tools can be made to generate C or use C FFI if its compiler or ecosystem trying to force C on you. Then, the C parts get gradually rewritten themselves over time. Precedent for those in Ada and recently Rust.
“The reasons that these projects are successful has more to do with the fact that they think about things like privilege separation and security posture. Every program, written in any language, is more secure when the people writing it understand that”
It’s orthogonal. The high-assurance, security field came up with the components far back as the late 70’s. The MULTICS evaluation by co-inventors of INFOSEC, Karger and Schell, showed many issues that were recurring in INFOSEC including choosing a programming language that leads to lower defects. They were applied in GEMSOS (mid 1980’s) whose properties Schell describes in 2nd link including specifically avoiding C (used Pascal w/ call-by-value internally) to ensure every state was computable. Also had secure multiprocessing with covert channel suppression and fine-grained POLA. Not sure if OpenBSD has all three even today. Available data indicates no A1-class kernel was ever penetrated during NSA’s 2-5 years of pentesting for certification of each or the 20+ years of field use for Boeing SNS or GEMSOS. So, people wanting real security should probably follow those principles that led to it in practice like using safer methods, architecture, and languages. ;)
https://www.acsac.org/2002/papers/classic-multics.pdf
https://www.fbcinc.com/e/nice/ncec/presentations/2015/Schell.pdf
I appreciate your comment. There are a lot of important points in there. I wrote, and then rewrote, my original comment a few times. I don't disagree that there are other options out there that we should be using, but despite all of that, hardly anyone is. The operating system in everyone's pocket, on most everyone's desk, and in racks at data centers almost everywhere is written in C and/or C++. The utilities, network servers, and scripting languages that people use on top of those operating systems are also largely written in C. This may be due to Worse is Better, and it may be an idiotic stance for us as humans, but it is reality.
How do you propose we move to a better reality? And what's different about your proposition that will make people actually listen and start adopting safer tools? Until that enormous revolution starts, we're all but stuck supporting trillions of lines of C/C++ code. Sure, we can start replacing it with Rust, or Ada, or any of the other suitable, safe languages out there. But something tells me it's not going to be at all trivial to get that going at the scale we need; otherwise, I think we'd have seen a rise in the number of programmers using Oberon or Modula-2 on a daily basis starting in the late '80s and continuing to the present.
Fortunately, Go and Rust do exist and are starting to gain a bit of traction. There's even an operating system being written in Rust. But this shouldn't be taken as “widespread adoption.” It should probably be taken as the “Bay Area web/mobile startup echo chamber” (to use the original author's words) having some new shiny toys to play with. But maybe, in time, languages like Rust will take over the landscape everywhere and we'll slowly phase out our heavy reliance on C.
Good points. The legacy and clean-slate problems are best kept as two separate things. The reason is that early work on UCLA Secure Unix, etc. showed that the UNIX architecture was inherently impossible to secure: too complex, too many covert channels, and so on. The work from Schell's era, plus the MILS designs later, took a simpler approach, best illustrated by the open Nizza paper:
https://os.inf.tu-dresden.de/papers_ps/nizza.pdf
This architecture starts with a tiny component in kernel mode that is easy to verify, enforces key properties, and can't be bypassed. There are extra components, best kept in user mode, for specific things like init, secure storage of secrets, I/O, and even scheduling in recent designs. These are privileged and designed to be shared (MLS, MILS). On top of this sit both VMs for legacy apps and stand-alone apps running directly on the kernel. Frameworks like CAmkES or ZeroMQ can facilitate easy communication. The idea is that you move what you can of the security-critical functionality out of the VMs and onto the trusted kernel. RTOSes like INTEGRITY-178B went further with Ada and [tiny] Java runtimes for those apps. The Nizza demonstrator isolated the GUI, email signing, and the VPN's crypto mechanism. Genode.org applies this to a whole desktop, although proprietary products got finished first with INTEGRITY PC, LynxSecure, VxWorks MILS, and Sirrix TrustedDesktop.
So that's the proven method that mostly preserves legacy. What if you want the kernel safe without rewriting all the C? That's where projects like SoftBound + CETS and SAFECode come in. The first gives you complete memory safety for C with less of a performance hit than naive versions of the same idea. SAFECode aims at something similar but has already been used on Linux and FreeBSD. Criswell et al., in SVA-OS, add OS interfaces that enforce safety for things like memory management and DMA manipulation, so both the interface and the implementation code are safer. This comes with a performance hit, but it looks reasonable for the level of protection it offers. You basically buy a CPU with a bit more speed and cache. Better to spend $50-100 on that than on an antivirus suite anyway. ;)
http://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/
http://sva.cs.illinois.edu/
The next approach, the best but most costly, is to solve the underlying problem: both the languages and the hardware itself make software insecure by default. The leading effort in security is probably CHERI, which modifies MIPS, a C compiler, and FreeBSD into a capability-secure platform. Important for our discussion, it lets you run legacy code under MMU protection side-by-side with fine-grained POLA for the components you modify. There are two other models. One is the hardware version of SoftBound + CETS, called Watchdog, which knocks out most of the performance penalty. Another is a series of CPUs modified with crypto between the CPU and the memory unit that protects the confidentiality and integrity of pages from software and RAM attacks. Let's just say they can do memory safety, too, as part of their function. They're the strongest of the security measures, since everything outside the SoC boundary is untrusted.
https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
https://theses.lib.vt.edu/theses/available/etd-10112006-204811/unrestricted/edmison_joshua_dissertation.pdf
These are collectively the strong approaches to dealing with the C situation without costly rewrites. They're what the money and time should be poured into. Every improvement in speed or security will benefit every tool using them automatically. Some already run OSes, especially FreeBSD, right now. Robust implementations of both the compilers and the CPUs would get the most done. The compilers (or just passes), kernels, trusted components, middleware, whatever, could be done in a better language of your choice so long as it can call or be called from C functions. The result would be a small TCB whose components are stronger than monoliths alone, with a lower chance of being bypassed, since it's the tactical shit like re-ordering memory or blocking one vector that gets bypassed the most. The OSes currently rely on combinations of those. Combining strong security with automated obfuscation/diversification was in my proposal to counter nation-states, so they can still use those OpenBSD tricks if they want to spend the extra time. :)
I'm not DJB. I've never sold myself to anyone as DJB. But it does no good for me to tell my employers that; they won't understand what it implies. If I code in Rust rather than C, though, my employers won't find out the hard way that I am not DJB.
Look, DJB is human. I presume you are too. This stuff can be learned. DJB wrote a very interesting, and easy to follow paper about qmail’s security practices. If you are disciplined, you can write safe and secure C. It’s not impossible to do so.
It isn't hard to answer the question, “why shouldn't I just use Rust?” You probably should! Rust makes it harder to shoot yourself in the foot, since it has guarantees on memory safety, etc., closing off a common attack vector for code written in C. But even if you use Rust, you're not strictly safe. It's still possible to have exploitable security concerns; they just don't involve buffer overruns, which are all too common.
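To make that concrete, here's a minimal sketch (my own toy example, nothing from the article): an out-of-bounds index that in C would be a silent overrun is caught in safe Rust, either by a runtime panic or by an explicit Option.

    fn main() {
        let buf = [0u8; 8];
        let idx: usize = 12; // imagine this came from untrusted input

        // In C, buf[idx] would read past the end of the array: undefined
        // behavior and a classic ingredient of exploitable bugs. In safe Rust,
        // indexing is bounds-checked, so `buf[idx]` would panic at runtime
        // instead of silently reading out of bounds.

        // The non-panicking way is to ask for an Option:
        match buf.get(idx) {
            Some(byte) => println!("byte = {}", byte),
            None => println!("index {} is out of bounds for a {}-byte buffer", idx, buf.len()),
        }
    }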
No buffer overruns. No race conditions. Still a chance for integer overruns/underruns, and of course errors in my own logic.
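For illustration, a tiny sketch of what I mean by integer overruns still being my problem (toy numbers, not my actual code): debug builds panic on overflow, release builds wrap, and the explicit methods make the choice visible.

    fn main() {
        let a: u8 = 200;
        let b: u8 = 100;

        // `a + b` overflows u8: a debug build panics, a release build wraps to 44
        // unless overflow checks are turned on. Either way the logic bug is still
        // mine to find -- Rust just keeps it from becoming memory unsafety.

        // Explicit, overflow-aware alternatives:
        assert_eq!(a.checked_add(b), None);       // None signals overflow
        assert_eq!(a.wrapping_add(b), 44);        // wraps modulo 256
        assert_eq!(a.saturating_add(b), u8::MAX); // clamps to 255
    }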
Now, I have battle scars from writing lots of C code. To some degree, you need experience in C to understand why it’s worth climbing the Rust learning curve. But how much?
To clarify, Rust purports to prevent data races, not race conditions. Notably, safe Rust code can still deadlock. (We do try to take steps to avoid deadlock too. For example, the standard library provides channels and hard-to-misuse mutexes. But there are no guarantees there about deadlock. You can still lock/unlock mutexes in the wrong order. :-))
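For example, here's a small, deliberately contrived sketch of a lock-order inversion that safe Rust compiles without complaint; there's no data race, but the two threads can deadlock.

    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    fn main() {
        let a = Arc::new(Mutex::new(0));
        let b = Arc::new(Mutex::new(0));

        let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
        let t = thread::spawn(move || {
            let _ga = a2.lock().unwrap();            // thread 1: lock a, then b
            thread::sleep(Duration::from_millis(50));
            let _gb = b2.lock().unwrap();
        });

        let _gb = b.lock().unwrap();                 // main thread: lock b, then a
        thread::sleep(Duration::from_millis(50));
        let _ga = a.lock().unwrap();                 // lock-order inversion: deadlock

        t.join().unwrap();                           // never reached
    }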
I have to wonder just how important speeding up his program is to a Rust developer that he would risk deadlocks by going to mutexes, when the more Rust-ly abstractions will do the same job, albeit with some more waiting around.
For my use cases, the benefit to ditching Python/Java and going down to native code is well worth some unneeded clone() calls and syncs here and there.
But I stand corrected.
Hmm, could you elaborate a bit more on this? A Mutex does have a bit of nice abstraction around it (the destructor is responsible for unlocking the mutex once its guard has gone out of scope, for example). Are the other abstractions you're referring to channels? I'm not sure those are necessarily slower than mutexes; certainly, the channels in the crossbeam crate don't use locks at all, IIRC.

Of course we're in agreement. :-) I've just seen a lot of people say “Rust prevents race conditions” and I like to make sure to nip that in the bud. (Neither Java nor Python prevents race conditions either.)
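To illustrate the destructor point, a minimal sketch (mine, purely illustrative): the lock is released when the guard is dropped at the end of its scope, so you can't forget to unlock, though you can certainly still hold the lock longer than you intended.

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32);

        {
            let mut guard = counter.lock().unwrap(); // lock acquired here
            *guard += 1;
        } // guard dropped here; the MutexGuard's destructor releases the lock

        // The lock is free again, so this second lock() doesn't block.
        println!("counter = {}", *counter.lock().unwrap());
    }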
Not really; I'm too much of a neophyte at the moment. But I have enough experience with concurrent programming that I prefer to start out by mapping my tasks and the communication patterns between them, based on the patterns in the ZMQ manual, and then write the code using whatever implements those patterns in the language I have to use. In Rust, this winds up just being channels.
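For instance, a ZMQ-style push/pull pipeline maps pretty directly onto std::sync::mpsc channels (a toy sketch of the kind of thing I mean, not code from a real project).

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // A ZMQ-style pipeline: a producer pushes work items down one channel,
        // a worker pulls them and sends results back on a second channel.
        let (work_tx, work_rx) = mpsc::channel::<u32>();
        let (result_tx, result_rx) = mpsc::channel::<u32>();

        let worker = thread::spawn(move || {
            for item in work_rx {                   // runs until work_tx is dropped
                result_tx.send(item * 2).unwrap();
            }
        });

        for i in 0..5 {
            work_tx.send(i).unwrap();
        }
        drop(work_tx);                              // close the channel so the worker exits

        worker.join().unwrap();
        for result in result_rx {                   // ends once result_tx is dropped
            println!("result: {}", result);
        }
    }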
I routinely use mutexes in Golang, which has a safer channel primitive, because the cost of channels is very high in performance critical code. I can’t speak for Rust, since I don’t have enough experience with it, but that’s one potential reason.
Lol. That’s an original way of looking at the overlap of DJB’s skills and language choice. I might make a similar quip about SPARK in the future.
There’s a whole hierarchy of things to learn if you want to understand the full stack behind a computer:
Binary arithmetic. Digital circuits. Digital gates. Sequential logic. ALUs and CPUs.
Then on the software side come: Instruction sets. Assembly language. C. OOP languages. Languages running on VMs, e.g. Java, Python, C#. Domain-specific languages and applications. Full-stack applications running in the cloud.
Arguably, at any one of these levels, if you want to understand your abilities and limitations, you need some experience one level below. For example, my lack of experience programming in assembly language does limit my versatility in an embedded context. But demanding that someone spend an inordinate amount of time on C can quite easily turn into hazing, IMO.
It’s a fairly well written article, but it doesn’t go far enough. Real programmers, of course, write in assembly.
I think you mean:
Real programmers write in Verilog! All this silly “software” is just abstraction on top of the real platform: physics!
Real programmers hand-wire together transistors that they made themselves. Like so:
https://hackaday.com/2010/05/13/transistor-fabrication-so-simple-a-child-can-do-it/
https://hackaday.io/project/665-4-bit-computer-built-from-discrete-transistors
Ok, dudes, I fess up: Of course, the real programmer is Mel.