Thanks for posting this. I’ve very recently decided to start writing and engaging more online, and seeing things like this makes me much more comfortable doing so; it motivates me to help contribute to a more positive environment. Fear of not being the smartest person in the room has kept me lurking for years, but writing and engaging online, done in the right manner and for the right reasons, offers real opportunities for personal growth. It’s also a chance to contribute to the world around me in some small way.
Being able to spread information and hold discussions online is important, but there are some serious lessons in communication and relationships that need to be learned before the web platform can reach its maximum potential benefit in this regard. I’m excited to be a part of it, help where I can, and keep learning and growing myself. :)
Thanks for the reply. Glad it could help. :)
LibTLS has been absolutely fantastic to work with! It seriously beats learning the enormous, complicated, and currently-being-broken (in 1.1) OpenSSL API, and it makes my software more secure because there’s much less for me to mess up. Thanks for the great work.
No, it’s a useful practical concession. I think people are going to realize how pointless calling unwrap().unwrap().unwrap() can get. It also doesn’t impact memory safety.
Undefined behavior on null dereference should become a thing of the past, though.
I’ve written over 40 Rust crates and I’ve never written a single unwrap().unwrap().unwrap().
I have written unwrap().unwrap() 20 times though. 14 of those were in tests.
The great thing about unwrap is that it tells the compiler “I fully accept that this might panic and I’m okay with it.” It isn’t accidental or because you forgot. It might be because you were lazy, but that’s not something any type system can fix. Any sufficiently lazy coder will find a way around your type system.
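To make that concrete, here’s a minimal Rust sketch (function names are illustrative, not from any particular codebase): the unwrap() marks the exact spot where a panic was accepted on purpose, which is the signal being described.

```rust
// Sketch of the point above: unwrap() makes the panic path explicit
// at the call site, rather than crashing by accident somewhere else.
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}

fn port_or_panic(s: &str) -> u16 {
    // Deliberate: we accept a panic here if the input is malformed.
    // A reviewer sees the unwrap and knows it was a choice, not an oversight.
    parse_port(s).unwrap()
}
```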
Pretty much matches my thoughts on why people should learn C, even if they don’t use it. And by learn, I mean complete at least one non-trivial project, not just a few exercises.
In many languages, a loop that sums a sequence of integers looks a lot like one that concatenates a sequence of strings. The run-time performance is not similar. This is obvious in C.
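A quick Rust sketch of the hidden cost difference (the quadratic variant is deliberately naive, standing in for languages where every concatenation copies the accumulated prefix):

```rust
// Summing integers: one add per element, O(n) total, no allocation.
fn sum_ints(xs: &[i64]) -> i64 {
    let mut total = 0;
    for x in xs {
        total += x;
    }
    total
}

// Rebuilding the whole string each iteration re-copies the accumulated
// prefix every time: roughly O(n^2) total work for n pieces.
fn concat_quadratic(xs: &[&str]) -> String {
    let mut out = String::new();
    for x in xs {
        out = format!("{}{}", out, x);
    }
    out
}

// Appending in place amortizes reallocation back down to O(n).
fn concat_linear(xs: &[&str]) -> String {
    let mut out = String::new();
    for x in xs {
        out.push_str(x);
    }
    out
}
```

The loops are structurally identical; only the cost model differs, which is exactly the trap.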
I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.
There’s a place for a portable assembler that’s a step above worrying about whether the destination is the left or right operand and what goes in the delay slot and omg so many trees. And whether it’s ARM or Intel or POWER, all computer architectures work in similar fashion, so it makes sense to abstract that. But we shouldn’t abstract away how they work into something different.
Rust helped me understand how computers work much better than C did (I learned both C and C++ at uni and had to implement some larger projects in them). It just makes explicit a lot of semantics that you need to keep in your head in C. It keeps strings as painful as they are :). The thing that’s really lacking, and I agree on that fully, is any kind of structured material covering all those details. If you really want to mess around with a computer at a low level, and not just write a tutorial on how to do so with your chosen language, C is still the best choice and will remain so.
Most arguments about C, though, at a closer look, end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#. I agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it out to be.
I really agree with the idea that Rust makes explicit a long catalog of things that one can do in C, but should not*.
Most arguments about C seem, though, at a closer look, end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#.
The missed point here is that important parts of the Java or C# toolchains or runtimes are written in C or C++; the argument rests on C (for the time being still “the portable assembly language”) being foundational, not just ubiquitous. C is almost always present, right above the bottom of the (technological) stack.
*Unless interacting with hardware / doing things that are inherently unsafe.
On further reflection, I think I missed @tedu’s point, which is something more like, “C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C.”
Which would probably be true in the absence of aggressive UB optimizations. But, if horses were courses…
“C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C if we assume it was compiled with -O0”
Here’s an example of something I ran into the other day that’s easy to do in C but unnecessarily hard to do in Rust: adding parent pointers to a tree data structure. In C I’d just add a field and update it everywhere. In Rust I’d have to switch all my existing left/right pointers to add Rc<T>.
I’m willing to buy that a binary tree implementation in Rust encodes properties it would be desirable to encode in the C version as well. But once you start with a correct binary tree Rust doesn’t seem to prevent any errors I’d be likely to make when adding a parent pointer, for all the pain it puts me through. There’s a lot of value in borrow-checking, but I think you’re understating the cost, the level of extra bondage and discipline involved.
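For concreteness, the refactor being described tends to look something like this in Rust (a hypothetical sketch, not anyone’s actual code): child links become shared Rc pointers, fields gain interior mutability, and the parent link must be Weak to avoid a leaking reference cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Once a node needs a parent pointer, ownership is no longer a plain
// tree: children are shared via Rc, fields gain RefCell for interior
// mutability, and the parent link is Weak so the cycle doesn't leak.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    left: RefCell<Option<Rc<Node>>>,
    right: RefCell<Option<Rc<Node>>>,
}

fn new_node(value: i32) -> Rc<Node> {
    Rc::new(Node {
        value,
        parent: RefCell::new(Weak::new()),
        left: RefCell::new(None),
        right: RefCell::new(None),
    })
}

// Attaching a child now means updating two links, one of them weak.
fn attach_left(parent: &Rc<Node>, child: Rc<Node>) {
    *child.parent.borrow_mut() = Rc::downgrade(parent);
    *parent.left.borrow_mut() = Some(child);
}
```

In C this whole refactor is “add a field, assign it in two places”; here every existing link changes type, which is the cost being weighed above.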
It is everywhere, though: what do most people think runs microcontroller/AVR/PIC/etc. chips? Generally it is a bunch of bodged C and assembly.
The argument here is a bit different: you can avoid Java (I have zero interaction with it), but you can’t realistically avoid dealing with C.
I can totally do that. Lots of new high-performance software (Kafka, Elasticsearch, Hadoop and similar) is written in Java, so if you are in that space, you can realistically avoid dealing with C. You will certainly avoid C if you do anything in the web space.
You will certainly run C software, but that doesn’t mean you need to know it.
Ah, I see the disconnect: you’re using “avoid” in reference to having to program in it; I’m using it in reference to use. In which case we’re both correct but talking past each other in meaning.
Rust is pretty great, no arguments there, but I think at least one of the author’s points is that C as a language has remained INCREDIBLY stable over decades.
Can you honestly say that immersing students in Rust will still have the same level of applicability 10 years from now?
Strong disagreement with part of your point. C in the 90s was very different from C in the early 00s, which is very different from C now. The standards may not have changed that much (though C89, C99, etc. did change things), but the way compilers work has changed massively, e.g. in aggressive UB handling, meaning that the effective meaning of C code has completely changed, and continues to be in wild flux. C as a language is amazingly unstable, in part because the language specification is amazingly underspecified, and in part because so many common things are UB.
You’re quite right. I learned C back in the K&R days, and even the transition to ANSI was readily noticeable, if not earth-moving.
My point is that even having learned K&R, the amount of time it takes for me to come up to speed is relatively trivial. I would argue that this is nothing compared to say even the rate of change in idiomatic usage of the Java language over time. I learned Java back before generics and autoboxing, to say nothing of more recent Java 8 enhancements, and the Java landscape is a VERY different place now than it was when I ‘lived’ there.
I would disagree that the time to come up to speed is trivial in comparison - you are comparing apples and oranges.
The time to superficially come up to speed is trivial in comparison, but the time to actually learn how not to write heisenbugs that you previously didn’t have to worry about is another matter; unless you are extensively fuzzing and characterising your code, you don’t actually know whether you have come up to speed at all.
Does GCC even fully implement C99 yet? Many people still compile their C with -std=c89…
Most people’s continuing use of c89 is either cargo culting or the need to support very old compilers. GCC and Clang have both had adequate support of c11 for years now. If you’re working with an old microcontroller you might be stuck having to use their patched gcc 3.x (a very bad time, speaking from experience) but aside from this sort of situation using c99 or c11 is perfectly reasonable.
Most people are indeed in an environment where they can use either c99 or c11, but since c11 is not quite a superset of c99, c89 seems to have retained a certain role in C programmers' mental models as the core of C. The real working core today could in practice be a bit bigger, essentially c89 plus those features that were not in c89, but are in both of c99 and c11. But that starts getting more complex to think about! So if you want your stuff to compile on both c99 and c11 compilers, just sticking to c89 is one solution, and probably the simplest one if you already knew c89.
I personally wrote mostly c99 in the early 2000s, but one of the specific features I used most, variable-length arrays, was taken back out of c11! (Well, demoted to an optional feature.)
Perhaps better called runtime-length arrays. They aren’t variable in the sense of resizable, just with a size not specified at compile time; usually it’s known at function-entry time instead.
What does “fully implement” mean? C99 without the optional annexes? Yes. C99 with all annexes? No.
Regardless, clang is your best bet for something beyond c89 that works.
That wasn’t my argument. I was opposing the argument that C is the only language usable for that.
I’m quite aware that Rust is currently too young to teach to people as a usable on-the-job skill for the future.
I agree with flaviusb though, idiomatic C has had incredible changes over the last few years.
That’s kinda what I’ve been designing. At a high level but without the low-level chops (so far) to make it real. So since I can’t do, I teach :)
My Basic-like language Mu tries to follow DJB’s dictum to first make it safe before worrying about making it fast. Instead of pointer arithmetic I treat array and struct operations as distinct operations, which lets them be bounds-checked at source. Arrays always carry their lengths. Allocations always carry a refcount, so use-after-free is impossible. Reclaiming memory always clears it. You can’t ever convert a non-address to an address. All of these have overhead, particularly since everything’s interpreted so far.
While it’s safer than C it’s also less expressive. A function is a list of labels or statements. Each statement can have only one operation. You don’t have to worry about where the destination goes, though, because there’s an arrow:
x:num <- add y:num, z:num
You don’t have to mess with push or pop. Functions can be called with the same syntax as primitive operations.
It also supports tagged unions or sum types, generics, function overloading, literate programming (labels make great places to insert code), delimited continuations (stack operations are a great fit for assembly-like syntax). All these high-level features turn out not to require infix syntax or recursive expressions.
I’m working slowly to make it real, but I have slightly different priorities, so if someone wants to focus on the programming language angle here these ideas are freely available to steal.
I’d expect ropes to be more popular in high-level languages, to be frank. Java’s StringBuilder is a painful reminder that some supposedly “high-level” language designers still think in C.
I’m not thinking of any specific language, I’m thinking of a specific data structure: https://en.wikipedia.org/wiki/Rope_(data_structure)
Oh, you mean people should use ropes? In my experience, c# devs use StringBuilder and everyone else uses some spelling of Array.join(). I knew about, but still didn’t use, ropes when I wrote c++ code because what I really wanted was ostringstream.
In C++, using std::ostringstream is understandable, but how come high-level language pretenders like C# and Java force you to use StringBuilders?
Because for most development purposes the interface used when building up strings is more important than the performance. And StringBuilders are a common and well understood interface for building up strings incrementally.
What I’m saying is that StringBuilder is too low-level an interface. I want to be able to concatenate lots of strings normally, and let the implementation take care of doing it efficiently.
In theory you’re right, because StringBuilder is less widely known, less intuitive, harder to use, more verbose, etc.
In practice, high level abstractions with “magic” optimizations can cause problems if you’re really depending on the optimizations to work. E.g. you switch implementations, or make an innocuous change that causes the optimizer to bail out. It turns out even some high level languages aren’t really committed to their ideology.
I don’t want “magic optimizations” - far from it. I want a language where the cost of each operation is a part of the language specification, so that I don’t need to rely on implementation specifics to know that my programs are efficient. For instance, if the language specification says “string concatenation is O(log n)”, then naïve implementations of strings are automatically non-conforming, so I can assume it won’t happen.
The problem is that most strings are small – in fact, most strings would fit in a char* – and not concatenated all that often. In the common case, ropes are an incredible amount of overhead for a very small, common data structure.
Then you can use a hybrid implementation, where small enough strings are implemented as arrays of bytes storing the characters contiguously, and large strings are implemented as ropes. In most high-level languages, this doesn’t impose any extra overhead, since every object already has at least one word of metadata for storing dynamic type info and GC flags.
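A sketch of that hybrid in Rust (the 32-byte threshold and the type name are invented for illustration): short strings stay flat and get copied on concat; past the threshold, concat just builds a rope node and copies nothing.

```rust
// Hybrid string: small strings are contiguous bytes; large ones become
// rope nodes, so concatenation stops being O(total length).
enum Str {
    Flat(String),                       // small: contiguous bytes
    Concat(Box<Str>, Box<Str>, usize),  // rope node: left, right, total length
}

const SMALL: usize = 32; // arbitrary threshold, for illustration only

impl Str {
    fn len(&self) -> usize {
        match self {
            Str::Flat(s) => s.len(),
            Str::Concat(_, _, n) => *n,
        }
    }

    fn concat(a: Str, b: Str) -> Str {
        match (a, b) {
            // Small pieces: copy and stay flat, keeping overhead low.
            (Str::Flat(mut x), Str::Flat(y)) if x.len() + y.len() <= SMALL => {
                x.push_str(&y);
                Str::Flat(x)
            }
            // Otherwise build a rope node without copying either side.
            (a, b) => {
                let n = a.len() + b.len();
                Str::Concat(Box::new(a), Box::new(b), n)
            }
        }
    }

    // Materialize the full contents (only needed at the boundary).
    fn flatten(&self) -> String {
        match self {
            Str::Flat(s) => s.clone(),
            Str::Concat(l, r, _) => l.flatten() + &r.flatten(),
        }
    }
}
```

The point from the comment above holds: in a language where every value already carries a header word, the tag distinguishing the two cases is essentially free.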
It might be worth fighting this lunacy by showing that the performance benefits are negligible. Completely ignoring the safety/unexpectedness aspect, I feel like these optimizations are really worthless anyway. If null checks after memcpy were a bottleneck in my programs, I’d probably notice.
It builds a kernel module, which may hinder adoption somewhat. Especially when thinking about supporting other platforms like Windows or Android/iOS.
It seems to work over UDP. One of the nice features that e.g. OpenVPN and OpenConnect have is that it is possible to route all traffic over TCP/443 (with a performance penalty, but at least it works…).
There are a lot of comments over at HN with details from the authors, but they’ve also said they’ll be releasing a userspace version that’s cross-platform and in Rust.
HN link: https://news.ycombinator.com/item?id=11994265
I think their design is well justified and I really appreciate the focus on keeping the code small (4000 lines!). I think this is the VPN I’ve been waiting for. The cross-platform Rust version sounds promising too.
I’d suggest using content hashes instead of UUIDs. First of all, content hashes are strictly better, meaning they can do everything UUIDs can do (and more). Second, they are pure functions and should be no trouble in Haskell.
Better for…? User-visible identifiers? Primary keys?
There are certainly things that are most naturally data modeled such that the value is the identity, and if the value changes you really have a different object. It turns out that most things that one wants a database for are not in this category; you want an identifier that’s stable.
But I’m really not clear what use-case you’re envisioning, so it’s a really confusing statement. Your link talks about message delivery, and I suppose that one could build messaging infrastructure on top of Yesod, but it’s a generic web-and-database framework.
Sorry if I came across as a zealot. In this case, where the author doesn’t care about distributed ID generation and (presumably) wants mutable posts, I would use the hash of an ordinary numeric primary key plus a secret salt. If there were some content he knew wouldn’t change (e.g. post title/URL segment) that would be even better. If nothing else, this could save storage and, of course, works in Haskell.
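A sketch of that ID scheme in Rust; the function name and salt are invented for illustration, and std’s DefaultHasher stands in for what should really be a keyed cryptographic hash (e.g. HMAC) in production, since DefaultHasher’s output isn’t guaranteed stable across Rust releases.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive an opaque public ID from the internal numeric primary key
// plus a server-side secret salt, so the raw key is never exposed.
// Illustrative only: use a keyed cryptographic hash (HMAC) for real.
fn public_id(primary_key: u64, secret_salt: &str) -> String {
    let mut h = DefaultHasher::new();
    secret_salt.hash(&mut h);
    primary_key.hash(&mut h);
    format!("{:016x}", h.finish())
}
```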
Windows support is not a big deal for me personally, but I’m glad to see SQLite4’s LSM making inroads there. So thank you for this!
My question is, what is the status of LSM? It looks like development stalled in 2014, and Richard Hipp et al switched focus to CPU-optimizing SQLite3. How complete and usable is it currently?
Node is the wrong layer to change this. The Node developers are not security experts (nor should they be), which is why they use a library to take care of it. Additionally, almost everything Node does goes through an abstraction layer (e.g. libuv); OpenSSL is the abstraction layer in this case.
It would be much better to petition OpenSSL to switch. If the OpenSSL developers are incompetent, then petition Node to switch to a different library (I use LibreSSL personally, but I would still hesitate before switching a huge project like Node to it, for now). Or, get individual Node applications to switch, in cases where it makes sense and the authors can understand the need.
Cynically, I feel like a lot (not all) of the push for this is just people repeating things they’ve read.