I’d like to include Rust under “reasons C is growing obsolete”, but having studied and tried to code in the language, I find it’s just not ready yet. Maybe in five years.
Meanwhile, I find D quite mature already. It’s been around for 15 years, and it’s the C++ replacement I always wanted. Working in D is a very pleasant experience, with a healthy collection of user-contributed libraries, and it’s continuously improving.
This side of the millennium, I started one C project in 2006 and another in 2014. Together they generated several million in revenue. And let me put it this way: ESR wouldn’t have been writing NTPD in Go in 2001, so whatever the reason he wasn’t coding, it has nothing to do with C.
C was already relegated to a niche (albeit a broad one) before the Web took off. No one was really writing CGI scripts in C, but for backend core and systems programming it was, and still is, the mainstay. The inertia is tremendous; I don’t see Go or Rust (fine languages in their own right) making a dent so far. It will be a while before curl, the Linux kernel, Cisco IOS, Nginx et al. stop using C, and before new C-based projects stop spawning.
Learning Python in 1997 was quite the watershed event for me. It was wonderful – like having the Lisp of my earliest years back, but with good libraries! And a full POSIX binding! And an object system that didn’t suck!
I wonder if esr had ever used Common Lisp. Certainly, its object system is pretty amazingly unsucky — still the best I’ve ever seen.
Not long before that (certainly as late as 1990) that overhead [of a language with automatic memory management] was very often unaffordable; Moore’s law hadn’t cranked enough cycles yet.
I don’t know if it was really that unaffordable; certainly, many people refused to pay it, but I think that there were plenty of folks who did find it to be worth the cost, and then some. I wonder how much better the world might have been today if some of those better-managed languages had succeeded back then.
I mean, it’s ESR, so I’m pretty certain he probably can’t even follow the structure of a Lisp program, let alone write one.
Heh. ESR is an easy target (and I’m certainly no fan of the man), but to answer @bargap’s question - yes, I believe he’s familiar with some dialects of Lisp. He’s done a fair amount of Emacs hacking in his time (I’ll leave the issue of the quality of his hacks as an exercise for the reader).
WebAssembly is the reason that I’ve returned to C. Prior to that, like ESR, the last time I started a C project was in the 20th century.
Expect some moves with wasm and Rust in the future ;)
Trying to implement (for example) NTPsec on Python would be a disaster, undone by high runtime overhead and latency variations due to GC.
Go uses a GC as well, but one which is tuned more for latency afaik. Can the Python GC be tuned for latency? You can at least disable it (temporarily).
As far as I’m aware, the GC in CPython must stop the world and doesn’t have many tuning knobs. Maybe if you turn `threshold0` way down, turn `threshold1` up a little, and leave `threshold2` more or less alone, the 50th-percentile GC latency would get lower, at the cost of throughput (more, shorter young-generation collections)? The worst-case GC latency wouldn’t get any better, though.
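For concreteness, here’s a minimal sketch of what that tuning looks like through the `gc` module (the exact numbers are my own illustrative guesses, not recommendations):

```python
import gc

# CPython's defaults are (700, 10, 10).
print(gc.get_threshold())

# A latency-leaning tweak along the lines described above: turn
# threshold0 way down (more, shorter gen-0 collections), nudge
# threshold1 up a little, and leave threshold2 alone.
gc.set_threshold(50, 15, 10)
```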
You can at least disable it (temporarily).
You can switch CPython’s GC off and leave it off (and have only refcounting) if you’re prepared to live without reference cycles.
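For illustration, a minimal sketch of that refcounting-only mode, including the kind of reference cycle you’d be agreeing to leak:

```python
import gc

gc.disable()               # cycle collector off; refcounting still runs
assert not gc.isenabled()

class Node:
    def __init__(self):
        self.other = None

# A reference cycle: with the collector disabled, these refcounts
# never reach zero, so the pair is never reclaimed.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b                   # leaks quietly until the process exits
```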
I would expect the majority of libraries on PyPI have never been tested with the GC turned off. I wouldn’t be very surprised if people ignored pull requests that made their libraries more complicated for the sole purpose of enabling them to be used without the GC.
Yeah. I suspect he never tried? Even if Python is too slow, would Java or OCaml have been a perfectly adequate language for that kind of project? It’s hard to believe the answer is no if Go is an option.
As someone who got started with ML in 2005, C seemed obviously obsolete even then.
I’d definitely guess he never tried. Python is refcounted, and in the common case the latency is (high but) mostly deterministic.
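A toy demonstration of that determinism in CPython (my example, not the commenter’s): with pure refcounting, finalization happens at the exact moment the last reference goes away.

```python
class Resource:
    def __del__(self):
        # Under CPython refcounting this runs immediately when the last
        # reference is dropped, not at some later GC pause.
        print("freed")

r = Resource()
del r             # prints "freed" right here, deterministically
print("done")     # prints afterwards
```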
Pretty good read. I have a very similar story, with a similar timeline (scary really), and I too can’t recall the last time I started a true C/C++ project. Perhaps it doesn’t interest him, but two things he doesn’t mention that draw me to Python and Go are their vastly more useful standard libraries for things like string processing and networking, and, at least in Go’s case, a much more plausible way of doing threads. The thought of doing HTTP with C++ gives me hives, mostly for the reasons he mentions here for why he doesn’t think Rust is mature enough today: there are so many different libraries to do it, with no particular way of deciding which one is “good”.
Libraries are an interesting story, and break down a couple ways: Old vs New, and Nyetwork vs Network.
Old vs New is the simple fact that we didn’t have JSON or networking or graphics or the other things we rely on now when C was in its development phase. We had, at most, attempts at networking, none of which survived (the ARPANET went from NCP to TCP/IP after C had become established), meaning that any C network library would likely have been shaped wrong for what we ended up with. Ditto graphics and data serialization. We know what those things look like now, so we can bake those libraries into a programming language’s core.
Nyetwork vs Network is the simple fact that we couldn’t reasonably develop a “CPAN for C” (CCAN?) before pervasive networking solved the distribution problem. People got new C libraries from their OS vendor at the speed of a station wagon full of tapes. This means non-core C libraries are still a bit more annoying to get unless you’re one of the anointed few who use a package manager such as `apt-get` or `yum`.
Maybe this implies something about language development, but maybe the second factor solves the first: Do languages cycle in and out based on what libraries they’re new enough to ship with, or does the fact we can download new libraries these days mean Python or Go are going to stick around? Or is the field mature enough now we’ll never have a serious revolution in how major system components change again?
We had serialisation before JSON. There were ASN.1, canonical S-expressions, netstrings … JSON only won because lazy programmers used `eval` until it was in the standard library.
We had multiple data serialization standards before JSON, like we had multiple network standards before TCP/IP and multiple (hardware) graphics standards before raster graphics.
FWIW, he basically found a very old issue WRT epoll; I left a comment on the issue for others finding it in the future. Basically you can call epoll directly, but mio is the cross-platform binding to this that everyone uses, and Tokio is the higher-level network stack.