The frustration with MSVC, and the generally laggard adoption of C upgrades that leaves a 23-year-old standard still “too new”, was one of the reasons I jumped ship from C.
It’s pretty bleak to think that in C even the smallest language improvement will take decades to become usable. C is effectively frozen, with no prospect of improving in the foreseeable future.
Well, you can use Clang on Windows, which puts it at parity with every C alternative.
I.e., which C alternatives does MSVC support? Probably none.
MSVC has had fairly good support for the latest C++ standards for the last decade. We switched to C++17 a few years ago (and can probably switch to C++20 now) with no problems. Ubuntu LTS and RHEL not updating GCC (and Ubuntu shipping spectacularly buggy versions: their 9.3 was awful) is more of a problem than Windows. We’ve just written an RTOS in C++20 for an embedded target with 256 KiB of RAM (about 10 KiB of which we want to use for the core RTOS). I have no idea why anyone would choose to write C these days given the alternatives.
I’ve tried, but it was tiring to keep explaining to users of my library that their compiler was the problem. Visual Studio itself is pretty good; there are people really invested in it who expect to be able to use it.
And as curl notes, C99 wasn’t that much better, and with the exception of the controversial VLAs, the code could be converted to MSVC’s C dialect, so insisting on C99 would have looked like doing it out of spite.
Visual Studio 2015 and higher support C99. What Visual Studio version were you using?
I was using C99 before 2015; not everyone upgrades, and there are still missing features and idiosyncrasies specific to MSVC.
Or you could just write C++ if you want a C upgrade, as the MSVC team recommends.
C++ has more stuff, but overall it’s not an upgrade to me.
What features did you want from newer versions of the C standard that were not available in the version of C++ that MSVC supported?
I don’t want to rehash the old C vs. C++ debate, but C++ is not C with a few warts fixed; it’s a different language with a whole bag of its own warts.
For new projects, it’s certainly possible to use C++ as a completely different language with its own set of idioms from ANSI C, but that isn’t the only way it is intended to be used. C++ was specifically designed to facilitate incremental or light adoption in existing ANSI C codebases that no longer need high portability. Many notable organizations have their own “C++ style guides” in which they bless certain features and condemn others. It isn’t required to pull in the entire kitchen sink. For instance, it’s relatively common to use C++ without exceptions, a usage that is particularly compatible with otherwise ANSI C codebases.
My personal opinion is that, to the extent that C99 and its successors add new features, those features should not exist. C was designed to be an easy-to-implement and easy-to-port language; adding new features is counter to those goals. It’s as counterproductive as adding new features to assembly code: it unnecessarily threatens the bootstrapping process. Right now, new platforms simply need to provide an ANSI C compiler to support bootstrapping the rest of the existing software stack. You can think of ANSI C as a narrow waist; that’s not its weakness, that’s its strength.
Named and indexed initializers.
These, and 64-bit types, are more or less the only changes I want between C89 and C99; C++20 added the first, but is still missing the second.
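Concretely, a minimal sketch of those C99 features (all names here are just for illustration):

#include <stdint.h>

struct point { int x, y; };
struct point p = { .y = 2, .x = 1 };   // named (designated) initializer; C99 allows any member order
int lut[16] = { [0] = 1, [15] = -1 };  // indexed initializer; C++20 adopted the named form but not this one
int64_t big = INT64_C(1) << 40;        // fixed-width 64-bit types from <stdint.h> (C99)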
If curl ever does decide to go C99+, VLA usage should be banned, IMHO. That’s pretty easy to enforce.
They were required in C99, then became optional in C11+, and I have yet to hear of any decent justification for using them in clean/secure code. In my mind, they’re better abandoned as a past misfeature.
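(Since C11, an implementation that drops them at least advertises the fact, so code that must stay portable can check for it:)

#if defined(__STDC_NO_VLA__)
// this implementation provides no VLA support (C11 made VLAs optional)
#endif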
Either you’re letting the variable sizing part get big enough at runtime that it could exhaust stack space (ouch!), or you’re sure the variable size is constrained to a stack-reasonable limit, in which case you’re better off just using a fixed stack array length at that limit value (it’s just an SP bump, after all; there’s not much real “cost” to “allocating” more). There are some edge case arguments you could make about optimizing for data cache locality for variable and commonly-small stack arrays that are occasionally larger (but still reasonable and limited), but it doesn’t seem worth the risks of allowing them in general.
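A minimal sketch of that trade-off (LIMIT and the function names are illustrative):

#include <string.h>

#define LIMIT 512 // the “stack-reasonable” bound we’re assuming holds anyway

// VLA version: stack usage depends on n at runtime
void risky(const char *src, size_t n) {
    char buf[n];            // a large n can exhaust the stack
    memcpy(buf, src, n);
}

// fixed version: the same SP bump, but the worst case is known at compile time
void safer(const char *src, size_t n) {
    char buf[LIMIT];
    if (n > LIMIT) return;  // enforce the bound explicitly instead of trusting it
    memcpy(buf, src, n);
}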
VLAs are worse than that: the compiler has to add code to touch each page that might be needed for the VLA so that the kernel isn’t surprised by attempts to access unmapped pages a long way from the current stack allocation.
One of the first things I did after starting my current job was to evict a rogue VLA and make sure no more of them appeared. https://gitlab.isc.org/isc-projects/bind9/-/issues/3201
Do you mean flexible array members with VLAs? Because that’s something I’m currently trying to find out more about. I find them very useful for avoiding indirection and easily deep-copying things in a GC.
I wouldn’t think so. They are unrelated. Yes, flexible array members have their good uses (grouping allocations), whereas VLAs are mostly a trap. You can totally disable VLAs (-Werror=vla) and keep flexing the last struct member.
VLAs are about dynamic memory allocation on the stack, whereas flexible array members are about using dynamic memory for dynamically sized structs. So they could be used together, but that doesn’t mean you should.
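A minimal sketch of the distinction (the struct and function names are illustrative); the flexible array member sits at the end of a single heap allocation, which is what gives you the indirection-free deep copies mentioned above, and no VLA is involved:

#include <stdlib.h>
#include <string.h>

struct blob {
    size_t len;
    unsigned char data[];  // flexible array member (C99), sized at allocation time
};

// header and payload share one allocation, so a deep copy is a single
// memcpy of sizeof *p + p->len bytes with no pointer chasing
struct blob *blob_new(const unsigned char *src, size_t n) {
    struct blob *p = malloc(sizeof *p + n);
    if (p) {
        p->len = n;
        memcpy(p->data, src, n);
    }
    return p;
}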
Nearly all VLAs I’ve come across have been accidental. In particular, the constant-sized VLA trap:

// VLA in disguise because BUFSIZE is not a constant expression:
const size_t BUFSIZE = 42;
char buf[BUFSIZE];
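The usual fix, for what it’s worth, is an enum constant (or a macro), which is a genuine integer constant expression:

enum { BUFSIZE = 42 };  // integer constant expression
char buf[BUFSIZE];      // a real fixed-size array, not a VLA; -Werror=vla stays quiet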
I meant the whole general concept of VLAs, or at least any use case I’ve seen of them. Could you give an example of what you mean? I have a hard time wrapping my head around how flexible array members would fit together with VLAs.
I know curl can be built on probably every hardware platform that exists, but does there exist a platform that can compile C89 but doesn’t support C99? I believe even MSVC started supporting C99 in VS2015. What systems are still out there that don’t support C99?
From the article:

“A large number of our users/developers are still stuck on older MSVC versions so not even all users of this compiler suite can build C99 programs even today, in late 2022.”
I’m still writing Fortran; this is fine.
But are you still writing in FORTRAN II?
I don’t think I’ve ever come across a FORTRAN II (back then, characters were 6 bits, no lowercase) codebase outside of a museum, but I have come across (large, actively used) Fortran 77 code bases that didn’t want to move to Fortran 90, and that was only about 10 years ago, when they’d had 20 years to get used to the idea of the newer language spec, about as long as C programmers have had to get used to C99.
I strongly suspect that people who use these languages are the ones who maintain an adversarial relationship with their compiler and don’t want new features. The C++ ecosystem moved quite rapidly to C++17 in the last few years, but C++ programmers tend to view their compiler as a friend that can help them generate good code, whereas C and Fortran programmers treat their compiler as an enemy that will try to introduce bugs.