Nothing prevents you from ignoring the features you don’t need.
The real trap of C++ is ending up with a byzantine program because you mix all the concepts into a single project. That, you shouldn’t do.
Actually, as soon as you reach for the standard library classes and templates…
You’ve got them all. Every damn C++ feature ever.
Some might be nicely wrapped for you…
Until something goes wrong and you start stepping through in the debugger; then the WTFs and “what the hell does that do?” moments hit you from all sides.
C++ is unsafe. It’s an expert toolbox meant to be used when you actually know what you’re doing.
Languages like D and Rust are questioning that one.
Do you really have to be unsafe if you want power and speed?
The answer seems to be that you can have power and speed, and sweep the “unsafe” parts into small, well-defined portions of the code.
Yes, the compiler can do a lot more to protect you without costing you speed and expressiveness.
C++11 and later are hugely better than what came before… but various bad choices in the past make them irredeemable.
I would strongly urge all C/C++ programmers to move to one of the modern options like D or Rust.
Everybody will be better off.
My personal choice is D.
Yes, and no. I primarily write C, but when I do write C++ I try to think about how the code would look if it were roughly translated to C. For some of the code bases out there that use the new C++11 features, this has gotten a lot harder. Code using the new lambda syntax, std::thread, async, and futures corresponds to a ton more boilerplate code that you’d have to write in C. Also, the use of auto in enough places almost requires a developer to make the jump to an IDE (where the context around “auto” can quickly be provided), whereas C is mostly written in an editor. I do feel like some of the improvements being made to the language allow you to write code faster at the expense of possibly hurting code readability.
I do feel like some of the improvements being made to the language allow you to write code faster at the expense of possibly hurting code readability.
I think it’s often the opposite - lambdas or auto are not really any easier to write than explicit custom functors (your IDE can do all the boilerplate for you anyway), but they’re a lot easier to read.
Well, I just flat out disagree with the title. Complexity is not a feature. A feature would be the compiler figuring out all by itself that some parameter can be an r-value reference, and doing the right thing automatically so I don’t have to think about it.
C++ is the result of adding every feature under the sun to a programming language, and then making the programmer responsible for getting all of the nitty-gritty details right.
This is supported by the author of C++ with decent justifications. He saw the benefits of languages like Simula, ALGOL68, CLU, and ML. He also knew C’s momentum meant he’d likely not get adoption unless he built on it with compatibility. The result of bolting benefits of other languages onto C was C++. It’s needlessly complex but that comes from the key constraint of using/reusing C language & its issues.
I used to do a lot of C++ programming. A few years back I got fed up and switched to C for the projects that used to be C++. I’ve been a lot happier and more productive. In C, you always know what’s going on. In C, third-party libraries have a sane interface.
IMO, any C++ project would be better written in C or a dynamic language, often both.
One thing I love about C, is that it mixes with other languages easily. C++, not so much.
I’m not a C expert or a C++ expert, so this question may be noobish.
One place where I’ve heard C++ shines is a specific case of generic programming, e.g. when you want to code an algorithm for float or for double only once, without multiple instances kicking around.
If you’re writing C and therefore presumably don’t have templates, what would you use in this case?
If the only thing your code is doing is abstracting over floats and doubles, copy and paste.
No, actually that’s a good point, and I’d argue one of the actual features C++ provides over C (where I consider OOP to be more of an anti-feature, but that’s another issue entirely).
In C, your only choice is to use macros. However, I think code that must do exactly the same thing to a float or a double is quite rare. In practice, the code may allow one or the other, but not both. In that case you just use a typedef or #define MY_REAL float or something like that.
C++ templates are nice for data structures, in theory, but since the code doesn’t actually do anything with the data, C’s void pointers work fine in those cases.
However, I think code that must do exactly the same thing to a float or a double is quite rare. In practice, the code may allow one or the other, but not both. In that case you just use a typedef or #define MY_REAL float or something like that.
The best cases I have found for templates are “x to string” and “string to x”, you can do stuff like this:
template <class T>
std::string to_string(const T& value) {
    std::ostringstream out; out << value; return out.str();
}
You can now call to_string with any type that is valid for ostringstream. You can combine this with variadic templates to create a printf replacement that is type-safe at compile time, called with something like:
myprintf("int: ", x, ", float: ", y, ", string: ", z, "\n");
Type safety! With the STL data structures if you mismatch the types you get an error at compile time instead of crashing at run time.
Some code bases have macros to define a_function_##T for whatever T. If you have lots of code to specialise, you can also do:
#define T float
#include "impl.h"
#undef T
I have just been reviewing a bunch of C++ code…
Smart pointers everywhere.
From a C programmer’s perspective it seems like a failure to think through ownership and lifecycle issues.
From a C programmer’s perspective, shared_ptr<> seems like “I’m not using globals ’cause globals are Bad, but I’m going to share and mutate this resource from everywhere; it’s not a global, it’s A Smart Pointer.”
I’m sure there are legitimate reasons for smart pointers, but boy are they a slippery slope to perdition!
When testing this claim, compare it to something like Modula-3, designed in the Wirth style to reduce complexity while adding features. It achieved an incredible balance of the attributes one wants in a programming language:
The language was easy to specify, easy to learn, fast to compile, handled low-level programs, handled large programs, produced efficient code, and had a built-in stdlib and concurrency. Macros could probably have been added more easily than C++’s template metaprogramming, thanks to the clean and simple language underneath. That simplicity is why Modula-3 had the first standard library to be mathematically verified free of certain errors. The tools for doing that evolved, with considerable effort, into Frama-C and into Krakatoa for Java. I’m not sure C++ even has a tool like that, since it’s too complex for most CompSci people to tackle in such a total fashion.
On the other end of the spectrum, you see the Schemes, which can express arbitrary stuff in concise code. If performance is a low priority, about 6 constructs can express almost everything, up to the interpreter itself. People, including myself, leveraged that power to embed imperative languages in Scheme to get the rapid development and the macros not present in the imperative language. People have done that for C (many), industrial-style BASIC (mine), Prolog (LISP vendors, sklogic), and Standard ML (sklogic, Myreen). Recently, Haskell users like Galois Inc have been doing the same thing for DSLs like Ivory or Tower that extract to C that’s provably safe from various things.
So, there are numerous languages that can do most of what C++ can without its complexity. The imperative alternatives have a fraction of its complexity. The LISPs can do more, including emulate C++ itself, but the primitives needed have a fraction of even Wirth-like complexity. For compiler implementation, Wirth- and Scheme-style languages had more success as small projects than C++ compilers did, even with professionals trying to build them. All were used successfully in industrial projects, with most of the users writing about how much more productive they were, in both lines written and defect rate, than in languages like C++.
I’d say C++’s complexity is provably a weakness of the design choices and constraints of the language. It’s not inherently necessary in an alternative language, and it is overcomplicated versus the benefits offered. Done today in an ML or Scheme, that complexity might not even be necessary to provide capabilities comparable to C++’s, with C compatibility and less complexity than C++.
I tend to judge a language based on the worst that can be done with it, rather than the best. Good engineers will write good code in any language. Bad engineers will write bad code in any language. What determines the fate of a codebase tends to be what its average engineers write. Admittedly, this means that an external variable weighs on code quality: you don’t encounter as much shitty Haskell code as shitty Java code for a variety of reasons, but one is the (dynamic) fact that Haskell is a niche language that attracts better programmers. If all of the mediocre engineers had to learn Haskell, you’d probably see a lot of terrible Haskell code (e.g. the DespondencyInfectionVisitorFactoryT monad transformer).
I’ve found Scala to be a massive disappointment, not because everything one can do with the language is terrible, and not for a lack of power and features (it’s largely the opposite) but because the language has too much complexity. Combine this with business requirements, deadlines, and mediocre programmers and you get horrible codebases. With C++, there’s even more room for awfulness as features and sharp edges (e.g. iterator invalidation) combine in unplanned ways and unsafe code gets into production.
I wouldn’t say that there’s never a good reason to use C++, but my experience is that good C++ programmers use the “++” part very conservatively. It’s “C plus a little”. Certainly, they don’t get into multiple inheritance and friend classes and Turing-complete template tricks unless they have no other options. I think that a lot of this is because so much of C++ makes no sense unless you understand C and assembler. Until you understand C arrays for what they are, iterator invalidation (and, in particular, the tradeoffs that led to its being a risk) just seems insane.