  1. 6

    As a full-time C++ dev who actually likes working with C++, I really don’t like language changes like this.

    On one hand, I agree this syntax is convenient and subjectively “better” than the existing syntax, and I understand the language needs to grow to stay relevant; on the other, I feel like big syntax changes aren’t productive.

    It’s becoming a little ridiculous.

    C++ is already big and complicated in the worst way: multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach.

    Deprecation doesn’t help, because outside of the big tech companies nobody can afford to go back and update debugged, working code, so most commercial systems just compile in C++11 mode or whatever, and in practice the language only grows.

    And as an outside observer to the standards process, there’s no clear direction or design goals for most changes, except that notable people and “experts” in the community proposed them, or somebody found it convenient and had the political savvy to get it adopted. So-and-so at a FAANG read a book about feature “foo” in language Bar, so now there’s a proposal to cram it into C++.

    Meanwhile, learning arbitrary new C++ changes takes away energy from learning new, better designed languages without all of the baggage. C and C++ were designed for an obsolete time in computer history. There are old languages that were forward thinking, with modern features, that would be great for new development (cough, Common Lisp :-), but C++ isn’t one of them. By all means, learn new techniques, and apply them in C++, but not every little thing needs to be added in.

    That said, nobody’s forcing me to use C++, and there are a lot of new languages to move to, so I guess it’s my own problem…

    1. 3

      C++ is already big and complicated in the worst way: multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach

      this is exactly why something like cppfront is needed, to make bold syntactic and semantic changes that can attempt to regularize the language without being overly shackled to the existing state of affairs. it provides a clean upgrade path for people who are able to use it; for everyone else there’s the more conservative evolution of c++.

      1. 3

        Languages grow and adapt. C++, Rust, C#, Java, Python, Go, OCaml, JavaScript. With research and advances in computing, we are always going to find new, potentially better ways of expressing our programs. And many would argue that C++ isn’t evolving fast enough; I’m generally in that camp. Having to wait for some of the stuff that’s coming in C++23 is a bit frustrating.

        A lot of “modern” codebases won’t work with the original versions of a lot of those languages mentioned. Sure, C++ is probably one of the hardest ones to cope with in terms of change, I think, but engineering is hard.

        1. 2

          I only hear about C++ language changes as a cautionary tale.

          I don’t hear Java devs complaining it has jumped the shark, and Java is super old by now. C# also managed to survive pretty long and remain coherent. JavaScript got only minor complaints about the dense ES6 syntax, but once everyone got used to it, it’s doing very well. PHP managed to bury a lot of its early mistakes, despite having huge install base and backwards compatibility liability. Rust users welcome its changes with “omg, finally!”. Python3 screwed up, but even they’re getting back on track now.

          There’s something unique about C++ that makes it keep adding partial fixes that get more partial fixes every 6 years.

          1. 2

            I was thinking about that a bit after I posted last night.

            So far the history of languages has been to throw them away and create new ones, but maybe the future is to adapt the existing language to the current needs. I still feel like C++ isn’t the best language for that, but it doesn’t hurt to try.

            Ironically, Lisp was designed with that kind of growth and evolution in mind, but it never really panned out for other reasons.

          2. 3

            multiple, incompatible ways to do things, each with nuances and “gotchas” that make them potentially dangerous in certain situations, and there’s usually no clear or obvious way to choose between them, making the language hard to use and teach.

            This, and the problem is not just an artifact of C compatibility; it’s an ongoing issue with the recent additions to the C++ standard.

            I was very annoyed by the C++11 “universal and uniform initialization” syntax, precisely because it is not universal and uniform. It looks like one faction of the language committee wanted to use the brace initialization syntax for this, and another faction wanted to use the same syntax for aggregate initialization, so they compromised and overloaded the syntax to mean “universal” initialization for some types, and aggregate initialization for other types. So it’s not universal: there’s a gotcha that you need to understand before you can safely use this syntax in generic code.
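
            To make the gotcha concrete, here is a minimal sketch (the types are just illustrative): the same brace syntax picks an initializer_list constructor for some types and aggregate initialization for others.

            ```cpp
            #include <cassert>
            #include <vector>

            int main() {
                std::vector<int> a(3, 7);  // parentheses: three elements, each equal to 7
                std::vector<int> b{3, 7};  // braces: the initializer_list constructor is
                                           // preferred, so this is two elements, 3 and 7
                assert(a.size() == 3 && b.size() == 2);

                struct Point { int x; int y; };  // an aggregate: braces mean aggregate init
                Point p{3, 7};                   // x = 3, y = 7
                assert(p.x == 3 && p.y == 7);
            }
            ```

            In a template, T obj{x, y}; can therefore do either of those depending on T, which is exactly the special case generic code has to know about.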

            Ad hoc overloading, where the same syntax means semantically incompatible different things depending on argument types, can be found throughout the language. It makes the language hard to use by creating “gotchas”, and it works against generic programming.

            My suggestions for designers of future programming languages: support generic programming.

            1. Do not use ad-hoc overloading anywhere in the language, because it breaks generic programming.
            2. However, do use “principled overloading”, where all of the overloaded meanings are semantically compatible and are different implementations of the same algebraic structure, satisfying a common set of axioms. This is important: it’s what makes generic programming possible (a rough sketch follows below).
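
            As a rough sketch of what principled overloading buys you (the function name here is mine, just for illustration): operator+ on int and on std::string are different implementations, but both are associative and have an identity element, so a single generic fold can rely on the same axioms for either type.

            ```cpp
            #include <numeric>
            #include <string>
            #include <vector>

            // Generic fold that only assumes "+" is associative with the given identity.
            template <typename T>
            T combine_all(const std::vector<T>& xs, T identity) {
                return std::accumulate(xs.begin(), xs.end(), identity);
            }

            int main() {
                std::vector<int> nums{1, 2, 3};
                std::vector<std::string> words{"a", "b", "c"};

                int total = combine_all(nums, 0);                        // 6
                std::string joined = combine_all(words, std::string{});  // "abc"
                return (total == 6 && joined == "abc") ? 0 : 1;
            }
            ```

            An ad-hoc overload that gave “+” an unrelated meaning for some third type would silently break any caller of combine_all, which is the point of suggestion 1.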

            Herb Sutter appears to get this, when he says “generic code demands that consistency” with respect to his proposal, which is intended to be a universal and uniform syntax for a variety of pattern matching. Well, he gets half of it anyway. In his video, he says “do not needlessly use divergent syntax”, because it breaks generic programming.

            But Sutter’s proposal nevertheless introduces ad-hoc overloading. For one, the “is” operator is overloaded for two incommensurate cases:

            • T1 is T2 means “the value set of type T1 is a subset of or equal to the value set of T2”.
            • V is T means “the value V is contained within the value set of type T”.

            If you accept my “value set” metaphor of types, then these two operations correspond to T1 ⊆ T2 and V ∈ T in set theory. Different operator symbols are used because they aren’t the same thing. Or in the Julia language, which is designed from the ground up for generic programming, these two operations are T1 <: T2 and isa(V,T).
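
            In today’s C++ those two checks already go through visibly different mechanisms, which is a rough way of seeing that they are not one operation (a hedged sketch; Base, Derived and Number are just illustrative names):

            ```cpp
            #include <type_traits>
            #include <variant>

            struct Base {};
            struct Derived : Base {};

            // Type-level relation, roughly T1 ⊆ T2: decided at compile time.
            static_assert(std::is_base_of_v<Base, Derived>);

            // Value-level membership, roughly V ∈ T: decided per value at run time.
            using Number = std::variant<int, double>;

            bool holds_int(const Number& v) {
                return std::holds_alternative<int>(v);
            }

            int main() {
                Number n = 3.14;
                return holds_int(n) ? 1 : 0;  // 0 here: this particular value is a double
            }
            ```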