1. 17

  2. 9

    The given code examples are good, but it always puts me off when a technical blog post has a marketing call to action at the end.

    1. 8

      I’m curious: what would you rather have then? Perhaps you want companies to buy even more ads, contributing to the surveillance industry? Or maybe you expect people to write technical posts strictly after work, taking away from their personal time (it’s a lot of work, by the way)? Or perhaps you’d rather not have blog posts at all (eg like what we had before the web days)?

      I’ll grant you that content marketing is still not ideal. It contributes to the flood of redundant information. However, content farms aside, it seems like the least bad option unless we lived in a state of utopian space communism where everybody gets whatever they need automatically and for free.

      All my blog posts have a call to action because I am - shock horror - trying to make a living out of my hard earned knowledge while helping people at the same time.

      1. 5

        Thanks for the reply. I’m grateful that you take the time to put good technical writing online for free, as it’s hard to come by. As you correctly point out, writing outside of work is a hobby only for those with plenty of free time, and it’s unreasonable to expect everyone to do so. For that, I apologize.

        Maybe the issue here, for me personally, was that the text didn’t appear to be content marketing at first so I felt slightly cheated once I reached the end. Was something inconvenient left out because it didn’t fit the message? Were some parts simplified to make the content more mainstream and therefore more marketable? I might have preferred to read something else instead.

        Adding to the confusion, the busy layout of the site makes it somewhat difficult to judge the content at a glance.

        1. 5

          Just to be clear, the OP is not my post. It’s just that many software engineers seem to have an unreasonable allergic reaction to any kind of sales and marketing, and I’d like that to change, because many people in our industry are self-employed or run small businesses, and need to promote their products and services somehow.

          I agree that Medium is not a good blog platform (and getting worse), and those “hero images” at the top are a total waste of space.

          I also agree with the issues you point out. The incentives are still not perfectly aligned between the reader and the writer in this situation. But then, a lot of stuff written by hobbyists is also going to be too mainstream, too beginner oriented, too sloppy – so maybe there’s no real difference in quality overall. And sometimes you might get better content from someone who’s trying to establish a reputation as an expert in a particular area so they can sell their products or services. They might dedicate more time to writing an in-depth article than most people would care to do for free. On balance, I think it’s better to have content marketing than to be bombarded with ads.

      2. 1

        Many of these posts are written to generate traffic to new products and services. It seems like a win-win to me, assuming you can tolerate a small box! ;)

      3. 6

        Given that this is promoting a tool to speed up C++ compilation, it misses out one common misconception: Many people think that splitting your C++ code into lots of separate compilation units (.cpp files) helps compile times, e.g. by allowing incremental recompilation.

        While organising your code into lots of small modules is good for the programmer, it isn’t usually a good idea when compiling modern C++. For example:

        • Templates which are widely used get instantiated in almost every compilation unit, then the duplicates must be “collapsed” at link time.
        • If a function definition is in a separate compilation unit, the compiler doesn’t have the option of inlining it at compile time, thus missing out on potential optimisations (I believe Link Time Optimisation can make up for something here, but only if your compiler supports it).
        • Linking lots of little object files takes a long time and cannot take advantage of multiple cores (at least, I’ve never seen a linker use more than one CPU core).

        For large C++ projects, you can typically slash compile times by #include-ing all your .cpp files into a few[1] ‘large’ .cpp files, and just compiling the large compilation units. If you haven’t, I suggest you try it and compare compilation times and resultant binary sizes.

        Somewhat related to this are “header-only” libraries.

        [1] In theory, you could #include them all in just one single file, but assuming you have multiple CPU cores available that would leave most of them idling, so having at least as many compilation units as you have cores is usually quickest.
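
        The approach can be sketched with a hypothetical project (all file names here are made up): each “unity” file simply #includes a batch of ordinary .cpp files, so the compiler treats them as one large translation unit.

        ```cpp
        // unity_1.cpp -- one of a handful of unity translation units.
        // Because the .cpp files are textually included, widely used templates
        // are instantiated once per unity file rather than once per source file,
        // and the linker sees a few large object files instead of many small ones.
        #include "parser.cpp"
        #include "lexer.cpp"
        #include "ast.cpp"
        #include "codegen.cpp"
        ```

        As the footnote suggests, one such file per available core keeps all cores busy. Beware that same-named symbols in different included .cpp files (e.g. static helper functions) can now collide; that’s the usual porting cost of unity builds.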

        1. 2

          Templates which are widely used get instantiated in almost every compilation unit, then the duplicates must be “collapsed” at link time.

          About this: the best practice, if you can manage it, is to instantiate your templates explicitly in the .cpp file. That means the implementation can live in the .cpp file while the header only has the declaration. Of course, that only works if you know all the different types your template is going to be used for, but it also encourages not using templates in public headers.

          1. 1

            I may be misunderstanding, but this sounds like a nightmare from a maintenance perspective. Don’t you end up with lots of situations where you explicitly instantiate container<int> because you know you use that type, then somebody changes the code to use long instead of int, and you’re left with a redundant container<int> instantiation that is no longer used plus an implicit container<long> instantiation? Or you remove your explicit container<int> instantiation without realising that the changed code wasn’t the only place using container<int>, thus leaving it implicitly instantiated. Do you automate this in some way?

            1. 1

              The body of the templated method is only in the .cpp where you instantiate it with your own types, the header only has the declaration. Doing what you describe would result in a linking error.

              1. 1

                I’m still not sure I understand what you’re suggesting. Supposing I define a container, Container which takes a template, T for the type of the values it contains: Container<T>, with some functions such as append(T). I then use Container<int> in two other classes, Foo and Bar. The simplest way to distribute the code would be something like this:

                • container.h // contains definition of Container and its functions
                • foo.cpp // includes container.h, calls Container<int>::append(int)
                • foo.h
                • bar.cpp // includes container.h, calls Container<int>::append(int)
                • bar.h

                Where are you suggesting I should put the definition of Container<int>::append(int)?

                1. 1

                  You need to declare Container<T> in container.h like a normal class, but without any function bodies. Then, in your .cpp file (e.g. container.cpp), you define the bodies as with a normal class, still prefixed with template<typename T>, and at the end of the .cpp file you explicitly instantiate your class[0] with template class Container<int>;.

                  [0] https://en.cppreference.com/w/cpp/language/class_template
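
                  To make this concrete, here’s a minimal sketch of the technique, collapsed into one file with comments marking what would live in the header versus the .cpp file (Container and append are the hypothetical names from this thread):

                  ```cpp
                  #include <cstddef>
                  #include <iostream>
                  #include <vector>

                  // --- container.h: the template is declared like a normal class,
                  // --- with no member function bodies.
                  template <typename T>
                  class Container {
                  public:
                      void append(T value);
                      std::size_t size() const;
                  private:
                      std::vector<T> items_;
                  };

                  // --- container.cpp: the bodies live here, out of the header...
                  template <typename T>
                  void Container<T>::append(T value) { items_.push_back(value); }

                  template <typename T>
                  std::size_t Container<T>::size() const { return items_.size(); }

                  // --- ...followed by the explicit instantiation. Only this
                  // --- instantiation exists; using Container<long> would fail to link.
                  template class Container<int>;

                  // --- foo.cpp: client code sees only the declarations.
                  int main() {
                      Container<int> c;
                      c.append(1);
                      c.append(2);
                      std::cout << c.size() << "\n";
                      return 0;
                  }
                  ```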

          2. 2

            never seen a linker use more than one CPU core

            lld happily consumes all my CPU cores :) Apparently gold also supports threads.

            compiling the large compilation units

            Both unity builds (what you described) and “classic” LTO kinda suck during development. (i.e. goodbye fast incremental builds).

            Use ThinLTO.

            1. 1

              Both unity builds (what you described) and “classic” LTO kinda suck during development. (i.e. goodbye fast incremental builds).

              1. This will depend quite a bit on the size of the application you’re working on. For example, we have a large application bundled into ~16 files for unity compilation. In this case, you still get somewhat incremental builds, just with a larger increment. In this specific instance, any time lost is easily made up by the reduced link time.
              2. With most build systems, it’s pretty easy to configure it so that you can switch between unity and non-unity builds, so you can, for example, use unity builds for CI but have developers using non-unity or allow individual developers to choose.

              Unity builds aren’t a panacea, but depending on your toolchain and the size of your project, they’re often better than the alternative.

          3. 5

            I last touched C++ in 1995 or so working at Fidelity Investments doing a large Defined Benefits app in MFC. The language really does seem to have changed a lot.

            Many of the things that used to annoy the crud out of me seem to have been excised or mitigated. I’m trying to bring myself up to date with it, mostly because I’m super enjoying KDE and thinking about contributing.

            1. 4

              I recall lots of old C++ being written like C, which caused many problems. I don’t know if that was your experience. Since I was countered on saying C/C++, I asked pjmpl on Hacker News for descriptions of or resources on “modern” C++, to learn how it’s done today. Here’s his answer, with some techniques and books, in case you find it useful:

              https://news.ycombinator.com/item?id=10208786

              Obviously, I’m interested in any Lobsters that do C++ weighing in on whether that was a good answer. They might also know some other resources. Obviously, anyone learning will have to read up on the new standards, too. They’re making really neat improvements.

              1. 6
                • references for out parameters

                This can make code a little hard to follow, though it’s traditional C/C++ style. I like returning values and trusting the compiler to use moves instead of copies. Complex return types can be tuples. I don’t know enough about optimization to say how big a performance hit this is compared to references.
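
                The trade-off can be sketched like this (function names are made up):

                ```cpp
                #include <iostream>
                #include <vector>

                // Out-parameter style: the result comes back through a reference,
                // which hides the data flow at the call site.
                void make_squares_out(int n, std::vector<int>& out) {
                    out.clear();
                    for (int i = 1; i <= n; ++i) out.push_back(i * i);
                }

                // Return-by-value style: since C++11 the vector is moved on return
                // (or the copy is elided entirely via RVO), so no deep copy occurs.
                std::vector<int> make_squares(int n) {
                    std::vector<int> result;
                    for (int i = 1; i <= n; ++i) result.push_back(i * i);
                    return result;
                }

                int main() {
                    std::vector<int> a;
                    make_squares_out(4, a);               // reader must know `a` is written to
                    std::vector<int> b = make_squares(4); // data flow is explicit
                    std::cout << a.back() << " " << b.back() << "\n";
                    return 0;
                }
                ```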

                1. 6

                  The C++ standard actually allows much more aggressive return value optimization than C, so it’s usually not a performance hit at all.

                  1. 3

                    The C++ standard actually allows much more aggressive return value optimization than C

                    I don’t believe that’s correct. The “return value optimisation” just amounts to storing the return value directly into its target location, rather than copying it after the fact, and that’s just as possible in C as it is in C++.

                    1. 2

                      Unfortunately I can’t dig up the source, but if I recall correctly there is some rule difference between C and C++ that lets C++ compilers perform RVO more often.

                      It may have been that C cannot RVO across compilation units, because C++ can, without LTO, as you can see in this example. The signature as written is Foo factory(); however, the generated machine code has the signature Foo* factory(Foo*). I don’t think C compilers are allowed to export altered function signatures like that.

                      1. 3

                        I don’t think C compilers are allowed to export altered function signatures like that.

                        The signature is not changed; the ABI specifies how return value and parameters are passed back and forth, and the compiler is generating code according to both. The Sys V ABI (and the architectural/C++ supplement for it) specify this - the return object’s storage is allocated by the caller and a pointer to it is passed via a register. The C ABI specifies exactly the same thing, and the equivalent C compiles down to the exact same code: https://godbolt.org/z/fMBi36

                        if I recall correctly there is some rule difference between C and C++ that lets C++ compilers perform RVO more often

                        To be honest I find this hard to believe, and I can’t conceive why it would be the case - I’d be very interested if you could dig up a link to any information which confirms it.

                        (Maybe what you read was actually that the RVO is more useful/important in C++ than it is in C. This is certainly true, since it allows copy-constructor calls to be elided).
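
                        The elision is easy to observe with a copy constructor that announces itself (a minimal sketch; since C++17 the elision is actually guaranteed for a returned prvalue like this):

                        ```cpp
                        #include <iostream>

                        // A type whose copy constructor reports itself, making elision visible.
                        struct Foo {
                            int value;
                            Foo(int v) : value(v) {}
                            Foo(const Foo& other) : value(other.value) { std::cout << "copy!\n"; }
                        };

                        // The ABI passes a hidden pointer to caller-allocated storage, and
                        // the compiler constructs the return value directly in that storage.
                        Foo factory() {
                            return Foo(42);
                        }

                        int main() {
                            Foo f = factory();   // no "copy!" is printed: the copy is elided
                            std::cout << f.value << "\n";
                            return 0;
                        }
                        ```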

                        1. 1

                          If what you’re saying is true, that means return by value never had any performance implications in C whatsoever, since RVO essentially always happens, it’s part of the ABI. And that everyone using out parameters for structs for performance has simply been misguided / ignorant of this. So I feel like there must be more to the story there.

                          To be honest I find this hard to believe, and I can’t conceive why it would be the case - I’d be very interested if you could dig up a link to any information which confirms it.

                          I really wish I had that post too, I looked through my history for 30 minutes to no avail. If I recall correctly, which I’m now second guessing whether I do, the author demonstrated a situation where the C++ compiler generated optimized code but the C compiler did not. Oh well.

                          1. 2

                            If what you’re saying is true, that means return by value never had any performance implications in C whatsoever

                            That’s not exactly what I’m saying; the ABI is a huge factor and certain other ABIs may not lend themselves to the same optimisation.

                            Note that technically, “RVO” specifically refers to the C++ case where copy construction is elided (i.e. C doesn’t have “RVO”; it’s just that C doesn’t need the RVO optimisation because it doesn’t have copy constructors). But from your first post I’m reading your use of “RVO” as meaning “avoids an object copy”, and certainly that’s possible in both C and C++, if the ABI allows for it.

                            If the ABI doesn’t allow for in-place “construction” of the return object, then passing output-parameter structs explicitly as pointers can still be a win.

                            1. 1

                              Does the 32-bit sys V ABI handle structs the same way? The 64-bit ABI is better in a lot of ways, if this is one such way that would explain that.

                              1. 2

                                Does the 32-bit sys V ABI handle structs the same way?

                                Yes, it does.

                                However, there’s another thing I forgot to mention: passing a struct parameter via explicit pointer (or reference) can still be an occasional win over relying on in-place return value construction, even if the ABI does allow for that - because with the latter, the compiler needs to make sure it doesn’t accidentally create an aliasing pointer. That is, if you have something like:

                                struct foo_s { int a; int b; int c; };
                                foo_s gfoo;
                                foo_s foofunc();
                                
                                void f()
                                {
                                    gfoo = foofunc();  // <-- here
                                }
                                

                                In this case the compiler can’t pass the “hidden” return value pointer as &gfoo, because it’s possible that the foofunc() function reads values from gfoo while also storing values into its return value (via the hidden return value pointer). See https://godbolt.org/z/Zfacwj

                                1. 2

                                  Ah yes, that makes perfect sense!

                                  I’m learning a lot today, thank you!

                  2. 5

                    Yes, references hide just enough information to be dangerous, in my experience.

                  3. 2

                    Speaking of return types, tuples combined with structured bindings are da bomb. It basically looks like you are writing Python - you return a pair or a tuple from a function, and then use structured bindings to retrieve them directly into variables. So sweet.
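
                    For instance (a minimal sketch with made-up names):

                    ```cpp
                    #include <iostream>
                    #include <string>
                    #include <tuple>

                    // Return several values at once as a std::tuple.
                    std::tuple<int, double, std::string> stats() {
                        return {42, 3.5, "ok"};
                    }

                    int main() {
                        // C++17 structured bindings unpack the tuple directly into named
                        // variables, much like Python's `count, mean, label = stats()`.
                        auto [count, mean, label] = stats();
                        std::cout << count << " " << mean << " " << label << "\n";
                        return 0;
                    }
                    ```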

                2. 3

                  BuildInfer

                  At first I wondered if that’s got something to do with Infer. Then I checked the site and wondered if I had landed on the Infer website, or a fork of it, because the layout and colors are so similar…

                  1. 2

                    Nothing to do with FB Infer… colours are just a coincidence! https://buildinfer.loopperfect.com/

                  2. 3
                    // C style
                    {
                      Foo* foo = new Foo();
                      foo->bar();
                      delete foo;
                    }
                    

                    I haven’t seen this kind of C before? Maybe some macro magic?

                    1. 7

                      C style.

                      1. 1

                        Oh, I get it ^^

                    2. 1

                      I like the examples; I never really touched a large C++ code base, so this comes as enlightening. However, it might be too late for C++, considering that Rust has gained so much traction in most of the scenarios where C++ was the de facto tool. I just started to learn Rust, and so far it looks like a good mixture of C++ and Haskell, at least conceptually.

                      1. 7

                        it might be too late for c++ considering that Rust has gained so much traction

                        This is probably a biased view based on what you’ve been reading. Echo chambers and such. You wouldn’t see me putting so much time into research and careful comments about C or C++ in online discussions if they were useless languages that better ones will beat in a few years. Quite the opposite: beating them, if it’s even possible, will be an uphill battle.

                        Although I’m a C++ opponent, I will gladly admit it has massive usage, with even new projects doing big, performance-sensitive work often defaulting to it, thanks to all the work done on compilers, libraries, education, and so on. It benefits a lot from prior investments and social/market inertia. Rust’s uptake is a drop in the bucket compared to where C++ is. They could even grow at the same speed, given the kinds of improvements C++ is getting. Who knows.

                        So, I focus on the cost-benefit analysis of what each brings, with my preferences leaning toward Rust for its high baseline of safety with low-cost abstractions. A better C++. As far as stable languages go, Ada and D are also contenders here. I push Rust since its backing by Mozilla, together with a great community approach, has gotten it further than the others by far. Considering social and market forces, it’s the best C++ alternative so far.

                        1. 1

                          I haven’t looked closely at Rust recently, so maybe someone can correct me, but Rust does not appear to have anything as powerful as C++ templates, which allow for advanced metaprogramming, compile-time computation, and code generation. Generic programming is the most beautiful part of C++, and many competitors fail to match it (except D, which has great support for generic programming).
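
                          For reference, this is the kind of compile-time computation being described: the classic recursive-template factorial (a minimal sketch; modern C++ would usually prefer a constexpr function):

                          ```cpp
                          #include <iostream>

                          // The factorial is computed entirely at compile time through
                          // recursive template instantiation.
                          template <unsigned N>
                          struct Factorial {
                              static constexpr unsigned long long value = N * Factorial<N - 1>::value;
                          };

                          template <>
                          struct Factorial<0> {
                              static constexpr unsigned long long value = 1;
                          };

                          int main() {
                              // static_assert proves the value was available during compilation.
                              static_assert(Factorial<10>::value == 3628800ULL, "compile-time");
                              std::cout << Factorial<10>::value << "\n";
                              return 0;
                          }
                          ```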

                          1. 1

                            It has macros, which I understand can do that sort of thing, but I don’t know how easily.

                        2. 2

                          Rust has a lot of people talking about it but not nearly the same amount of people using it. It’s one of the much-talked about less-used languages. Haskell used to dominate that position. C++ still has much more usage than Rust, by a very wide margin.