I’ve never bothered with qmake, but I’ve had some awful experiences with CMake. To be honest, whenever I see a project that requires CMake to build, I often just close the tab rather than even trying to build it, because CMake builds rarely - if ever - succeed.
I agree with every point the article makes. For CMake being such a complex and “full-featured” build tool, it’s broken in pretty fundamental ways. Also, the suckless guys don’t like it, and since they’re some of the best C developers I’ve seen, that’s a pretty good reason to stay away from CMake.
I’ve had the same experience. I’ve found that the simpler the build system, the more likely I’ll build it successfully. e.g. almost every project I’ve found that just uses plain old Makefiles I can figure out how to build. Even if it errors, it’s easy to see why/how and fix it.
Exactly. Makefiles are simple and transparent (and somehow manage to be more standardized than CMake). I know that make && make install will work with the vast majority of Makefiles, while the CMake command seems to vary for each project.
make && make install
On the contrary. I know that anything with a Makefile will fail spectacularly. Major cause: no configure step.
Autotools is painfully slow, complicated and messy but generally works.
I don’t see many CMake projects around, but everything I’ve tried has worked, I think.
Is it possible that projects that use Make tend to be easier to build because they are usually smaller and less complicated than a project that needs the functionality of something like CMake or Autotools?
OpenBSD is built without the use of CMake or Autotools. Just BSD make.
OpenBSD would normally be built in an OpenBSD environment, I would suspect, and it has a configuration file.
I think what you’re getting at is that it’s one environment building that same environment, whereas the other tools are smoothing over the differences between many environments.
That’s the opposite of my experience back when I built software on interix, or when I cross-compiled for Angstrom. Any “complex” standardised build system - be that autotools, cmake, scons or something else - worked fine. Projects that used “simple” makefiles were impossible to build.
I’ve maintained code both using cmake and plain make. Cmake is awful, and for a Unix project, I’d pick make any day. Cmake is also within epsilon of being your only option if you care about good Windows support, so I don’t see it going away in the near to medium term future.
What? When I see a CMake project I can typically build it like this:
cmake -G "Unix Makefiles" . && make
On some projects with non-packaged dependencies, this does not work. But the same problem is true for Autotools projects that require the paths of certain dependencies to be specified during the configure step.
The section on barging vs convoying is particularly interesting to me. OpenBSD (user land) spin locks are unfair, which I decided was bad. I replaced them with ticket locks, which enforce fifo behavior. All is better, right? Micro tests certainly showed increased fairness, but larger applications (like, uh, a certain browser) became much slower. Not the desired result. Stupid communists and their fairness, wrong about everything!
If you’re optimizing for performance, there’s no one-size-fits-all lock implementation to use. That’s why libraries such as Intel’s TBB provide both queueing mutexes and spin locks - as well as reader-writer variations of both. As mentioned in the OP, spinlocks are appropriate when contention is light and critical sections are short. Queueing mutexes work best when contention is moderate to heavy and critical sections are longer.
On TSX-enabled hardware, it’s also worth trying TBB’s speculative_spin_mutex, which attempts to execute the critical section with the XBEGIN and XEND instructions, falling back to spinning if too many aborts happen. If a lock is mostly uncontended, the XBEGIN/XEND path is about 20% faster than the equivalent spinlock acquisition in my experience.
The linked video of Bill Gates is absolute gold: https://www.youtube.com/watch?v=5ycx9hFGHog
SO TOTALLY LOOKING FOR A SEQUEL!
Can someone explain the potential pitfalls of using this?
Off the top of my head:
I don’t think there are any access controls between programs. I’m not certain, but I would definitely want to check before using it.
The latency is probably a bit higher.
If you’ve got the card, it’s a sunk cost, but buying a 12GB video card is not a cost-efficient means of getting 12GB of RAM.
I certainly hope access control has improved. As late as ~2013, running an OpenCL program that contained a memory error would result in garbage getting written to the display of my workstation via unintended writes to the framebuffer…
Wow, I had no idea it was that bad. Hope things have improved!
This weekend I started writing a basic implementation of the Kademlia DHT. It’s nothing novel, but it’s a good way for me to learn more about asynchronous networking in C++. This week I hope to finish the business logic of k-bucket management so I can start setting up the networking layer next week.