A more accurate title is peril from reuse of code written with little focus on security, often in languages that make vulnerabilities easy, and without specs (e.g., Design-by-Contract) documenting and enforcing key assumptions. The Ada, SPARK, Eiffel, and formal-methods crowds have been avoiding many of these problems for years. Perhaps commercial or FOSS developers could better apply what works there to these critical libraries.
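To make the Design-by-Contract point concrete, here is a minimal sketch in plain Python using assertions as pre- and postconditions. The function and its bounds are hypothetical, purely for illustration; languages like Eiffel or SPARK enforce such contracts far more rigorously than bare asserts.

```python
# Minimal Design-by-Contract sketch: the assumptions a function makes
# are written down and checked, instead of living only in someone's head.

def copy_into(dst: bytearray, src: bytes, n: int) -> None:
    """Copy n bytes of src into dst, with explicit contracts."""
    # Preconditions: what callers must guarantee.
    assert n >= 0, "n must be non-negative"
    assert n <= len(src), "cannot read past the end of src"
    assert n <= len(dst), "cannot write past the end of dst"

    dst[:n] = src[:n]

    # Postcondition: what this function guarantees in return.
    assert dst[:n] == src[:n], "copy must preserve the bytes"

buf = bytearray(8)
copy_into(buf, b"secret!!", 4)   # fine: contracts hold
# copy_into(buf, b"ab", 3)       # would fail loudly: reads past src
```

A bug like Heartbleed was, at bottom, a caller-supplied length that was never checked against the real buffer; a checked precondition turns that silent over-read into an immediate, diagnosable failure.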
IMHO yes: when there is a monoculture around, say, OpenSSL, and it is found to contain a critical flaw (or 20). Perhaps when the majority of a given type of application uses one common library for security-critical functionality, that should be a warning flag, and for your similar application you should use an alternative library. Of course this is a double-edged sword: the 2nd and 3rd most popular SSL libraries probably get less funding (not that OpenSSL gets enough anyway), so they may have more issues to start with. You could write everything yourself from scratch, but that way lies madness; it is expensive, time-consuming, and you are solving the same problems all over again, and if you solve any of them incorrectly you have security issues again anyway. Programming's hard, let's go shopping.