1. 104
  1.  

    1. 46

      I’m starting to believe more and more that the IT security field is a huge scam.

      This is based on anecdotal data:

      • I used to work at a German fintech. We were about to land a big German bank as a customer, and they requested “a security audit”, whatever that meant. My startup, looking to save as much money as possible, went to some random security company in Germany. The security “experts” ran an off-the-shelf vulnerability scanner against our Python/Flask application and sent us a “list of vulnerabilities.”

        We had a bug where accessing anything other than / or /login/ while logged off returned a 500 error. This was because your user was None, so user.email would raise AttributeError: 'NoneType' object has no attribute 'email'.

        The report was mostly vulnerabilities like:

        Possible SQL injection: staging.example.com/phpmyadmin/index.php?statement=INSERT+... returned a 500 error.

        Imagine this repeated for /wp-admin/ and all the other clichéd PHP paths. We fixed the bug, and the bank was happy with our security audit…

      • I used to work at another firm, which had some automaker as a customer - I mean, this is Germany after all. They requested to run an audit on our system before sending us any data. What was the audit? Sending a phishing email to all the employees. Of course somebody fell for it, so we had to go through phishing training. (There is a great article about how this is fake security: Don’t include social engineering in penetration tests)

      • My partner worked at a large non-tech firm. Their internal software had no granular ACLs: people had global read/write access to everything by default; you just needed to log in. If you managed to compromise an account, you could wreak total havoc on their systems. Yet they had a dedicated security department, and if you didn’t update your computer to the latest Windows version when it was suggested to you, a warning would be sent to your manager…

      • I was working at a multi-national, tech-related firm, and we had to improve a risky system. Users had wide access, and there was severe insider risk. We designed a new solution; it was not fully secure, but it was definitely an improvement over the current state. We wrote down the design and sent it for security review, and the security council said “no! There are these problems, and those problems. You can’t move forward.” We explained that these problems already existed, and that we were open to suggestions for solutions. They told us we were responsible for finding solutions, and blocked the project, thus leaving the current, worse situation in place indefinitely. Basically, it was not about security, it was about bureaucracy and justifying their jobs… But I still needed to reboot my computer every X days for security reasons…

      All of this leads me to believe the IT security industry, including mechanisms like CVEs, is entirely bullshit. I’m convinced that the only people doing real security are people like Theo de Raadt and Colin Percival…

      1. 28

        Cherry-picking examples of bad approaches doesn’t work. It’s like saying art is a scam because there’s lots of terrible-quality work created on Fiverr. That work certainly exists, but it doesn’t define the field itself. It’s a problem that companies can’t easily identify good security consultants. It’s a problem that PCI, SOX, etc. are silly checkbox exercises. It’s a problem that standards are hard to apply to very custom internal solutions. It’s a problem that security testing is hard. But every field will have problems like that - it doesn’t mean the whole field is a scam.

        1. 23

          The problem is: if (my guess) 90% of people only ever experience audits like the ones described, you can try to argue those are just bad examples, but if they are the majority, it’s a structural problem. And I think it is.

          When I worked at a bank we also had mandatory security “checks” (not a proper audit by an external company), and 90% of the findings were simply bogus or non-exploitable, leaning mostly towards ridiculous. The more regulated the market, the more it attracts charlatans who know nothing except how to check boxes.

          In every industry there are good people doing good work, that doesn’t make the industry not bullshit overall.

          1. 9

            In every industry there are good people doing good work, that doesn’t make the industry not bullshit overall.

            This! Thank you. This was exactly my point.

            Take the multi-level-marketing industry as an example: it is basically a scam, and I’m pretty sure one can find a few multi-level-marketing companies that actually focus on selling products instead of running a pyramid scheme of salespeople. But one cannot use these isolated cases to dismiss the scam-ish behavior of MLMs as a whole.

            If people have paid attention to the previous stories from Daniel (the author of this post), he has been tilting at windmills for years because people are gaming the CVE mechanism for their own benefit. (I.e. if you find a lot of critical vulnerabilities it helps your reputation as a security researcher, so there is an incentive to inflate the score of any vulnerability you find.)

            I’m reading these stories, plugging in my anecdotal experiences with the industry, and folks go “no, you can’t just throw a rotten industry under the bus, there are some competent people”. I’m sure there are some smart and well-intentioned people. In fact, I mentioned two: Theo de Raadt and Colin Percival. I’m convinced there are more, but this doesn’t mean the industry is healthy.

            1. 11

              if you find a lot of critical vulnerabilities it helps your reputation as a security researcher, so there is an incentive to inflate the score of any vulnerability you find.

              I’ve started referring to most CVEs as Curriculum Vitae Enhancers. I can’t even remember the last time we had a legitimate problem flagged by the scanner at work compared to the bogus ones. It makes it very difficult to take any of them seriously.

              1. [Comment removed by author]

            2. 14

              But every field will have problems like that - it doesn’t mean the whole field is a scam.

              Depends on the prevalence of such problems. If a sufficient proportion of the field is such bullshit, then the field is bullshit.

              I’ll add another example, one that I found to be almost ubiquitous: bogus password management at work:

              • First login (on Windows of course), they ask me to set up my password. Okay then: correct battery horse staple
              • “Your password does not respect our complexity metrics”. Okay, let’s try again, Correct battery horse staple.
              • “Your password does not respect our complexity metrics”. Fuck you, why don’t you tell me what you want?!? But okay, I can guess: Correct battery horse staple1.
              • “All good”. At last.
              • 1-3 months later: “you must change your password”. That’s it, I’m going Postal.

              It’s been some years now since NIST updated its recommendations to “don’t use complexity rules, just ban the most common passwords” and “don’t force password renewal unless there’s an actual suspicion of a breach”. Yet for some reason the people in charge keep to the old, less secure, more cumbersome way of doing things. This is at best incompetence at a global scale. (I mean, okay, I live in the EU, so NIST may not be the applicable standard here, but the larger point remains.)
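
              To make it concrete, the NIST-style check is simple enough to sketch in a few lines (a minimal, illustrative sketch: the 8-character floor and the breach-list screening are from SP 800-63B, everything else here is made up):

              ```python
              # NIST SP 800-63B style: no composition rules, no scheduled rotation;
              # check length, then screen against known-breached/common passwords.
              def is_acceptable(password: str, breached: set[str]) -> bool:
                  if len(password) < 8:  # 800-63B minimum for user-chosen passwords
                      return False
                  return password.lower() not in breached

              breached = {"password", "123456", "qwerty"}  # stand-in for a real breach corpus
              print(is_acceptable("correct battery horse staple", breached))  # True
              print(is_acceptable("123456", breached))                        # False
              ```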

              1. 15

                Oh, password management blowing up in people’s faces is one of my favourite types of best intentions just not surviving contact with the real world.

                My favourite is from a few years back, at a place that had a particularly annoying rule. When people were onboarded, they were issued a password that they could not change. They could only change a password when it expired.

                Now, all passwords issued to new hires had a similar format: something like 8 random letters followed by 4 random numbers. Since you couldn’t change it in the first three months, people really did learn them by heart; it was more or less the only way.

                When the time to change them came, the system obviously rejected previous passwords. But it also rejected a bunch of other things, like passwords that didn’t have both letters and numbers, or passwords based on a dictionary word – so “correct battery horse staple” didn’t work, I mean, it had not one, but four dictionary words, so it had to be, like, really bad, right?

                Most people wouldn’t type in their “favourite” password (since they knew they had to change it soon), and couldn’t come up with something more memorable, so they did the most obvious thing: they used exactly the same password and just incremented the number. So if they’d started out with ETAOINSH-9421, their new password would be ETAOINSH-9422.

                Turned out that this was juuuust long enough that most people had real difficulty learning it by heart (especially those who, not being nerds, weren’t used to remembering weird passwords). So most of them kept their onboarding sheet around forever – which contained their password. Well, not forever: they kept it around for a few weeks, after which they just forgot it in a drawer somewhere.

                If you got a hold of one of those – it was conveniently dated and all – you could compromise their password pretty reliably by just dividing the time since they’d been hired by the password-change interval, adding that to the number in their original password, and there you had it.
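
                A toy version of that arithmetic (the password format and dates here are invented for illustration):

                ```python
                from datetime import date

                def guess_password(onboarding_pw: str, hired: date, today: date, rotation_days: int) -> str:
                    # "ETAOINSH-9421" plus one increment per completed rotation period
                    prefix, number = onboarding_pw.rsplit("-", 1)
                    rotations = (today - hired).days // rotation_days
                    return f"{prefix}-{int(number) + rotations}"

                print(guess_password("ETAOINSH-9421", date(2019, 3, 1), date(2020, 3, 1), 90))
                # -> ETAOINSH-9425, four 90-day rotations later
                ```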

                1. 7

                  It’s been some years now since NIST updated its recommendations to “don’t use complexity rules, just ban the most common passwords” and “don’t force password renewal unless there’s an actual suspicion of a breach”.

                  I mean, even M$ is saying this nowadays. Nevertheless, our beloved (Czech) Cybernetic Security Agency decided to publish the following recommendations:

                  • minimum password length is 10 characters,
                  • prohibition of using the same password (last 12 passwords),
                  • maximum password validity period is 18 months,
                  • account lockout after 10 invalid password attempts in a row.

                  So not only do they force expiration, they also introduce a DoS vector. :facepalm:

                  And they are literally in M$’s pocket, running 100% on their stack, cloud, etc…

                  1. 4

                    Microsoft is weird here; they’re forcing my users to enter a PIN which is apparently as good as their complex password, yet it’s limited to 4-6 digits.

                    Lord knows how quickly one can blow through the entire keyspace of even a 10-digit number, and numbers aren’t more memorable than phrases. I’m not sure how this is better security, but they are convinced it is, and market it as such.

                    1. 10

                      PINs on Windows machines are backed by the TPM, which will lock itself down in the event that someone tries to brute-force the PIN. That’s the entire point of those PINs: Microsoft is saying “we introduced an extra bit of hardware that protects you from brute-force attacks, so your employees can use shorter, more memorable passwords (PINs)”.

                      Those PINs are actually protecting a (much longer) private key that is stored in the TPM chip itself. The chip then releases this key only if the PIN is correct. You can read more about the whole system here: Windows Hello.

                      1. 4

                        To add to what @tudorr said: the reason that passwords need to be complex is that the threat model is usually an offline attack. If someone grabs a copy of your password database (this happens. A lot.) then they can build rainbow tables for your hash salt and start brute-forcing it. If you’re using MD5, which was common until fairly recently, this cracking is very fast on a vaguely modern GPU for short passwords. If you’re using a modern password hash such as one of the Argon2 variants, you can tune it so that each attempt costs 128 MiB of RAM (or more) and a couple of seconds of compute, so attacking it with a GPU is quite hard: you’ll be RAM-limited on parallelisation, and the compute has a load of sequential paths, so making it take a second is not too hard. A GPU with 8 GiB of RAM (cheap now) may be able to do 64 hashes per second, and it takes a long time to crack at that speed. Unless you care about attackers willing to burn hundreds of thousands of dollars on GPU cloud compute, you’re probably fine.
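
                        A rough sketch of that tuning, using the argon2-cffi package (the exact numbers are illustrative, not a recommendation):

                        ```python
                        from argon2 import PasswordHasher

                        ph = PasswordHasher(
                            time_cost=4,             # sequential passes: stretches wall-clock time per attempt
                            memory_cost=128 * 1024,  # in KiB, i.e. 128 MiB of RAM per hash attempt
                            parallelism=1,
                        )
                        digest = ph.hash("correct battery horse staple")
                        ph.verify(digest, "correct battery horse staple")  # raises VerifyMismatchError on a wrong guess
                        ```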

                        The PIN on Windows and most mobile devices is not used in the same way as a password. It is passed to the TPM or other hardware root of trust as one of the inputs to a key derivation function, which is used to create a private key. When you log in, the OS sends the PIN and a random number to the TPM. The TPM then generates the key from the PIN and some on-TPM secrets (which may include some secure-boot attestation, so a different OS presenting the same PIN will generate a different key) and then encrypts the random number with this key. The OS then decrypts it with the public key that it has on file for you. If it matches the number that was sent to the TPM, you have logged in. Some variants also store things like home-directory encryption keys encrypted with this public key, so you can’t read the user’s files unless the TPM has the right inputs to the KDF and decrypts the encryption key. Even if you have root, you can’t access a user’s files until they log in.
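
                        A toy model of that flow (emphatically not the real Windows Hello protocol: this sketch substitutes a signature for the encrypt/decrypt round trip, and the on-chip secret and PIN are made up):

                        ```python
                        import hashlib, os
                        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

                        DEVICE_SECRET = os.urandom(32)  # stands in for the secret that never leaves the TPM

                        def tpm_derive_key(pin: str) -> Ed25519PrivateKey:
                            # KDF(PIN, on-chip secret) -> deterministic private key
                            seed = hashlib.pbkdf2_hmac("sha256", pin.encode(), DEVICE_SECRET, 100_000)
                            return Ed25519PrivateKey.from_private_bytes(seed)

                        # Enrollment: the OS keeps only the public key on file
                        enrolled_pub = tpm_derive_key("4861").public_key()

                        # Login: the OS sends the PIN and a fresh challenge; the "TPM" answers with the derived key
                        challenge = os.urandom(16)
                        answer = tpm_derive_key("4861").sign(challenge)
                        enrolled_pub.verify(answer, challenge)  # raises InvalidSignature on a wrong PIN
                        ```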

                        If you compromise the password database with PINs, you get the public key. To log in, you need to create the private key that corresponds to the public key associated with the account. This is computationally infeasible.

                        The set of PINs may be small, but the PIN is only one of the inputs to the KDF; you need to know the others for the PIN to be useful. The TPM (or equivalent) is designed to make it hard to exfiltrate the keys even if you’re willing and able to decap the chips and attack them with a scanning electron microscope. If you can’t do that, they do rate limiting (in the case of TPMs, often by just being slow; Apple’s Secure Enclave chips implement exponential backoff, requiring longer waits after each failed attempt). If you get three guesses and then have to wait a few minutes, with the wait getting longer after every failed attempt, even a 4-digit PIN is fine (assuming it isn’t leaked or guessed some other way, such as being your birthday, but in those cases an arbitrarily long PIN is also a problem).
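
                        Back-of-the-envelope, with invented parameters (a 1-second initial wait, doubling per failure, capped at an hour since many implementations cap the backoff):

                        ```python
                        delay, total = 1.0, 0.0
                        for _ in range(10_000):  # every possible 4-digit PIN
                            total += delay
                            delay = min(delay * 2, 3600.0)
                        print(f"{total / 86_400:.0f} days to exhaust the keyspace")  # ~416 days
                        ```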

                    2. 5

                      1-3 months later: “you must change your password”. That’s it, I’m going Postal.

                        I knew someone who hated that rule, but he figured out that the system would forget his previous passwords after ten changes. So every time, he changed his password eleven times, putting his old password back at the end.

                      1. 2

                        Thus (at least partially) defeating the very purpose of the bogus policy to begin with. Oh well, as long as I can just number my password that’s not too bad.

                        1. 1

                            At university I had to change my password on a regular basis. There was no requirement, but my 128-character password would trip up the occasional university system. And the ’ character (or some character) would trip up ASP.NET’s anti-SQL-injection protections, and a couple of sites that I had to use very infrequently were ASP.NET. So I would change my password to something simpler, then change it back.

                          Eventually they instituted a password history thing, or I just got tired of picking a new password, or something like that. I can’t remember. I just remember sitting there trying over and over again to exhaust the history mechanism. I got up to multiple dozens of password changes before I gave up.

                      2. 6

                        Your analogy works better than you think, but your conclusion is wrong.

                        The art field is absolutely a scam, and what art professionals do (which is different from artists) defines the field, making it a total con. Same with these security “professionals”.

                          There are real artists and there are real security researchers. They account for an insignificant share of activity in their respective industries. Clearly a very important part, of course, otherwise the grifters wouldn’t have anything to grift on. But the dynamics of their industries come from the grifters, not from the researchers or the real artists.

                      3. 15

                        It is mostly compliance theater, so one cannot be sued. Nobody really cares, just make sure all boxes are ticked.

                        1. 14

                            Security is a huge problem, so there are a lot of vendors and a lot of people in it. It’s a very immature industry and, frankly, the standards for getting into it are extremely low. Most security analysts have virtually no ability to read or write code; it’s shocking but it’s true. Their understanding of networking isn’t particularly strong either - your average developer probably has a better understanding.

                          You’re describing the “bad” that I’ve seen a bit of, but it’s quite the opposite of my personal experiences. At Dropbox, when I was on the security team at least (years ago), we didn’t say things like “no” or “that’s your responsibility”. We had to pass a coding test. We built and maintained software used at the company. When we did security reviews we explained risks and how to mitigate, we never said “no” because it wasn’t our place - risk is a business decision, we just informed the process. Lots of companies operate this way but it takes investment.

                          Unfortunately the need is high and the bar is low, so this is where we find ourselves.

                          I would not write off the entire security industry other than a few people as a scam.

                          1. 4

                              Some of us do understand networks. I’ve been doing infosec for about 30 years, and I agree that most of the auditors and analysts tend not to know enough. It’s very sad to see the complete lack of technical competence within the field. Even with supposed standards such as the CISSP, I find a lack of understanding. I’m working to change that by teaching the younger folks about networking and intelligent analysis. So I’m doing my bit.

                            1. 5

                                Oh, I’ve worked with a ton of security people who know a ton of stuff and are impressively technical. I just mean that the skillset you get at any given company can vary wildly. What most security people do bring to the table is information on actual attacks, what attackers go for, an interest in the space, etc. But in terms of specific technical skills the bar is all over the place.

                          2. 10

                              Kinda makes me think the computer software business is a huge scam :P The examples read to me as if “business does not care about actual security, does the bare minimum it can get away with, and then is surprised that this isn’t appropriate”.

                            1. 8

                              Isn’t that how businesses do everything, though?

                            2. 6

                                It’s no more BS than almost everything else. Any even moderately sized organization will consist largely of people who are some combination of: completely out of their depth, unable to think about nuance, indifferent to the results of their actions, selfish, and so on. Most people just want to get paid and not get in trouble, and expect the same from other people. It takes huge determination and wisdom from the leadership to steer any larger group towards desirable outcomes. Governments, corporations, etc. are doomed to complete inefficiency for this very reason.

                              1. 4

                                All of this leads me to believe IT security industry, such as CVEs, is entirely bullshit.

                                It’s unfortunately just not a thing, IMO, that’s well-suited to being commodified, reduced to a set of checklists (though checklists are a helpful tool in the field, to be sure) and turned into an “industry” at all. The people you list are especially good at making exploits harder. There are some people who are doing excellent work at detecting and responding to exploits as well.

                                  Apart from those two areas, which are important, I feel like the big advances to be had are in the study of cognitive psychology (specifically human/machine interaction), but I haven’t quite been able to persuade myself to go back to school for that yet and pursue them.

                                1. 6

                                    I would argue that checklists created by people who don’t understand the subject matter are worse than no checklists. I’ve seen things objectively made worse because of security checklists.

                                  1. 4

                                    I think we probably agree, but I’d argue that the problem is that they’re often used to supplant rather than supplement expert judgement.

                                    Two examples I can think of offhand are flight readiness checklists and pre-surgery checklists. Those are often created by managers or hospital administrators (but with expert input) who don’t understand the details of the subject matter the same way as the pilots, mechanics and surgeons who will execute them. And they’ve been shown to reduce errors by the experts who execute them.

                                      What we’re doing with IT security checklists, though, is having non-experts create them, having non-experts execute them, and then considering them sufficient to declare a system “secure”. A checklist, even created by non-experts, in the hands of someone who understands the details of the system well is helpful. A checklist, no matter who created it, executed by someone who doesn’t understand the system and then signed off by someone who also doesn’t, makes things objectively worse.

                                    1. 6

                                      I was under the impression (mainly from Atul Gawande’s book) that flight checklists and surgical checklists were created mostly by experts. The managers are mostly responsible for requiring that checklists are used.

                                      1. 2

                                        While I have heard about that book, I haven’t read it. My only experience with flight checklists is from a mechanic’s perspective, so I can’t authoritatively say who created them, but the impression I got was that it was project managers synthesizing input from pilots, mechanics and engineers. Working with hospital administrators, I would definitely argue that the administrators had creative input into the checklists. Not into the technical details, exactly. But they were collecting and curating input with help, because they were responsible for the “institutional” perspective, where the experts who were detailing specific list items were taking a more procedure-specific lens.

                                        The big beneficial input of the managers was certainly requiring that checklists be used at all, and maybe “synthesized” is a better word than “created” in the context of their input to the lists themselves.

                                        1. 9

                                          I think it’s worth mentioning that air travel is highly standardized and regulated. Commercial flight especially, but even private planes must comply with extensive FAA regulations (and similar outside the US, for most countries).

                                          The world of IT isn’t even close to that level of industrial maturity on a log scale.

                                          1. 2

                                            That’s a really good point. “Industrial maturity” is the phrase I was grasping for with my first sentence in my first reply, and kind of described but didn’t arrive at.

                                      2. 2

                                            I think the meaningful difference is that people would push back on actively wrong items on these checklists.

                                            Imagine “poke your finger into the open wound” levels of bad, not just “make sure to wash your hands three times”, where someone would say “we used to do two, and it was fine, but OK, it takes 1/10 the time of the operation and hurts no one”. Versus “make sure to install Norton Antivirus on your server, and no, we don’t care if you run OpenBSD” - an absolutely horrible take on security that would make everything worse even if it were possible.

                                            I remember one checklist that said “make sure antivirus X is installed” - the problem was that it made every Linux box run at 20% speed (we measured) when it scanned git checkouts. So we made sure it was installed and not running. Check.

                                  2. 2

                                    IT security field is a huge scam.

                                        I don’t think the field is a scam; however, some people working in IT security are scammers or amateurs.

                                    1. 2

                                      Sending a phishing email to all the employees. Of course somebody fell for it,

                                          Actually, just sending a phishing email to everyone and then checking who clicked would be a great thing to do. I work for a tiny company and not everyone is that savvy.

                                      1. 10

                                        At previous $workplace, we had that cheesy warning in Outlook telling you the mail was from someone outside your organization. Of course when the security team ran a phishing test, they disabled that, and people fell for it. I guess because it’s really easy to spoof a sender that looks like they’re internal? If so, what’s the use of the warning in the first place?

                                        1. 2

                                              Your mail server should reject mail claiming to come from its own domain when it arrives from other servers, even before DKIM and SPF come into it.

                                          1. 7

                                            Yes. Well, my point is that the security team disabled one part of security theater to sneakily trap people to click on a link in an email that should reasonably have been flagged as outside the organization if the first piece of theater was active.

                                            1. 2

                                              I don’t think it’s theater. If the detection works as intended, it’s a very good indicator of potential phishing.

                                              1. 14

                                                In my experience it’s way too noisy to be a useful indicator. I’d estimate that 95% of the emails I get have a yellow WARNING: EXTERNAL EMAIL banner at the top. Every email from Jira, GitLab, Confluence, Datadog, AWS, you name it… they’re all considered “external” by the stupid mail system so they all get the warning. People develop something like ad banner blindness and tune this warning out very quickly.

                                                1. 3

                                                  Also helps that it’s the first line in an email, and we’re already trained to skip right past the “Dear So and so,”

                                                  1. 2

                                                    I suppose this is a benefit to my employer self-hosting Jira, GitLab, Confluence, etc. - their emails all go through internal mail servers, and as such don’t have the “EXTERNAL EMAIL” warning on them (unless they’re not legit)

                                              2. 4

                                                I tested this when I was at Microsoft. The SPF and DMARC records were correctly set up for microsoft.com, yet an email from my own Microsoft address to me, sent from my private mail server, arrived and was not marked as coming from an external sender. The fact that it failed both SPF and DKIM checks was sufficient for it to go in my spam folder, but real emails from colleagues sometimes went there too so I had to check it every few days.

                                          2. 1

                                                  My partner worked at a large non-tech firm. Their internal software had no granular ACLs: people had global read/write access to everything by default; you just needed to log in. If you managed to compromise an account, you could wreak total havoc on their systems. Yet they had a dedicated security department, and if you didn’t update your computer to the latest Windows version when it was suggested to you, a warning would be sent to your manager…

                                                  There are many similar situations here at $WORK. I guess you could say I am the very security department you speak of. Hypothetically, if I were in this scenario (I’m not, but I am in similar ones), the thing is, we can convince people to install updates - they’re automatic these days anyway. But we cannot convince internal dev teams that are understaffed and overworked (and cough underqualified) to create some perfect ACLs in the decade-old internal crapware they’ve been handed to maintain.
                                            We tell them it’s vulnerable, add it to the ever-growing risk register, make sure assurance is aware, and move on to the next crap heap.

                                            There are just not enough “resources” (i.e. staff, people) around sometimes.

                                          3. 15

                                            CVSS is designed for ass-covering for vulnerability scanners, not for a practical assessment of the risk. It requires assuming the worst possible scenario, and one can always imagine a benign piece of software irresponsibly put in charge of some security-critical functionality.

                                            1. 12

                                                    The problem with giving numbers meaning in a decision-making process is that people will do math with them. You wouldn’t count CVEs, would you?

                                              Maybe names, logos and websites for severe bugs were actually the better solution ;)

                                              1. 5

                                                Maybe names, logos and websites for severe bugs were actually the better solution ;)

                                                Jerk. I read that right as I was swallowing some coffee, and I burst out laughing. Coffee in the nose is not a nice experience, and I blame you.

                                                (And I think the overall problem is our urge to turn what should be a qualitative analysis into an easier-to-measure set of tickboxes. The tickboxes work to assess bulk trends, but not to assess individual situations, and we care about not having individual sites exploited.)

                                                1. 9

                                                  My apologies. You may redeem this comment as a voucher for 1 (one) free coffee if you ever find yourself in Berlin, Germany. :-)

                                                  1. 11

                                                          Unfortunately, I have to give this CVE a 10, because you could be exploited for a free coffee with a simple social engineering attack, likely including the word “please”

                                                    1. 4

                                                      I have been exposed! :-)

                                                2. 3

                                                  How about a trophy case section on your landing page with logos for all the brand name bugs closed in your product. :P

                                                3. 11

                                                  theoretical because most problems are never actually spotted exploited

                                                  That’s an odd definition of theoretical. I would call it theoretical if there’s no strong reason to believe that it is exploitable, not if it isn’t exploited.

                                                  What if we can guess that the problem is only used by a few or only affects an unusual platform? Not included.

                                                        To me, this is something you just disclose. I receive CVE notifications sometimes and I dismiss them because they simply aren’t relevant to me, but I still want the vendor to tell me. An obvious example is a vulnerability in a library that never interacts with attacker input - the classic one is a regex DoS where all regular expressions are statically controlled by me. This happens near constantly, and it’s pretty simple to read “this configuration is vulnerable” and go “okay, that’s not me, neat, closed”.

                                                  The curl project is a CNA

                                                  Linux and curl probably should never be CNAs since the project maintainers fundamentally don’t believe in the system. I don’t really know why they were made CNAs, especially Linux, but whatever.

                                                  How on earth does anyone expect them to get this right?

                                                  But… you’re the one who’s making them do it. Curl is explicitly opting out of providing CVSS scores, users rely on these for prioritization, so CISA is stepping in. And unsurprisingly they aren’t in the ideal position to do it.

                                                  The curl security team had set the severity to LOW because of the low risk and special set of circumstances that are a precondition for the problem.

                                                  To me, this isn’t what risk is. I actually don’t want curl to say “well but 99% of you won’t be impacted”. Just tell me the impact if exploited, etc. CVSS captures that just fine. Then tell me the circumstances in which it is exploitable and I’ll close the ticket myself because that’s my job to know how to do.

                                                  So no, since we do not do the CVSS dance, we unfortunately will continue having CISA do this to us.

                                                        I feel like the solution here is not that complicated. Do the dance, publish scores, and just add a note to each CVE explaining that you aren’t vulnerable if you don’t run under a specific configuration. Companies are more than capable of reading the note, determining their configuration, and not patching. I do this all the time. Yep, curl is networked, so it’s going to have higher scores. So what?

                                                        CVSS solves for “this version of this software contains a bug that can be exploited under coarse-grained conditions (remote, local) and has a coarse-grained effect (DoS, code execution)”. That lets me triage. When something is high/critical I can just look at the patch notes provided and say “oh, okay I get that, but it says I have to run with a flag that I never run with and I just grep’d for it. Cool. Closed”.

                                                  If Curl has a bunch of angry users going to them over this, idk, that sucks? I feel like that’s an issue with the security industry being filled by “security engineers” who can’t evaluate patch notes. That’s not a problem with CVSS.

                                                  The note about Go seems to confirm this. Opting out of CVSS isn’t helping. CISA is going to step in because end users (like me) who have to ensure things are patched need some kind of flag that says “do this”. Maybe some vendors should be able to override the CVSS? I don’t know. I got a “critical” go vuln the other day and it was a minute or so to determine it didn’t apply to me so I closed it. CISA’s score wasn’t the problem there.

                                                  CISA is going to keep stepping in. We just got another federal mandate to ensure software vulnerabilities are patched. Patching isn’t going anywhere.

                                                  I’m very sympathetic to curl having a bunch of people freaking out, I just don’t think their approach is going to help that, and I don’t think CVSS / CISA are the issues - the standard for security professionals is just jarringly low and the strategy is largely to rely entirely on vendors while paying your internal security people very little.

                                                  1. 20

                                                    Linux and curl probably should never be CNAs since the project maintainers fundamentally don’t believe in the system.

                                                    They rightfully don’t believe in the system because it completely broke down for them.

                                                    I don’t really know why they were made CNAs, especially Linux

                                                    Because it’s the only way to have any control and not have random morons create braindead CVEs that don’t exist.

                                                    Companies are more than capable of reading the note, determining their configuration, and not patching.

                                                    I assume you are not an OSS project maintainer, because no they are not.

                                                          When something is high/critical I can just look at the patch notes provided and say “oh, okay I get that, but it says I have to run with a flag that I never run with and I just grep’d for it. Cool. Closed”.

                                                          Apparently you missed the entire section of TFA where they explain that idiots will run scanners and then bust your ass because there are signalled vulnerabilities above some meaningless score. That happens every week, if not every day, and every time you have to waste time figuring out what they’re talking about.

                                                    1. 11

                                                      Because it’s the only way to have any control and not have random morons create braindead CVEs that don’t exist.

                                                            I mean, we built a 0day with a fully reliable exploit for io_uring and were basically told “meh, I guess if you really want to file a CVE…” by Greg lol

                                                      I assume you are not an OSS project maintainer, because no they are not.

                                                            Apparently you missed the entire section of TFA where they explain that idiots will run scanners and then bust your ass because there are signalled vulnerabilities above some meaningless score. That happens every week, if not every day, and every time you have to waste time figuring out what they’re talking about.

                                                            Okay, let me be clearer on this. Companies should be capable of this, and they are the ones in the position to do it. I explicitly noted that the security industry is staffed by people not capable of this, but that isn’t a flaw in CVSS. That’s those people being bad at their jobs.

                                                      1. 11

                                                              Curl can’t fix those people being bad at their jobs. It’s completely rational for them to argue for fixes, and to use whatever means are in their power, against the problems that those people being bad at their jobs cause for them. This is a collective action problem that is extremely unlikely to be fixed for curl.

                                                        Companies who want to do better can. But those companies are not the problem curl needs fixed for their own sanity.

                                                        1. 9

                                                          Right, but I explained why I think their approach isn’t going to work. People still rely on CVSS, CISA is going to fill that need, and the wind is heading in the direction of “patching is more important” not the opposite - again, we just had another federal mandate that focused heavily on patching. Avoiding CVSS isn’t going to do much, I actually suspect it will make things far worse for curl, and I don’t think their idea of “low because very few people would be impacted” is good at all, as I explained.

                                                          But those companies are not the problem curl needs fixed for their own sanity.

                                                          I wonder if there isn’t a better solution. Like, for example,

                                                          1. Do CVSS themselves. They gave a concrete example of being able to reduce a score from Critical to Medium using the CVSS calculator. They could probably do themselves a big favor by taking control over that again.

                                                          2. Include “you are only vulnerable if X and Y” in disclosures. Vendors include this in their reports. Companies should be able to read this and understand. They can even add an additional note like “internal curl score is X”.

                                                          3. Close out issues opened by people about these vulnerabilities that are low content. Close out asks based on them. Ignore it, basically. I wonder why that’s not tractable? I know people waste a lot of time on bullshit CVEs and whatnot, but this isn’t that, this is “the CVE already exists and customers are afraid because of an invalid score”.

                                                          I’m not going to try to exhaustively solve this here, but I’m at least skeptical of their current approach.

                                                                Their current approach seems like the worst of all worlds, and the world they seem to want to get to, where they use their internal labeling approach, doesn’t seem helpful either. And they don’t seem to think that CVSS is appropriate for a networked application? And that future revisions can’t help? These are pretty bold statements that I would challenge.

                                                          The curl security team had set the severity to LOW because of the low risk and special set of circumstances that are a precondition for the problem.

                                                          Going back to this quote from Daniel, this is really bad to me. It’s not possible for curl to know the deranged shit I might be doing with their software. Tell me those circumstances, give me the worst case, and it’s on me to know if I’m vulnerable or not. I understand that right now they’re getting barrages of fools who aren’t willing to do that work but that feels like a very separate issue.

                                                          1. 6

                                                            Close out issues opened by people about these vulnerabilities that are low content. Close out asks based on them. Ignore it, basically. I wonder why that’s not tractable? I know people waste a lot of time on bullshit CVEs and whatnot, but this isn’t that, this is “the CVE already exists and customers are afraid because of an invalid score”.

                                                                  Short of having a bot auto-close every ticket that mentions “CVE” or the name of a common vulnerability scanner, this seems like something that doesn’t scale to large projects like curl or Linux.

                                                            1. 3

                                                              I’m not convinced of this. I’d want to see numbers. And I think a bot is a great idea - have a “CVE Question” template and if the issue doesn’t match that template, auto-close it.

                                                            2. 1

                                                              Tell me those circumstances, give me the worst case, and it’s on me to know if I’m vulnerable or not.

                                                              You seem to be complaining that the Curl team should do something they are already doing. Here’s the example that Daniel referred to in the blog post.

                                                              https://curl.se/docs/CVE-2024-11053.html

                                                              1. 3

                                                                I’m not complaining about that at all, it’s incredibly common for vendors to include that information and my point is that this is enough for someone in my position to triage. My point is that changing CVSS makes no sense - the information I care about is CVSS for first order triage and then notes like that to determine if I need to patch for my situation.

                                                                edit: To be clear, “about that” and what you are referring to are, I assume, that they already specify the details of the CVE. Obviously the whole article is about them not providing CVSS, which I’m saying they should do and they’d save themselves a lot of headache.

                                                          2. 6

                                                            I explicitly noted that the security industry is staffed by people not capable of this

                                                            You stated that companies are capable of a thing they effectively are not.

                                                                  that isn’t a flaw in CVSS. That’s those people being bad at their jobs.

                                                            An open system being “completely fine” under the assumption that randos achieve some kind of perfect standard is a completely useless system.

                                                            1. 2

                                                              You stated that companies are capable of a thing they effectively are not.

                                                              I restated this explicitly to clarify my point that they are the ones in position to do it, regardless of if they are capable.

                                                              An open system being “completely fine” under the assumption that randos achieve some kind of perfect standard is a completely useless system.

                                                              This feels like it misunderstands virtually all of my points.

                                                        2. 8

                                                          To me, this is something you just disclose. I receive CVE notifications sometimes and I dismiss them because they simply aren’t relevant to me, but I still want the vendor to tell me. An obvious example is a vulnerability in a library that never interacts with attacker input - the classic one is a regex DoS where all regular expressions are statically controlled by me.

                                                          You must be the first person on earth who enjoys security warnings about regex DoS.

                                                          1. 5

                                                            Oh I do not enjoy them at all, I find them very annoying tbh. I just don’t think they’re that problematic. First of all they’re always medium or lower, so they’re not exactly getting my heart rate up, and second, they’re trivial to close.

                                                            1. 3

                                                              Always medium? Every one I’ve seen so far has been CRITICAL. Especially the ones that only affect build tools.

                                                              1. 2

                                                                  Is that right? Huh. I definitely don’t recall a regex DoS being critical. I’m not sure how anyone could get there either, since even if you max everything out, DoS is going to cap out at high (7.5). I’ve seen a high before because it was in some web framework, iirc. Otherwise usually medium.

                                                                Either way, the second I see “Regex DoS” I close the ticket. Super easy to triage, not much of a bother.

                                                                1. 3

                                                                  Either way, the second I see “Regex DoS” I close the ticket. Super easy to triage, not much of a bother.

                                                                  I’ve seen audits demand that every CVE above a certain CVSS level is either patched, or has a documented explanation for why it doesn’t apply. Which also meant CI wouldn’t pass until it was handled.

                                                                  You close the ticket, I have to spend a whole day investigating why specifically it doesn’t apply to us, and turn that into a multi-paragraph writeup longer than the original CVE description.

                                                                  1. 1

                                                                      What audits? That sounds like an internal policy from your security team. Compliance doesn’t prescribe that sort of thing.

                                                          2. 8

                                                              I feel like the solution here is not that complicated. Do the dance, publish scores, and just add a note to each CVE explaining that you aren’t vulnerable if you don’t run under a specific configuration. Companies are more than capable of reading the note, determining their configuration, and not patching. I do this all the time. Yep, curl is networked, so it’s going to have higher scores. So what?

                                                              I think the reason you are happier with this is that you are the security department; you have the ability to do that.

                                                            Many of the people most annoyed by these processes have dealt with less reasonable security departments (or clients, users, auditors) who have the ability to insist you drop everything and address something because there’s a CVE attached, even if the CVE is meaningless in your context. And by address, they mean make code changes, they won’t accept an explanation for why the issue is not relevant.

                                                            1. 4

                                                              Right, and I’m very fine putting the blame on those departments. The fix here is for curl to not engage with them. Their existing solution seems to make things worse for everyone, including, I would think, themselves.

                                                              And by address, they mean make code changes, they won’t accept an explanation for why the issue is not relevant.

                                                              Tell them to fuck off, right? Like, seriously, curl is open source. They can just close issues as “won’t fix” and point to a policy for CVEs etc. That is fine.

                                                              1. 4

                                                                Maybe we got completely different things out of that article but what I was reading was essentially this.

                                                                1. 3

                                                                  Curl is saying they won’t engage in CVSS. I’m saying CVSS isn’t the issue, they should continue to use CVSS and that their rejection seems strange and their solution doesn’t seem like it helps with anything.

                                                                  1. 5

                                                                    I’m guessing, but I suspect the problem is operating an opensource project at curl’s scale with a small team. Given the amount of systems set up to parse CVSS and automatically open tickets, etc, I’d imagine a high score, even with text explaining the conditions it applies to, opens a barrage of low quality input for the maintainers. Noise that will be hard to filter out.

                                                                    1. 3

                                                                      But CVSS is going to happen anyway, just worse now. And their criticism of CVSS seems to be that the scores are too high because curl is networked (I disagree, that’s the point of CVSS) and that it doesn’t capture nuances like “your config isn’t vulnerable” but I’ve explained why I don’t think CVSS should capture that at all because it’s my job to know if my config is vulnerable.

                                                                    2. 3

                                                                      This is where we disagree. I think CVSS as it works now is exactly the problem. Since Curl is experiencing backwards progress in their interactions with it and are proving to be unsuccessful in getting it to change I’m not sure what else they can do other than stop.

                                                                      To Daniel’s credit he’s still committed to trying to fix the system. But he’s using the tools he has available.

                                                                        • A high-profile project everyone has to care about
                                                                        • Consequences for mistreating that project, which have an above-average impact on CVSS.

                                                                        This is how you begin to fix the issue. If you don’t think CVSS is the problem, this looks wrong. But if you do think CVSS is the problem, then this is an attempt to make progress.

                                                                      1. 1

                                                                          What is the problem with CVSS? The justification in the article is pretty weak (scores are too high because curl is networked? Is that really a problem?), and the solution doesn’t seem to address it well either.

                                                                        1. 8

                                                                          The problem is that CVSS scores do not reflect reality for a large amount of the software they get applied to and are not fit for purpose. It’s like getting bombarded with false positives when you are carrying the pager. It trains you to ignore the pager and delays addressing actual problems when they happen.

                                                                            The solution is to stop sounding the alarm on bogus stuff, which means changing the metric you use to sound the alarm. Curl is correct that context matters here and the scores lie. Why continue supporting the lie and encouraging a practice that contributes to security malpractice in the industry?

                                                                          I used to be the guy who had to be point on standardized audits for software. The audits were nearly always 80% noise that had to be waded through, because the auditors were doing checklist ticking: reporting any CVE with a score above a certain level without regard to context. Getting them to certify us meant “resolving” every single one of them. Not getting certified was not an option due to the industry we were in (healthcare insurance).

                                                                          Because of that experience I want Curl to win, because I want those auditors doing “pen tests” to actually do their job, so I get issues that are real instead of spending my time arguing with them about the severity of issues. As it currently stands, the cost of not doing their job is not paid by those auditors. It is paid by the users of the software, because security becomes a slog through checkboxes with little energy left over for actually doing real security.

                                                                          1. 3

                                                                            The problem is that CVSS scores do not reflect reality for a large portion of the software they get applied to and are not fit for purpose. It’s like getting bombarded with false positives while you are carrying the pager. It trains you to ignore the pager and delays addressing actual problems when they happen.

                                                                            Right, the article covers this and I disagree. It’s also not a direct problem for curl (the problem is users going to curl afterwards, which I’ve proposed solutions for); it’s a problem for the security people who have to respond. CVSS scores represent what they represent - a very simple calculation based on factual information. It’s not up to curl to say (because they are in a terrible position to say) “but you shouldn’t worry because this configuration is weird” - they don’t know my configuration, I do. Curl is in a position to say two things:

                                                                            1. Here’s the CVSS score, which represents a sort of worst case
                                                                            2. Here’s the configuration that you can check to see if you’re vulnerable

                                                                            That’s it, and that’s the system they’re rejecting.
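
                                                                            (As a concrete sketch of those two things together - a hypothetical, machine-readable advisory record; the format, field names, and the HTTP2 feature check are all made up, not curl’s actual advisory format:)

                                                                              # Hypothetical advisory record: the worst-case score plus a concrete
                                                                              # predicate that the consumer, who knows their own build, evaluates.
                                                                              advisory = {
                                                                                  "id": "EXAMPLE-ADVISORY",                           # placeholder
                                                                                  "cvss_v3": "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",   # worst case
                                                                                  "affected_if": lambda build: "HTTP2" in build["features"],
                                                                              }

                                                                              my_build = {"features": ["SSL", "IPv6"]}  # I know my configuration
                                                                              if advisory["affected_if"](my_build):
                                                                                  print("patch now")
                                                                              else:
                                                                                  print("the worst-case score does not apply to this build")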

                                                                            The solution is to stop sounding the alarm

                                                                            CVSS isn’t an alarm, though. And their approach is demonstrably worse if you consider it an alarm, because CISA is obviously going to defer to the more severe score given ambiguity. This also isn’t a curl burden; again, it’s a security-professional burden. I’m the one who has to deal with alarms.

                                                                            the scores lie

                                                                            The scores can’t lie and they don’t lie. CVSS is extremely simple to calculate. Unless you lie about the inputs, the score is simply what it is.
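
                                                                            (For concreteness, a minimal sketch of the CVSS v3.1 base-score arithmetic from the published FIRST spec - scope-unchanged metrics only - showing how mechanical the calculation is:)

                                                                              import math

                                                                              # CVSS v3.1 metric weights from the FIRST spec (scope unchanged).
                                                                              AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
                                                                              AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
                                                                              PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
                                                                              UI = {"N": 0.85, "R": 0.62}                        # User Interaction
                                                                              CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

                                                                              def roundup(x):
                                                                                  # Round up to one decimal place, exactly as the spec defines it.
                                                                                  i = round(x * 100000)
                                                                                  return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

                                                                              def base_score(av, ac, pr, ui, c, i, a):
                                                                                  iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
                                                                                  impact = 6.42 * iss
                                                                                  exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
                                                                                  return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

                                                                              # AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N -> 7.5
                                                                              print(base_score("N", "L", "N", "N", "H", "N", "N"))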

                                                                            Why continue supporting the lie and encouraging a practice that contributes to security malpractice in the industry?

                                                                            Because it lets me do first order triage.

                                                                            Because of that experience I want Curl to win, because I want those auditors doing “pen tests” to actually do their job, so I get issues that are real instead of spending my time arguing with them about the severity of issues.

                                                                            I don’t know the situation you were in but:

                                                                            a) Curl won’t win. As mentioned, yet another federal mandate on patching just came out.

                                                                            b) Whatever process you had that led to pentest results needing to be an argument sounds like the problem, not CVSS.

                                                                            I have plenty of problems with auditors, CVSS isn’t relevant. The obvious issue is that they’re bankers.

                                                                            1. 3

                                                                              I would argue that the auditors being accountants/bankers is because CVSS turned it into a ledger-of-numbers problem for large parts of the industry. But I suspect we will have to agree to disagree. CVSS is a problem for me as a consumer and as a maintainer of software.

                                                                              1. 1

                                                                                What is the problem you have with CVSS? I’m curious how it impacts you, especially as a maintainer of software.

                                                                                1. 3

                                                                                  I wasn’t using “maintainer” in the OSS sense. I don’t work on anything with the surface area to attract much CVSS stuff. But I already went through how it impacts the work I do above. It incentivizes practices that I don’t get a choice about participating in. We are going in circles at this point.

                                                            2. 11

                                                              Even the open source world itself isn’t immune to the “security scanner” stuff.

                                                              I maintain a few Fedora packages, and one of them is a fork of CEF (obs-cef, now retired, but I need to redo it as cef when I have some time), which is a Chromium derivative. Fedora/RH seems to run some sort of security scanner, where packages containing code that has CVEs get bugs auto-filed for them by some script.

                                                              Since Chromium vendors a thousand packages, I get bugs filed whenever any of those has a CVE, even if that CVE doesn’t affect Chromium or that dependency isn’t even present in the built binary (e.g. test-only code, or code used in a server component where Chromium only uses the client component). But it’s even worse. I’ve had bugs filed for packages that weren’t anywhere to be seen in the Chromium codebase. When I looked around, I found a reference to the package name in a source comment in another vendored package. It seems the scanner is something like grep -r vulnerable_package_name src/

                                                              Getting a bug and having to grep through the Chromium source only to mark it as invalid because it doesn’t apply gets old after a while…
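
                                                              (A toy version of what such a name-grep “scanner” presumably does - hypothetical, inferred from the behaviour described above:)

                                                                from pathlib import Path

                                                                def naive_scan(src_root, package_name):
                                                                    # Flag any file that merely *mentions* the package name, so a
                                                                    # comment in some other vendored package counts as a "hit".
                                                                    return [
                                                                        p for p in Path(src_root).rglob("*")
                                                                        if p.is_file() and package_name in p.read_text(errors="ignore")
                                                                    ]

                                                                # No notion of whether the code is built, shipped, or even reachable.
                                                                print(naive_scan("src/", "vulnerable_package_name"))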

                                                              1. 7

                                                                What’s the point of a 0-10 score with decimals (effectively 0-100)? Does the scoring really need that much granularity? Like, is there an actual difference between 5.4 and 5.7?

                                                                It seems weird to me they don’t just map it to something sane, like low/medium/high/critical. But maybe there’s an actual valid reason for this?

                                                                1. 2

                                                                  It’s a mixed bag, and each score also gets a low/medium/high/critical rating assigned, so you can make a better judgement.

                                                                  5.4 vs 5.7 comes from combining the different CIA scores (confidentiality, integrity, and availability). Depending on your industry, you choose what matters more to you: is it integrity of data or service availability? Maybe you don’t care about confidentiality at all. Nobody knows better than you and your country’s laws.

                                                                  The CVSS score is useful, but not in the way it’s presented: using it to judge your system’s security is a bad idea, as the article demonstrates. Most companies ask for a security audit only to cover their arse, which is why Vanta became such a popular tool. Security scanners are garbage if you don’t know how to use them and treat everything as insecure just because they found a couple of vulns.
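
                                                                  (That mapping does exist, for what it’s worth - CVSS v3.x defines qualitative bands on top of the number. A minimal sketch:)

                                                                    def severity(score):
                                                                        # Qualitative severity bands from the CVSS v3.x spec.
                                                                        if score == 0.0:
                                                                            return "None"
                                                                        if score <= 3.9:
                                                                            return "Low"
                                                                        if score <= 6.9:
                                                                            return "Medium"
                                                                        if score <= 8.9:
                                                                            return "High"
                                                                        return "Critical"

                                                                    print(severity(5.4), severity(5.7))  # Medium Medium - same band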

                                                                2. 3

                                                                  I do think that the system in its current form is pretty broken. However, if people were to look at things like EPSS and CVSS version 4 and combine them in a nice weighted matrix (which has already been done), this gets a lot easier. You can have a priority. The problem is all the checkbox auditors who insist on simple checks because they simply don’t understand things.
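
                                                                  (A sketch of the kind of weighted combination meant here - the formula and weights are illustrative only, not any standard:)

                                                                    def priority(cvss_base, epss):
                                                                        # CVSS base ~ impact if exploited (0-10); EPSS ~ probability of
                                                                        # exploitation activity in the next 30 days (0-1). Their product
                                                                        # gives an expected-badness style priority. Hypothetical weighting.
                                                                        return round((cvss_base / 10) * epss, 3)

                                                                    # A 9.8 nobody exploits ranks below a 7.5 under active exploitation:
                                                                    print(priority(9.8, 0.01))  # 0.01
                                                                    print(priority(7.5, 0.60))  # 0.45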

                                                                  1. 12

                                                                    At least for SOC2, which is what’s going to be relevant for the vast majority of companies, auditors generally don’t care whether you patch or not. What they care about is that you have a patching policy and that you follow that policy, and ideally that your policy maps to something like a NIST threat model.

                                                                    A valid policy would be: “We patch all CVEs in our software within X weeks. We can extend that to Y weeks, or indefinitely, if our security team conducts an investigation and finds that the CVE does not increase our risk due to mitigating factors.” Then you just maintain a template document for those exclusions; a seceng fills it out when a CVE is not important to you, and you show the auditor that you have those documents.
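
                                                                    (That template can be as simple as a structured record; a hypothetical sketch in which every field name is made up:)

                                                                      from dataclasses import dataclass
                                                                      from datetime import date

                                                                      @dataclass
                                                                      class CveException:
                                                                          # One of these gets filed for every CVE not patched within the SLA.
                                                                          cve_id: str          # the CVE being excluded
                                                                          component: str       # where it would otherwise apply
                                                                          rationale: str       # the mitigating factors found by the review
                                                                          reviewed_by: str     # the seceng who signed off
                                                                          revisit_by: date     # even "indefinitely" gets a re-review date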