I appreciate Daniel’s view here, but part of the problem is that entities on both sides have acted in bad faith:
Reporters have filed bogus CVEs, or inflated the severity of CVEs.
Projects have claimed things are not security issues when they clearly are (in some cases, after they’re exploited in the wild).
Daniel acts in good faith and so may not have been aware of the latter case, but (until fairly recently) it was more of a problem than the former. It looks as if, at some point in the last few years, the balance tipped in the opposite direction. Most moderately high-profile projects handle good-faith CVEs well, but an increasing number of people are filing bogus CVEs. The MITRE process was set up assuming good faith from most vulnerability researchers and has been slow to adapt.
My concern about the response of more projects becoming their own CNAs is that this makes it easier for projects to pretend vulnerability reports are not real. It would be nice if there were some impartial organisation that could validate CVEs in cases of dispute. If someone files a CVE against curl and Daniel says ‘this is obviously nonsense’ then there needs to be a process where the original author can say ‘no, look, I have a PoC that exploits it’ and someone else can adjudicate. MITRE doesn’t have the resources to do this. There’s still an unfortunate asymmetry, especially in the age of ChatGPT, where writing a bogus vulnerability report is far less effort than validating that a report is real. I wonder if there’s some model of bug bounties and pay to report that could work. If you had to pay $100 to report a CVE, got back $1000 if it is real, and lost the $100 if it isn’t, then that might discourage the spammers, but it might also discourage a lot of real security researchers.
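To put rough numbers on that incentive (the deposit and payout are the figures above; the true-positive rates are invented purely for illustration):

```python
# Back-of-the-envelope expected value of the pay-to-report idea above.
# The $100 deposit and $1000 payout are from the comment; the
# true-positive rates are made up just to illustrate the asymmetry.
def expected_value(deposit: float, payout: float, p_real: float) -> float:
    # The deposit is paid up front; the payout only arrives if the report is real.
    return p_real * payout - deposit

print(expected_value(100, 1000, 0.02))  # spam-mill reporter, ~2% real: -80.0 per report
print(expected_value(100, 1000, 0.80))  # careful researcher, 80% real: 700.0 per report
```

The deposit only stings reporters whose hit rate is low, which is the point, but it also taxes anyone who can’t afford to stake money on a report.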
Being a CNA seems workable for curl because Daniel is conscientious and properly investigates security reports. I don’t think it would scale (I’m not sure what track record you need to be a CNA, or if you can have the status revoked if you are maliciously refusing to record CVEs for real vulnerabilities).
If MITRE doesn’t have the resources to run an adjudication process, it seems like they don’t have the resources to run a trustworthy CVE program. The issues are exactly as you describe: there are temptations and incentives for both sides to act in bad faith. There is really only one way to handle that problem, and an escalation and adjudication process is that way.
As it stands right now, the CVE database is growing increasingly untrustworthy over time, and the current news cycle is only making it worse.
I wonder if there’s a way to build some kind of reputation system for peer review of CVEs. If you haven’t submitted a CVE before, it gets put in an ‘unconfirmed’ state. If it’s confirmed by the vendor, you immediately get reputation. If it’s disputed by the vendor, it goes into a pile and people with a sufficiently high reputation can then review it. I’m not sure how you’d incentivise the last bit though. Perhaps reputation decays over time and can be increased by reviewing disputed CVEs.
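Roughly the flow I have in mind, as a toy sketch; the states, events and reputation threshold are all made up just to make the moving parts concrete:

```python
# Toy sketch of the reputation-gated review flow described above.
# State names and events are invented; nothing here reflects how the CVE
# program actually works.
def next_state(current: str, event: str, reporter_reputation: int) -> str:
    transitions = {
        # First-time (zero-reputation) reporters start out unconfirmed.
        ("new", "submitted"): "unconfirmed" if reporter_reputation == 0 else "published",
        ("unconfirmed", "vendor_confirmed"): "confirmed",         # reporter earns reputation here
        ("unconfirmed", "vendor_disputed"): "needs_peer_review",  # goes to the high-reputation pile
        ("needs_peer_review", "peer_upheld"): "confirmed",
        ("needs_peer_review", "peer_rejected"): "rejected",
    }
    return transitions.get((current, event), current)

print(next_state("new", "submitted", reporter_reputation=0))  # -> unconfirmed
```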
StackExchange for CVEs? Complete with high reputation people that smack down the positions they don’t like? Wait, that’s Wikipedia.
This is, as far as I know, how bug bounty programs work.
Even without these issues, CVEs were already not useful for what people use them for: a tool to blindly decide whether some software is safe. For whatever reason, “can’t use any software if it has any CVE” has become semi-common policy.
You’re making a strong claim here. CVEs are not meant to be used to decide if some software is safe. It’s a bad tool for that purpose and always has been. CVEs are mostly used as a database for common understanding of which issue you’re talking about. They exist so you can say we need to deal with CVE-1234 rather than having exchanges like “that buffer overflow issue… no, the one in the headers… no, the one from 2 weeks ago… etc.”.
It’s right on the page: “The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities.” Using them to judge whether an app is safe is just going to give you new problems - there’s way too much extra context needed for that, which they cannot capture.
I know what it’s supposed to be used for. I also know that many managers try to use it as a proxy, or indicator. I personally know multiple people who have to operate under the policy of “don’t use any software versions with CVEs”. It’s very ridiculous.
I don’t really get how this is a problem with CVEs. Idiots will be idiots, so what? CVE still provides value for notifying and cataloging vulnerabilities.
The notifying part has become tooling that raises an alarm about every instance of a “vulnerable” piece of software, without having the ability to verify whether it’s actually affected.
This, combined with mass low-effort beg-bounty reports and cover-our-asses inflated CVSS scoring, means everything is “critically vulnerable” all the time, when in fact it’s just some build-time dependency having an expensive regex in a function nobody uses.
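For anyone who hasn’t hit that class of finding, the classic shape of an “expensive regex” is catastrophic backtracking; whether it matters depends entirely on context the scanners never see:

```python
# Classic catastrophic-backtracking regex: the nested quantifiers make the
# engine try exponentially many ways to split the 'a's before giving up.
# Whether attacker-controlled input can ever reach it is exactly the
# context automated alerts don't have.
import re
import time

pattern = re.compile(r"^(a+)+$")
start = time.perf_counter()
pattern.match("a" * 26 + "b")   # no match, but already takes noticeable time;
                                # a few more 'a's and it effectively hangs
print(f"{time.perf_counter() - start:.2f}s")
```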
I don’t think the expensive regex “vulns” are ever higher than medium, but we can agree that people spam low CVEs out as resume fodder.
Regardless, I’m not sure what people expect. As was mentioned elsewhere, the reason it’s like this is because for decades vendors would refuse to disclose vulnerabilities and this still happens today. CVEs allow 3rd parties to make assertions about vulnerabilities instead, and vendors can dispute them if they choose to do so but ultimately the CVE means that users are in the loop about it.
I do wonder if the incentive programs that have sprung up (e.g. universities asking students to find vulnerabilities, t-shirts for pull requests, resume padding) have just totally poisoned this system into “run unintelligent tool, file output without critical thought”.
I don’t feel, although I have no first-hand experience of this, that projects were as hostile to CVEs until the system poisoning turned them against it. It’s not just experienced, good-faith researchers anymore, so why should a project waste time constantly having to validate the mess?
At Google we have a system that sucks in CVEs, performs some heuristics to try and figure out a priority itself (not just what was reported, AFAICT) and then files a bug on the appropriate team. I once requested access to the entire log for my team and saw that it screens out a lot. When I did go and investigate myself, the screened-out reports really were just bogus or relied on a really silly attack vector (e.g. physical access to a device, at which point you’re stuffed anyway), so chasing them would have been pointless. I was very glad I wasn’t getting the firehose and was actually getting useful and actionable reports.
I don’t believe in MITRE and I don’t believe in the CVE system. What the alternative is I don’t know. Something with more friction? Something that validates researcher credentials? Perhaps CISA steps up with its own thing and properly validates. Perhaps some academic consortium. But MITRE and CVEs ain’t it.
I don’t suppose any of this tooling is (written about in) public?
I haven’t seen any evidence of it, no. I think it’s pretty custom built for Google architecture.
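The general shape of that kind of screening isn’t hard to imagine, though. A purely hypothetical sketch (not the tooling discussed above) might key off the CVSS vector attached to the report:

```python
# Purely hypothetical screening sketch, nothing to do with the actual
# (non-public) tooling. It just parses the CVSS v3 vector string that came
# with a report and down-ranks things like physical-access-only attack
# vectors before a human ever sees them.
def screen(cvss_vector: str) -> str:
    metrics = dict(part.split(":", 1) for part in cvss_vector.split("/") if ":" in part)
    if metrics.get("AV") == "P":                      # physical access required
        return "drop"                                 # "you're stuffed anyway"
    if metrics.get("AV") == "L" and metrics.get("PR") in ("L", "H"):
        return "low-priority"                         # local, already-privileged attacker
    return "file-bug"                                 # worth a human look

print(screen("CVSS:3.1/AV:P/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # -> drop
```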
It’s worth noting that, while MITRE is a “Top-Level Root” in the CVE system, they’re not the only organization that can or does adjudicate CVE submissions. In fact, the entire design of the CNA tree is to enable organizations with the greatest on-the-ground expertise to stand as the experts who decide if a vulnerability is legitimate or not.
You can view the high-level structure of CNAs on the CVE website.
Disclosure: I work at MITRE. I do not work on the CVE team. I have no insight into the deliberations they make nor any knowledge of this Curl vulnerability’s review. I am not speaking for MITRE or the CVE program.
Thanks for the clarification. It seems like the adjudication other CNAs can do in their area of expertise can only take the form of DISPUTED, which is the core of the issue. I do think that DISPUTED should probably be temporary, with a process for moving to something more settled. The lack of that process is part of what Daniel seems to be annoyed by, and I think I would also be annoyed.
For reference, here is the full policy on disputes of different kinds in the CVE system. Note that for “rejected,” that can only happen if the vulnerability is determined not to be a vulnerability by the CNA that issued it, or through escalation to a parent CNA through the standard escalation process (also documented on that page) if necessary.
In this case, part of the problem with the specific vulnerability is that it is incorrect behavior, and it can in highly unlikely circumstances be part of a denial of service, so it probably rightly is a vulnerability. At the very least, its “not a vulnerability”-ness is uncertain enough for it not to be tagged as “rejected.” Disputed, along with all the notices on the CVE record to explain the dispute, is probably the correct outcome.
As for whether things should be able to remain disputed: a policy that requires resolution to either firmly reject or not reject is probably not ideal in either direction. Sometimes things persist in a state where two parties (the vuln reporter and the CNA) do not agree, and it’s probably systemically best if the system makes that dispute clear so consumers of the data know to apply more scrutiny and make an informed choice themselves.
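As a sketch of what “apply more scrutiny” could mean on the consumer side (the record shape here is a simplified stand-in, not the real CVE JSON schema, though the 5.x record format does mark disputes with a tag if I remember right):

```python
# Simplified consumer-side handling of disputed records: don't silently
# drop them, but don't blindly page anyone on them either. The record
# structure is a stand-in, not the real CVE JSON schema.
from dataclasses import dataclass, field

@dataclass
class CveRecord:
    cve_id: str
    tags: list[str] = field(default_factory=list)

def triage(records: list[CveRecord]) -> tuple[list[CveRecord], list[CveRecord]]:
    auto_alert, human_review = [], []
    for rec in records:
        # Disputed records go to a human who reads the dispute notes.
        (human_review if "disputed" in rec.tags else auto_alert).append(rec)
    return auto_alert, human_review

alerts, review = triage([CveRecord("CVE-2020-19909", tags=["disputed"])])
print(len(alerts), len(review))  # 0 1
```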
If “sub organizations” are designed to adjudicate submissions, why do we end up with things like CVE-2020-19909 getting a 9.8?
https://daniel.haxx.se/blog/2023/08/26/cve-2020-19909-is-everything-that-is-wrong-with-cves/
In this case, that CVE was issued before Curl became a CNA. If you go to the CVE page for that one, you’ll see that the very first word in its description is “DISPUTED,” and that shortly after there is a note explaining why this vulnerability is likely not an issue. The “References” section’s first link is also to Daniel’s own blog post on why it’s not an issue.
The score you’re referring to is a CVSS score, which is assigned by the National Vulnerability Database, not by the CVE system. You can view the NVD entry for this vulnerability, which shows it was rescored to a 3.3 (considered “low” severity).
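For what it’s worth, that 3.3 drops straight out of the CVSS v3.1 base-score formula; the rescored vector was, if I remember right, along the lines of AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L:

```python
# CVSS v3.1 base score for a vector like AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L.
# The metric constants are the ones from the v3.1 specification; the vector
# itself is my recollection of the NVD rescore, so treat it as illustrative.
import math

def roundup(x: float) -> float:        # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85   # Local / Low / Low (scope unchanged) / None
c, i, a = 0.0, 0.0, 0.22                  # Confidentiality None / Integrity None / Availability Low

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 6.42 * iss                       # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)                               # 3.3
```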
In short: the CVE was a bug that does result in an overflow that causes observably-incorrect behavior and might involve a denial of service. The initial reviewers were from MITRE, as a CNA-of-last-resort, who issued the CVE number. The NVD then erred in rating it too highly, and Daniel was able to get it marked as disputed with a prominent note and link explaining the vulnerability, and to get NVD to rescore the vulnerability. As far as I can tell, the system basically worked here.
Malicious vendors really are problems. According to freddyb:
I share this concern, and thank you for writing about it.
I am highly skeptical about the ability of any hierarchical power structure to resolve it, because no matter whether you put the ultimate authority in industry, in government, or somewhere else, the incentive will always be to downplay structural problems to preserve the status quo. That’s exactly how everyone wound up being so under-invested in security in the first place.
If there’s going to be a lasting solution that threads the needle, it needs to start by acknowledging the reality that any actor in the system can potentially be acting out of greed rather than a desire for the public interest, and we need to all get comfortable applying our judgement about that stuff when we look at these things.
I don’t claim to know exactly how to enshrine that in process… alas.
My concern about the response of more projects becoming their own CNAs is that this makes it easier for projects to pretend vulnerability reports are not real.

I recall reading that Linux intends to not assign CVEs until after the problems have been fixed. Unfortunately I don’t remember where I read it, though.
Edit: not where I read it, but presumably the original source, a bit more nuanced. https://lore.kernel.org/lkml/2024021430-blanching-spotter-c7c8@gregkh/

No CVEs will be automatically assigned for unfixed security issues in the Linux kernel; assignment will only automatically happen after a fix is available and applied to a stable kernel tree, and it will be tracked that way by the git commit id of the original fix. If anyone wishes to have a CVE assigned before an issue is resolved with a commit, please contact the kernel CVE assignment team at cve@kernel.org to get an identifier assigned from their batch of reserved identifiers.
Linux kernel becoming a CNA is an absolute joke. Greg openly hates the CVE system and upstream has a long, continuous track record of hiding vulnerabilities from the public.
OTOH so does anyone doing vuln research, with the four stages of exploitable findings: anger (“can I sell to LEO.gov?”), guilt (“is there a bug bounty?”), denial (“it is definitely not the IDF buying”; spoiler: it was the IDF), and depression (“fine, I give up, CVE-CV it is”).
My understanding was that Linus’ historical stance was that there was never a great way to distinguish security bugs from ordinary bugfixes and so upstream wouldn’t issue security advisories at all. Is that what you’re talking about? I don’t think that’s great but I also think that’s different from actively hiding vulnerabilities.
No, that’s not really an accurate description of his opinion.
Ok. Do you have any links about that/upstream hiding vulnerabilities?
spender used to keep a list / post online when it happened, and afaik Greg and Linus have openly admitted to this practice before, but I don’t keep any of that handy. I’d suggest looking into spender’s comments; he wrote a blog post responding to a talk Greg gave on CVEs that, I suspect, very likely also links to many cases where vulns were held back.
This post seems to contradict your statement, at least in the short term:
https://infosec.exchange/@joshbressers/111982425288425140
That would be kind of hard to pretend once an exploit was demonstrated, no?
Easier than you might think. First, most people can’t really judge a PoC. Second, most security researchers aren’t high profile enough to attract attention to their report. If a report is buried and doesn’t get a CVE, you can hide it from customers quite effectively.
There was a good example of that some time ago where someone posted a Ruby script that effectively encrypted and decrypted some data with a known key. But it was obfuscated and confusing enough that people did have to rely on others to verify that it wasn’t a real OpenSSL issue.
Personally I don’t blame MITRE, I blame the sea of crappy security companies that treat CVE feeds and CVSS scores as gospel, then show automated alerts in their crappy dashboards. And then these same companies lobby CISA, NIST, etc. to recommend their crappy software.
There’s no money to be made in individually reading every CVE to determine if a software engineer should really go patch it now or not, so no one does that. Instead they see a CVE, and if there’s no patch, they go harass the maintainers for a patch because they’re being harassed by this crappy security software that prevents them from releasing code with “vulnerabilities”.
Small hope for the future: the Vulnerability Exploitability eXchange (VEX) concept is intended to enable consumers to report in a machine-readable way whether their own users are impacted by a vulnerability in a dependency. It’s intended to help cut down on automated spam like this when vulnerabilities aren’t actually exploitable.
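For a flavour of what that looks like, here is a minimal OpenVEX-style “not affected” statement (field names are from my recollection of that spec, so treat it as a sketch rather than a schema reference):

```python
# Minimal sketch of a VEX "not affected" statement in the OpenVEX style.
# The product identifier is made up; the point is that a downstream vendor
# can assert, machine-readably, that the vulnerable code is never reached
# in their product, so scanners can stop alarming on it.
import json

vex = {
    "statements": [
        {
            "vulnerability": {"name": "CVE-2020-19909"},
            "products": [{"@id": "pkg:generic/example-appliance@4.2"}],  # hypothetical product
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ]
}

print(json.dumps(vex, indent=2))
```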
I may be missing something, but I don’t get why Daniel dismisses this CVE completely. If I’m not mistaken, reading past the bounds of an array is UB in C, and UB propagates to the whole program. This is very theoretical, but it still seems possible that in debug mode, UB caused by this code will propagate to the whole library. Am I missing something?
Yes, CVEs are handed out for things that can be exploited, not theoretical program unsoundness or bugs.
I wish there were a better in-between. E.g. the Rust project hands out CVEs for bugs in the stdlib that, if misused, could break the language guarantees. (It happens to the best of us!) However, that is strictly speaking misuse of the CVE system, as these things are not exploitable; they only open up the possibility of building exploitable systems.
If we started to hand out CVEs every time someone reads past the bounds of an array in C or otherwise does something wrong in a programming language, CVE would be little more than a glorified central bug tracker.
Hopefully this helps provide some language to clarify things:
What you’re describing are software weaknesses, types of which are tracked and described in the Common Weakness Enumeration (CWE) system, also maintained by MITRE. CWE doesn’t track specific weaknesses disclosed in software the way that CVE tracks vulnerabilities, but it does enumerate the types of weaknesses. In general, I don’t think people have thought it to be worthwhile to track weaknesses with specific identifiers the way we do with vulnerabilities, since there are more weaknesses than vulnerabilities and the workflows around them are more pedestrian because they’re not exploitable.
Very interesting.
I get your point about having to limit what gets awarded a CVE. However, I can’t help but wonder if UB should hold a kind of special place.
For example, this curl CVE is also disputed by Daniel. I do not necessarily agree with him that this should not get a CVE, given how critical curl has become, but I do agree that the scope of this bug is well identified: its possible behaviors are exhaustively mapped, and its behavior is predictable.
UB, however, is different in the sense that it is not predictable. Yes, I can’t think of a compiler that would turn the out-of-bounds array read into a security hole, and yes, compilers are not actively malicious, but such a compiler may exist; one cannot rule out that a standards-abiding compiler would make security daemons fly out of the CISO’s nose. Yet, save for a bug in the compiler, the delay CVE above will never turn into more than what is described.
Consider the logical consequence of this: essentially every non-trivial C program has UB, simply because of the nature of C and the magnitude of what compiler authors consider to be UB. Heck, there’s a long history of UB in standard library implementations, so even if you are as absolutely perfectly careful as you can possibly be in writing your own code, you can still wind up with UB.
So if UB is going to be a special case that bypasses normal requirements like a need to show actual real-world exploitability, then the only logical conclusion is: every piece of software in C must have a perma-CVE issued for likely UB. And if we really go all the way here, it should be at least a 9/10 and probably a 10/10 severity (or higher!) since, after all, UB technically allows anything to happen. UB implies arbitrary remote code execution. UB implies a full breach of all confidential data. UB allows any outcome; therefore UB implies every outcome.
And I’m really not joking here – this just really is where you end up when you start thinking through the consequences of arguing that “UB should hold a kind of special place”.
My point is a different one: UB or not is not the right criterion; actual exploitable code is. That’s something I often observe in these discussions: industry processes are first and foremost pragmatic. If there’s a compiler that exploits UB here so hard that it will lead to problems, a CVE can be written saying that “curl compiled with this compiler, shipped over here” is vulnerable.
Similarly, compilers breaking legal and sound code is not unheard of, so issues stemming from that could be assigned CVEs as well. In this case, we’re talking about “we found an out-of-bounds read using a tool and did not investigate further”.
Don’t get me wrong: I dislike that it’s so easy to trigger UB in C by accident, but we need to deal with real issues. Otherwise, CVE becomes more noise than it already is at no gain.
On the other CVE: I think Daniel wouldn’t be so annoyed if it hadn’t also been assigned a 9.8/10, which is patently absurd, and the downstreams agree.
I think he explained it rather clearly:

The claimed issue identifies a bug in curl that
only existed in debug-builds (thus disqualified)
even in debug-builds, a bad access will at worst cause a crash, which is also what the assert itself does when triggered. Thus having the same end result. Not a vulnerability.
in most situations, the bad access will not cause any problems at all, even in debug-builds (because the accessed stack memory is readable)
He also explained that it was fixed in a later commit.
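The “debug-builds only” point is worth dwelling on: curl is C, where assert() compiles away entirely when NDEBUG is defined for release builds. The same idea, purely as an analogy in Python rather than the actual code:

```python
# Analogy only: Python's assert statements are stripped when run with -O,
# much like C's assert() disappears in NDEBUG/release builds. A problem
# that only shows up around such a check doesn't exist in the builds
# people actually ship, and when the check does fire, aborting loudly is
# the whole point of it.
def handle_value(value: bytes) -> None:
    assert len(value) < 1024, "unexpectedly long value"  # debug-only sanity check
    # ... normal processing ...

handle_value(b"x" * 2048)
# python script.py    -> AssertionError: a deliberate, loud crash
# python -O script.py -> the assert is stripped; the check (and the "issue") is gone
```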
Why should a CVE have a severity if the reporter and vendor can’t agree on it?
Or maybe if there’s a dispute, both severities are shown. Then it becomes up to the CVE consumer to determine whether the reporter or vendor is more trustworthy.
This also still provides some friction against security tools blindly reporting the severity and users of those tools blindly freaking out over high severities.
I struggle to understand why anyone ever thought the CVE process was a good idea. The logic seems to be something like this:
I am shocked and surprised at the result.
I believe the originating purpose of the CVE system was to enable consistent identification of vulnerabilities by different organizations and throughout vulnerability management processes.
In the past, it was at times unclear if two organizations or two people would be talking about the same vulnerability when adjudicating possible problems or planning remediations. The lack of a consistent identifier scheme was very confusing and made coordination more difficult. CVE was invented to help make coordination easier.
Disclosure: I work at MITRE. I do not work on the CVE team. I have no insight into the deliberations they make nor any knowledge of this Curl vulnerability’s review. I am not speaking for MITRE or the CVE program.
Thanks, that does explain the rationale behind assigning unique identifiers to security vulnerabilities. However, it is still amazing that anyone thought this would work with no adjudication or safeguards in the most adversarial and devious part of the industry.
If they were treated just as identifiers, and not as flashing warning lights, it would work fine.
My guess is that this is a “we’ll cross that bridge when we get there” situation.
And anecdotally, it does seem to have worked decently well for a long time. AFAICT, and as @david_chisnall noted in the root of this thread, only in the past couple years has malicious CVE submission become a significant problem (although it’s also possible that I just wasn’t paying enough attention/in the right place/etc. before a few years ago).
I’m not sure when malicious CVE submission became a significant problem (or even if it really is one now) but trivial, irrelevant, and inaccurate CVE submissions have been a problem for many years.
I’m not good at time. I might mean ‘ten’ when I say ‘couple of years’.
Hm, there are adjudications and safeguards. People can disagree about the effectiveness of those safeguards, but they do exist.
That’s interesting: I understood the point of the article to be that these things don’t exist. From the MITRE CVE FAQ:

When one party disagrees with another party’s assertion that a particular issue is a vulnerability, a CVE Record assigned to that issue may be designated with a “DISPUTED” tag. In these cases, the CVE Program is making no determination as to which party is correct.
If “the CVE Program is making no determination as to which party is correct”, doesn’t that mean there’s no adjudication?
Ah, perhaps I misunderstood what you meant by “adjudication.” As I understand it, the CVE procedures generally don’t “adjudicate” in disputes between software maintainers and vulnerability reporters, because either side could be acting in bad faith so there’s a strong preference toward keeping information public but marked as disputed.
However, CNAs have the authority to reject submissions they believe to be invalid. In this case, that didn’t happen because Curl was not yet a CNA, which they now are.
Right, the whole problem is that of course either side could be, and often is, acting in bad faith. In the absence of independent adjudication, this is never going to work.
Some projects are now becoming CNAs themselves in order to combat bad-faith CVE reporters, but that does not deal with the problem of bad-faith projects.
I appreciate that MITRE don’t have the resources to adjudicate… but that’s what seems so crazy to me. I also appreciate that the CVE Project was originally intended just to be a system for providing identifiers, to aid communication about vulnerabilities, rather than a database of proven vulnerabilities… but did people really not predict what would happen to it? I don’t have a horse in this race - I apologize if I’ve come across as overly critical - but I just find it mind-boggling that a system dependent on the good faith of all concerned was constructed in the realm of hackers, crackers and people doing it for the lulz.
Can any organization become a CNA or is there a vetting process?
You can view the requirements and process for becoming a CNA on the CVE website.
Basically, you fill out an application disclosing:
Organization name
Country
Business sector
Contact details for an organization representative
Communication preferences
Estimated annual number of CVE IDs needed
Expected scope of coverage for vulnerability reports you’ll handle
Link to organization’s vulnerability disclosure policy
Link to organization’s security advisory policy
Availability for a one-hour guidance and orientation meeting
The organization has to commit to having a public vulnerability disclosure policy, a place to disclose vulnerabilities, and to abide by the CVE terms of use (that basically ensure open licensing of published vulnerability information so it can be shared with MITRE and other CNAs).
You then go through orientation, where you view an orientation video, receive a briefing on program policies, and go through some practice examples of reviewing and handling submitted CVE requests.
Thanks a lot for the explanation!
I wonder if there’s a risk that an organization tries to become a CNA specifically to “bury” uncomfortable CVEs.
In short, yes. I believe the idea is that if this happens it can be considered a violation of the CVE program policies and should be reported to the CVE program for review.
Why does he care so much about not marking it Disputed? I don’t get it. Would it not work itself out naturally?
I mean, what if Daniel were wrong, if he just got mixed up? If there’s no penalty for squashing / rejecting a CVE without dispute [1], forcing it closed, wouldn’t that invite abuse?
The issue itself does seem like a “fake blood” scenario where it looks bad and attracts attention (as Daniel seems to acknowledge), but is benign. You might expect people to hit the trap, rather than be surprised and upset when it happens.
[1] “unwittingly”, of course, of course
If he was the kind of person who didn’t care so much, maybe he wouldn’t have maintained curl for more than two decades and it might have died along the way somewhere?