The CVE stigma: we need a new security culture

If you’ve ever tried to report a security vulnerability to an open source project, you know the feeling. You find something real, you write a detailed report, you follow responsible disclosure, and then… silence. Or worse, pushback. The maintainer tells you it’s not a real issue. The ticket gets closed. Sometimes you get a reply that feels almost hostile, as if you just insulted someone’s work instead of trying to help.

I’ve been involved in open source for a long time and I’ve seen this pattern play out more times than I’d like to admit. There’s a strange dynamic around CVEs in our industry: reporting a vulnerability should be a normal, healthy part of software development, but instead it often feels like an accusation. And for the maintainer on the receiving end, having a CVE allocated against their project can feel like a mark of shame.

This needs to change. And it needs to change fast, because the world around us is changing faster than our culture can keep up.

The reporting nightmare

Let’s talk about what it actually looks like to report a vulnerability to a project. The experience varies wildly depending on the project, but the friction is surprisingly common even in well-established ones.

First you need to figure out if the project even has a security policy. Some projects have a SECURITY.md file, some have a dedicated email, some have nothing at all. You might end up opening a public GitHub issue because there’s no private channel, which kind of defeats the purpose of responsible disclosure.
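Lowering that first barrier costs almost nothing. As a rough sketch (the contact address, response window, and support policy below are placeholders I made up, not any standard), a minimal SECURITY.md could be as short as this:

```
# Security Policy

## Reporting a vulnerability

Please do NOT open a public issue for security problems.

- Email security@example.org, or use GitHub's private vulnerability
  reporting ("Report a vulnerability" under the Security tab) if the
  project has enabled it.
- Include the affected version, reproduction steps, and your assessment
  of the impact.
- We aim to acknowledge reports within 14 days. We are volunteers, so
  please allow extra time for a full fix.

## Supported versions

Only the latest minor release receives security fixes.
```

Even a file this small tells a reporter where to go and tells downstream users what to expect.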

Then comes the triage. Many maintainers, especially of smaller projects, are volunteers. They have day jobs. They maintain software because they care about it, not because they’re paid to handle security reports. So your report might sit for weeks. If the maintainer doesn’t agree with your assessment, you’re stuck in a back-and-forth that can drag on for months. I’ve seen cases where a perfectly valid vulnerability was dismissed simply because the maintainer didn’t consider the attack scenario realistic enough.

And the CVE system itself doesn’t help. It was designed in a world where software came from vendors with support contracts and SLAs. When you apply it to open source, the assumptions break down. There’s no commercial relationship, no obligation to respond, no dedicated security team. But the expectations from downstream users and scanners remain the same: if there’s a CVE, it must be fixed, and it must be fixed now.

The result is a system that creates pressure without providing resources. Maintainers feel attacked, reporters feel ignored, and the actual security of the software suffers.

The shame problem

Here’s something that doesn’t get discussed enough: many projects treat a CVE as a failure rather than a finding. There’s a cultural undercurrent that says having vulnerabilities in your code means you wrote bad code, that it reflects poorly on you as a developer.

This is wrong, and it’s counterproductive.

Every non-trivial piece of software has bugs. Some of those bugs have security implications. This is not a moral failure; it’s a statistical certainty. The Linux kernel gets CVEs. OpenSSL gets CVEs. The JDK gets CVEs. These are some of the most reviewed, most tested codebases on the planet. If they have vulnerabilities, your project will too.

But the stigma persists. I’ve seen maintainers downplay severity scores, argue that a bug isn’t exploitable when it clearly is, or simply refuse to acknowledge the issue. In some cases the motivation is understandable: a high-severity CVE can trigger automated alerts across thousands of organizations, creating a firestorm of support requests that a volunteer maintainer is simply not equipped to handle. But ignoring the problem doesn’t make it go away.

The Daniel Stenberg situation with curl is a good example of the other side of this coin. He has been vocal about the flood of bogus CVE reports, many of them AI-generated, that waste maintainer time. He estimates that about 20% of all security submissions to curl are now AI-generated noise, while the rate of genuine vulnerabilities has dropped to around 5%: roughly four AI-generated reports for every real finding. Each bogus report consumes hours of expert time to disprove. This is a real problem, and it’s getting worse.

But the solution isn’t to build higher walls around vulnerability reporting. The solution is to build better processes and, more importantly, a better culture.

AI is changing the game whether we like it or not

Here’s where things get interesting and, honestly, a bit uncomfortable for our industry.

In the last couple of years AI systems have gone from theoretical vulnerability discovery to actually finding real zero-day vulnerabilities in production software. This isn’t hype. The evidence is concrete and growing.

Google’s Big Sleep project, a collaboration between Project Zero and DeepMind, found an exploitable stack buffer underflow in SQLite in late 2024. The Project Zero team noted that human researchers couldn’t rediscover the same vulnerability using traditional fuzzing even after 150 CPU hours of testing. Since then, Big Sleep has found over 20 vulnerabilities across projects like FFmpeg and ImageMagick, each discovered and reproduced by the AI agent without human intervention.

Anthropic published findings showing Claude discovering previously unknown vulnerabilities, including in Ghostscript, where the model took a creative approach by reading through the Git commit history after more traditional analysis methods failed.

In late 2025 and early 2026, AI systems autonomously discovered zero-day vulnerabilities in Node.js and React. Not toy projects. Not contrived examples. Two of the most widely deployed pieces of JavaScript infrastructure in the world. The vulnerabilities were real, the exploits worked, and patches were necessary.

Researchers from the University of Illinois showed that teams of LLM agents working together could exploit zero-day vulnerabilities with meaningful success rates. Trend Micro’s AESIR platform has contributed to the discovery of 21 CVEs across NVIDIA, Tencent, and MLflow since mid-2025.

The trajectory is clear: AI-discovered CVEs jumped from around 300 in 2023 to over 450 in 2024, and exceeded 1,000 in 2025. This is not slowing down.

The FFmpeg wake-up call

The FFmpeg controversy in late 2025 perfectly illustrates the collision between old culture and new technology. Google’s Big Sleep found 13 vulnerabilities in FFmpeg alone. The volunteer maintainers were understandably frustrated: a trillion-dollar company was using AI to find bugs in their code and then dropping reports on them with a 90-day disclosure countdown, without providing patches or funding.

FFmpeg’s maintainers called some of the findings “CVE slop” and asked Google to either fund the project or stop burdening volunteers with security reports. One maintainer described a bug in a LucasArts Smush codec, an issue affecting only early versions of a 1990s game, that had been flagged as a “medium-impact” security vulnerability. In a related episode, Nick Wellnhofer resigned as maintainer of libxml2, citing the unsustainable workload of addressing security reports without compensation.

The maintainers have a point. But the uncomfortable truth is that those vulnerabilities still exist in the code. The fact that finding them is now cheap and fast doesn’t make them less real.

We need a new culture

So where does this leave us? I think we need to rethink our relationship with security vulnerabilities from the ground up. Here’s what I believe needs to change.

Vulnerabilities are not failures

We need to stop treating CVEs as marks of shame. A vulnerability report should be treated like a bug report: a normal part of the software lifecycle. The projects that handle CVEs well, with transparency, clear communication, and timely fixes, should be seen as more trustworthy, not less.

The CVE system needs reform

The current system wasn’t designed for the open source world. We need better processes for triaging reports, especially now that AI tools can generate them at scale. The OpenSSF and OWASP are working on this, but progress is slow. We need clear guidelines for what constitutes a valid report, better tooling for maintainers to handle volume, and a way to distinguish between genuine findings and noise.
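To make “better tooling” slightly more concrete, here is a minimal sketch of the kind of pre-triage filter a maintainer could run over incoming reports before spending expert time on them. Everything in it is an assumption on my part: the report fields, heuristics, and weights are illustrative, not an OpenSSF or OWASP standard.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A vulnerability report as it might arrive (all fields are hypothetical)."""
    has_poc: bool              # includes a proof of concept or reproduction steps
    affected_version: str      # version claimed by the reporter, "" if missing
    names_code_location: bool  # points at a concrete file or function, not just "your parser"
    reporter_history: int      # previous valid reports from this reporter, 0 if unknown

def triage_score(report: Report) -> int:
    """Crude scoring: higher means 'a human should look at this first'."""
    score = 0
    if report.has_poc:
        score += 3             # reproducible reports are worth expert time
    if report.affected_version:
        score += 1
    if report.names_code_location:
        score += 2             # vague, code-free reports correlate with generated noise
    score += min(report.reporter_history, 3)
    return score

# A vague report with no PoC sinks to the bottom of the queue,
# while a reproducible, specific one floats to the top.
noise = Report(has_poc=False, affected_version="", names_code_location=False, reporter_history=0)
solid = Report(has_poc=True, affected_version="8.4.0", names_code_location=True, reporter_history=2)
print(triage_score(noise), triage_score(solid))  # 0 8
```

A score like this shouldn’t reject anything automatically; its only job is to decide what a human looks at first.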

Funding must follow expectations

If the industry expects open source maintainers to handle security reports with the same rigor as commercial vendors, then funding needs to follow. You can’t demand enterprise-grade security response from volunteers working in their spare time. Organizations that depend on open source software need to invest in the projects they rely on. This means direct funding, dedicated security resources, or at a minimum, contributing patches alongside vulnerability reports.

Security education needs an update

Most developers learn about security as a set of rules: don’t use eval, sanitize your inputs, use parameterized queries. This is necessary but not sufficient. We need to teach developers that vulnerabilities are inevitable, that finding them is good, and that the process of fixing them is a skill worth developing. Security should be part of the development culture, not an external audit that happens once a year.
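The rules themselves are easy to demonstrate. As a minimal sketch using Python’s built-in sqlite3 module, the difference between an injectable query and a parameterized one is a single line:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL string, so the OR clause
# becomes part of the query and matches every row.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Parameterized: the driver passes the value separately, so it is treated
# as data, not SQL, and matches nothing.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(rows_bad), len(rows_good))  # 1 0
```

Teaching this takes five minutes. Teaching someone to expect, find, and calmly fix the vulnerabilities that rules like this don’t cover is the harder, more valuable lesson.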

Prepare for the AI flood

AI-powered vulnerability discovery is here and it’s only going to accelerate. Bruce Schneier has noted that the latest models can analyze substantial codebases and produce candidate vulnerabilities in hours or minutes, fundamentally altering the economics of vulnerability discovery. Multi-agent systems where specialized LLMs collaborate on code analysis, exploit development, and verification are outperforming single-model approaches.

This means every project, regardless of size, will face an increasing volume of vulnerability reports. We need to build the infrastructure, both technical and cultural, to handle this. That includes better automated triage, clearer severity standards, and a shared understanding that a rising CVE count doesn’t mean software is getting worse. It means we’re getting better at finding problems.

Conclusion

The security landscape is shifting under our feet. AI tools are finding real vulnerabilities in real software at a pace that human researchers can’t match. Our vulnerability reporting and handling processes, built for a slower era, are showing cracks everywhere.

The answer isn’t to dismiss the findings or shoot the messenger. It’s to grow up as an industry. Treat vulnerabilities as the normal engineering challenge they are. Fund the maintainers who keep critical infrastructure running. Reform the systems that create perverse incentives. And prepare for a world where the volume of discovered vulnerabilities will only increase.

We’ve been treating CVEs as something to be ashamed of. It’s time to start treating them as something to be proud of fixing.
