Modern Vulnerability Management
Exploring The Three Layers of Failure
The Limits of Legacy Vulnerability Management
Modern vulnerability management is struggling to keep up with the rapid evolution of the software it aims to protect. It’s not a single tool, team, or workflow that’s failing, but the entire system that allows open source vulnerabilities to exist. The growing integration of AI into software development is only exacerbating this challenge.
Despite major investment in scanning tools, disclosure pipelines, and security automation, organizations continue to operate with blind spots large enough for systemic risk to take root. Our analysis shows this failure compounds across three breakpoints in the software ecosystem, each breaking in its own way, and each amplifying the others.
The Data Layer
The Data Layer consists of various elements within the global vulnerability intelligence system: CVE (the Common Vulnerabilities and Exposures program), NVD (the National Vulnerability Database), and the advisory pipelines around them. Elements in this layer are increasingly incomplete, inconsistent, and slow. Coverage gaps, inaccurate version data, and long scoring delays distort how risk is understood and prioritized by both humans and AI.
The Consumption Layer
The Consumption Layer covers how organizations import open source software. Even when accurate data and patches exist, organizations continue to download and deploy vulnerable components. Dependency pinning, sprawling transitive graphs, outdated CI images, and ungoverned AI-generated component selection all reinforce the reuse of insecure versions. AI tools can only make recommendations that are as up-to-date as their training data. Much of today’s risk arises not from new exploits but from persistently poor consumption habits.
The Ecosystem Layer
The Ecosystem Layer covers the many decisions that long-lived projects with open source dependencies must make over time. A growing share of software now depends on unsupported or end-of-life (EOL) releases. These components receive no patches, making vulnerabilities permanent. Legacy frameworks, abandoned libraries, and orphaned versions accumulate as long-term technical debt, leaving organizations dependent on software that cannot be secured through traditional remediation.
Accumulated Vulnerability Debt
- Data Layer: Incomplete, inaccurate, and delayed public vulnerability intelligence
- Consumption Layer: Developers, AI, and pipelines keep pulling vulnerable components
- Ecosystem Layer: Dependence on EOL and abandoned components locks in permanent risk

This chapter quantifies where the system breaks down, and outlines what a modern vulnerability management model must look like in a world where software moves far faster than the legacy processes designed to safeguard it.
The Data Layer Is Breaking Down: Where Vulnerability Intelligence Goes Wrong
Modern threat and vulnerability management relies on the global intelligence ecosystem, anchored by the CVE program, NVD enrichment data, and upstream advisory pipelines that feed them. But that foundation is no longer consistently reliable. Coverage gaps, inconsistent metadata, delayed scoring, and missing ecosystem context now distort the very signals organizations depend on to assess and prioritize risk. When the underlying data is incomplete or wrong, every downstream decision, whether by humans, scanners, or AI, starts from a flawed premise.
Sonatype Security Research analyzed more than 1,700 open source CVEs throughout 2025 to understand where the gaps lie, and how they are impacting software development and security teams.
Coverage Collapse
The first warning sign is the growing gap in basic CVE coverage. Nearly 65% of open source CVEs lack an NVD-assigned CVSS score, leaving most open source vulnerabilities without an official severity rating. In practice, that means only about 600 of the open source CVEs analyzed last year could be effectively triaged on NVD data alone. When Sonatype assigned scores to these unscored CVEs, 46% turned out to be High or Critical, meaning many serious vulnerabilities enter the ecosystem without any meaningful prioritization signal.
This problem is accelerating. In just five years, the global CVE count has doubled, yet the number of unscored CVEs has increased 37x, overwhelming a system built for manual processing and slower software cycles. As volume grows, the gap widens — leaving defenders without the baseline CVE data they rely on to triage risk effectively.
FIGURE 3.1 NVD-Assigned Severity of 2025 Open Source CVEs
Source: Sonatype
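To make the coverage gap concrete, the presence or absence of an NVD-assigned score can be checked directly against the public NVD 2.0 API. The sketch below is a minimal example; the endpoint and response fields follow NVD's published API and are assumptions here (an API key is recommended for anything beyond ad hoc lookups).

```python
# Minimal sketch: check whether a CVE has an NVD-assigned CVSS score.
# Assumes the public NVD 2.0 REST API; endpoint and field names may differ
# from what your tooling uses, and an API key is recommended for real use.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_cvss_score(cve_id: str) -> float | None:
    """Return the first CVSS v3.x base score NVD reports, or None if unscored."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0]["cve"].get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30"):
        if metrics.get(key):
            return metrics[key][0]["cvssData"]["baseScore"]
    return None  # CVE exists but has no NVD-assigned CVSS score

if __name__ == "__main__":
    print(nvd_cvss_score("CVE-2021-44228"))  # Log4Shell: expected 10.0 once analyzed
```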
Public CVE Data Accuracy Failures
Even when scores exist, they’re inconsistent enough to drive different outcomes depending on which feed you trust. Compared with Sonatype’s scoring and analysis, exact CVSS score matches are rare (4.4%), and severity categories align only 55.7% of the time; roughly 44% of CVEs land in a different severity bucket in NVD than in Sonatype’s data. The drift usually runs upward in NVD: 61.3% of NVD scores are higher than Sonatype’s, compared with 34.3% that are lower.
FIGURE 3.2 Severity Score and Category Adjustments
Source: Sonatype
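The "different bucket" comparison can be reproduced with the standard CVSS v3.x severity bands. The sketch below maps base scores to categories and counts exact-score and same-category agreement between two feeds; the sample score pairs are illustrative, not drawn from the dataset.

```python
# Sketch: compare severity categories assigned by two vulnerability feeds.
# Uses the standard CVSS v3.x severity bands; the example scores are illustrative.
def severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

# Hypothetical (feed_a, feed_b) base scores for the same CVEs.
pairs = [(9.8, 7.5), (7.5, 7.5), (6.1, 8.1), (5.3, 5.0)]

exact = sum(1 for a, b in pairs if a == b)
same_bucket = sum(1 for a, b in pairs if severity(a) == severity(b))
print(f"exact score matches: {exact}/{len(pairs)}")
print(f"same severity category: {same_bucket}/{len(pairs)}")
```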
Sonatype identified 20,362 false positives (packages incorrectly marked as vulnerable), which generate noise in vulnerability management workflows and waste developer time, along with 167,286 false negatives, meaning exploitable components went unflagged entirely. The result is a vulnerability intelligence ecosystem that misleads both developers and security teams, forcing organizations to spend time on issues that don’t exist while overlooking those that do. Inaccurate data also biases AI-driven tools, which rely on this information to inform dependency selection, upgrade paths, and remediation recommendations.
Delays That Break Defenses
In 2025, the NVD’s median time-to-score for open source CVEs was 41 days, with some taking up to a year. Meanwhile, exploit proof-of-concepts and maintainer patches frequently appear within hours. This growing lag renders “official” vulnerability information increasingly stale. By the time a CVE receives a severity score, the vulnerability may already be exploited in the wild, patched upstream, or both. Organizations relying exclusively on NVD data become effectively blind during the period when fast action matters most.
FIGURE 3.3 NVD Time-to-Analysis of 2025 Open Source CVEs
Source: Sonatype
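Time-to-score can be measured for any CVE sample by diffing publication and analysis dates. The sketch below uses illustrative records and assumes you have already extracted a published date and an NVD analysis date per CVE; field names vary by feed.

```python
# Sketch: compute the median time-to-score (days) from CVE publication to
# NVD analysis. The records below are illustrative; in practice the dates
# come from your CVE/NVD export, and field names depend on the feed.
from datetime import date
from statistics import median

records = [
    {"cve": "CVE-2025-0001", "published": date(2025, 1, 10), "analyzed": date(2025, 2, 24)},
    {"cve": "CVE-2025-0002", "published": date(2025, 3, 2),  "analyzed": date(2025, 3, 30)},
    {"cve": "CVE-2025-0003", "published": date(2025, 5, 15), "analyzed": date(2025, 9, 1)},
]

lags = [(r["analyzed"] - r["published"]).days for r in records]
print(f"median time-to-score: {median(lags)} days")   # here: 45 days
print(f"worst case: {max(lags)} days")
```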
The CVE Crisis analysis highlights how even minor metadata inaccuracies create outsized real-world consequences:
- Incorrect vulnerable version ranges generated thousands of false positives, overwhelming downstream scanners.
- Wrong component identifiers resulted in silent false negatives — packages with real vulnerabilities passed security checks unflagged.
- EOL versions omitted from advisories gave organizations a false sense of security, masking risks that upstream maintainers no longer track.
These cases reveal a systemic issue: the CVE system excels at naming vulnerabilities but struggles to describe them reliably enough for automated decision-making, as the sketch below illustrates.
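A minimal sketch of how a scanner-style check applies an advisory's version range, and how a one-version error in either bound flips the verdict. The component, ranges, and versions here are invented for illustration; real scanners use richer matching logic.

```python
# Sketch: how an advisory's vulnerable-version range drives scanner verdicts,
# and how a one-version error in the range flips the outcome. The component
# name and ranges here are invented for illustration.
from packaging.version import Version

def is_flagged(installed: str, introduced: str, fixed: str) -> bool:
    """Flag if the installed version falls in [introduced, fixed)."""
    v = Version(installed)
    return Version(introduced) <= v < Version(fixed)

installed = "1.26.0"

# Correct advisory: vulnerable from 1.21 up to (not including) the 1.26 fix.
print(is_flagged(installed, "1.21", "1.26"))   # False: already on the fixed line

# Advisory with a wrong upper bound: fix recorded as 1.27 instead of 1.26.
print(is_flagged(installed, "1.21", "1.27"))   # True: a false positive

# Advisory with a wrong lower bound: misses builds still on 1.20.x.
print(is_flagged("1.20.5", "1.21", "1.26"))    # False: a false negative if 1.20.x is affected
```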
AI as a Force Multiplier for Bad Data
AI-assisted development tools — increasingly embedded across coding, build, and remediation workflows — amplify the weaknesses of the data layer. Large language models are trained on public CVE and NVD data and treat it as authoritative even when it is incomplete, outdated, or incorrect. The impact is compounded when an older model is used. As a result, AI does not fix bad data; it distributes it faster, a dynamic examined more closely in the From Guesswork to Governance chapter.
The data layer is the foundation of threat and vulnerability management, yet today it is the least reliable part of the system. Incomplete coverage, inaccurate metadata, long scoring delays, AI amplification, and shadow download blind spots collectively undermine the ability of organizations to recognize and respond to real risk. When the data layer fails, every subsequent decision — what to fix, when to fix it, and how to prioritize it — begins from the wrong premise.
Poor Consumption Patterns Sustain Avoidable Risk
Even when vulnerability data is accurate and patches are readily available, risk persists because of how organizations actually consume open source. Dependency pinning, transitive pull-ins, outdated build images, and AI-generated manifests all keep vulnerable components in circulation long after fixes exist. In practice, a large share of modern vulnerability exposure is not created by new flaws — it is sustained by repeated reuse of old ones.
Log4Shell: The Case That Should Have Changed Everything — But Didn’t
Log4Shell was expected to be the turning point: the moment the industry collectively learned to upgrade quickly, retire vulnerable components, and modernize dependency practices. Four years later, the data tells a different story: the remediation path is well-understood and non-breaking, the open source vulnerability is universally recognized, and yet vulnerable versions continue to circulate at scale.
Regional patterns make the problem even clearer. While some markets have driven vulnerable Log4j usage down to single digits, others continue to pull 20–45% vulnerable versions, suggesting deeply uneven adoption of safe releases and persistent reliance on outdated build templates, pinned versions, or ungoverned transitive dependencies.
Log4Shell should have eliminated any doubt about the cost of running outdated open source. Instead, it revealed how ingrained consumption habits can be — and how long vulnerable code can persist, even when every incentive exists to move away from it.
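As a purely illustrative check, vulnerable log4j-core coordinates can be flagged from `mvn dependency:tree` output. The sketch below assumes 2.17.1 as the safe floor covering Log4Shell and its follow-on CVEs (a policy choice, not something this report prescribes) and uses deliberately simplified parsing.

```python
# Sketch: flag vulnerable log4j-core versions in `mvn dependency:tree` output.
# Assumes 2.17.1 as the floor covering Log4Shell and its follow-on CVEs; adjust
# to your own policy. Parsing here is deliberately simplified.
import re
import sys
from packaging.version import Version

SAFE_FLOOR = Version("2.17.1")
COORD = re.compile(r"org\.apache\.logging\.log4j:log4j-core:jar:([\w.\-]+)")

def flag_log4j(tree_text: str) -> list[str]:
    findings = []
    for match in COORD.finditer(tree_text):
        version = match.group(1)
        if Version(version) < SAFE_FLOOR:
            findings.append(version)
    return findings

if __name__ == "__main__":
    tree = sys.stdin.read()  # pipe in the output of: mvn dependency:tree
    for v in flag_log4j(tree):
        print(f"vulnerable log4j-core {v} found in dependency tree")
```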
The Broader Pattern: Java’s Top Unnecessary Risks
Log4Shell remains the most visible example of “avoidable” vulnerability exposure, but it is not the dominant driver. Taking a broader look at the Java ecosystem, Sonatype analyzed the most frequently downloaded components that contained a vulnerability for which a fix already existed. The same consumption pattern repeats across the ecosystem: the vast majority of vulnerable components being downloaded already have a safer version available. The 10th Annual State of the Software Supply Chain Report found that roughly 95% of vulnerable component downloads had a fix on the shelf, while only ~0.5% represented true edge cases with no upstream path forward.
The most concerning signal is how frequently well-known vulnerable releases persist years after fixes are released. The Java ecosystem provides clear examples: widely used libraries with long-available patches still see substantial (and in some cases overwhelming) consumption of vulnerable versions. This is “unnecessary risk” in its purest form: risk that organizations continue to import into new builds even when safer versions are readily available.
These packages share three characteristics: (1) at least one disclosed vulnerability, (2) a published fix, and (3) low adoption of the fixed line. The reasons are rarely dramatic. They’re structural: pinned versions copied across services, transitive dependency blind spots, upgrade friction (especially across major versions), and selection signals that reward familiarity over maintainability.
Sonatype took a closer look at four vulnerable component versions with released fixes that, combined, represent a total of nearly 1.8 billion avoidable vulnerable downloads in 2025.
Four Vulnerable Component Versions with Released Fixes
| Component | Vulnerable Version(s) Still Widely Consumed | Fixed Version Available | % of 2025 Avoidable Vulnerable Downloads | Representative CVE(s) | Why It Persists (Consumption Drivers) |
|---|---|---|---|---|---|
| commons-compress | 1.21 | 1.26 (Feb 2024) | 46.32% | CVE-2012-2098, CVE-2024-26308, CVE-2020-1945, CVE-2024-25710, CVE-2021-36374 | Deeply embedded in build/packaging workflows; low “visibility” dependency; upgrades deferred unless forced. |
| commons-lang | 2.6 (legacy major line) | 3.18.0 (Jul 2025) | 99.88% | CVE-2025-48924 | Major-version migration is non-trivial (2.x → 3.x); older enterprise stacks remain pinned to legacy APIs. |
| snappy | 0.4 | 0.5 (May 2024) | 99.58% | CVE-2024-36124 | Common in distributed platforms (e.g., Hadoop/Spark ecosystems) where low-level compression deps are pinned for stability/performance. |
| jdom2 | 2.0.6 | 2.0.6.1 (Dec 2021) | 57.73% | CVE-2021-33813 | Widely reused XML utility; upgrade inertia and “if it isn’t broken” maintenance norms keep vulnerable lines circulating. |
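As an illustration only, the table’s “fix on the shelf” data can be turned into a simple intake check: compare resolved coordinates against a map of known-vulnerable lines and their fixed versions. The map below merely restates the table; a real policy would be driven by curated vulnerability data rather than a hardcoded dictionary.

```python
# Sketch: turn the table above into a simple intake check. The map restates the
# table's vulnerable lines and fixed versions; a real policy would cover far
# more components and use a curated data source rather than a hardcoded dict.
KNOWN_BAD = {
    "commons-compress": {"vulnerable": {"1.21"},  "upgrade_to": "1.26"},
    "commons-lang":     {"vulnerable": {"2.6"},   "upgrade_to": "3.18.0"},
    "snappy":           {"vulnerable": {"0.4"},   "upgrade_to": "0.5"},
    "jdom2":            {"vulnerable": {"2.0.6"}, "upgrade_to": "2.0.6.1"},
}

def review(resolved: dict[str, str]) -> list[str]:
    """Return upgrade advice for any resolved dependency on a known-vulnerable line."""
    advice = []
    for name, version in resolved.items():
        entry = KNOWN_BAD.get(name)
        if entry and version in entry["vulnerable"]:
            advice.append(f"{name} {version} -> upgrade to {entry['upgrade_to']}")
    return advice

# Example: a resolved dependency set that still pins two of the table's versions.
print(review({"commons-compress": "1.21", "jdom2": "2.0.6", "guava": "33.0.0-jre"}))
```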
Why Teams Keep Downloading Open Source Vulnerabilities
If patches exist and the risks are well-known, why do vulnerable components continue to flow into modern software at such scale? The answer lies not in malicious intent, but in the quiet, structural habits of software development. Collectively, these patterns mean vulnerable components remain in circulation, not because teams are unaware of the risk, but because the system makes unsafe choices easier than safe ones.
The System Makes Unsafe Choices Easier Than Safe Ones
SET-AND-FORGET DEPENDENCIES
A version gets pinned once and then copied forward across services for years.
THE RESULT: Changing dependencies feels risky; leaving them alone feels "safe."

TRANSITIVE DEPENDENCIES + UNCLEAR OWNERSHIP
Vulnerabilities arrive via the dependency tree, not direct installs.
THE RESULT: No single team feels accountable for buried upgrades.

TOOLING THAT SHRIEKS BUT DOESN’T STEER
Scanners generate long CVE lists without clear prioritization or safe upgrade paths.
THE RESULT: Teams hit alert fatigue and avoid “break the build” upgrades.

INCENTIVES FAVOR FEATURES OVER HYGIENE
Maintenance work is deferred unless there’s a fire drill.
THE RESULT: Delivery is rewarded; dependency upkeep is invisible.

AI Exacerbates Vulnerable Consumption
AI-assisted development tools are increasingly embedded across modern software workflows — from code generation and dependency selection to build configuration and remediation guidance. While these tools can accelerate delivery, they also inherit and amplify the same consumption patterns that already sustain vulnerability risk. AI amplifies vulnerable consumption in several predictable ways:
- AI suggests “popular” (historically common) versions, not secure ones.
- AI generates manifests with outdated/vulnerable components.
- Training data lags, so even after fixes exist, AI keeps suggesting vulnerable versions.
- Without governance, AI increases component sprawl.
AI does not introduce new vulnerability classes, but it accelerates existing consumption behavior. When unsafe versions are already easier to consume than safe ones, AI makes those unsafe choices faster, more repeatable, and harder to unwind. Most vulnerability risk is no longer a vulnerability discovery problem. It’s a consumption behavior problem, and AI scales that behavior by default.
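One way to put a governance gate between AI-generated manifests and the build is to screen requested pins against an internal catalog of approved versions before anything is installed. The sketch below assumes a simple requirements-style manifest and an illustrative catalog; in practice the catalog would come from a repository manager or policy engine.

```python
# Sketch: screen an AI-generated requirements-style manifest against an internal
# catalog of approved versions before install. Catalog contents and the manifest
# format are illustrative; real catalogs would come from a repository manager.
APPROVED = {
    "requests": {"2.32.3"},
    "flask": {"3.0.3"},
    "urllib3": {"2.2.2"},
}

def screen(manifest_lines: list[str]) -> list[str]:
    """Return violations for pins that are not in the approved catalog."""
    violations = []
    for line in manifest_lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = (part.strip() for part in line.split("==", 1))
        if version not in APPROVED.get(name, set()):
            violations.append(f"{name}=={version} is not on the approved catalog")
    return violations

ai_generated = ["requests==2.19.1", "flask==3.0.3", "leftpadx==1.0.0"]
for problem in screen(ai_generated):
    print(problem)
```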
When the Ecosystem Stops Maintaining Software and Vulnerability Management Breaks Down
Even with accurate vulnerability intelligence and disciplined dependency practices, some risks cannot be mitigated because the software itself is no longer maintained. A growing share of open source components now lives on EOL or abandoned release lines, where no patches will ever be issued and new open source vulnerabilities may never be disclosed. These dependencies create permanent exposure: organizations inherit flaws that cannot be remediated upstream, locking long-term risk into the foundation of their software.
To analyze how EOL dependencies turn open source vulnerabilities into persistent risk, we partnered with HeroDevs to examine the security impact of EOL software across modern software supply chains.
EOL Software is Not an Edge Case
EOL software is often discussed as something a mature program will eventually “clean up.” But data and analysis from HeroDevs suggest the opposite: EOL dependencies are a structural flaw of modern enterprise stacks, showing up consistently across ecosystems and persisting over time.
- 5–15% of components in enterprise dependency graphs are EOL, meaning EOL exposure is present even when teams believe they are only using supported top-level libraries.
- 81,000+ package versions with known CVEs are both EOL and unpatchable. HeroDevs estimates this number may actually be 400,000 across all registries.
- EOL exposure appears across all major ecosystems (Java, Python, npm), with little variation in long-term persistence, suggesting this is not limited to one language community or a single package manager.
EOL changes the risk model. A measurable share of open source vulnerabilities now fall into a category that traditional remediation workflows cannot resolve. For these components, “scan → ticket → patch” stops being a workflow and becomes a backlog generator.
FIGURE 3.4: Breakdown of EOL Components by Registry
Source: Sonatype
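Lifecycle status can be checked programmatically rather than tracked by hand. The sketch below queries the public endoflife.date API for a product’s release cycles and flags lines past their EOL date; the endpoint and field names follow that project’s published API and are assumptions here, not something prescribed by this analysis.

```python
# Sketch: flag release lines that are past end-of-life using the public
# endoflife.date API. Endpoint and field names follow that project's docs and
# may change; products use endoflife.date's naming (e.g. "python", "nodejs").
from datetime import date
import requests

def eol_cycles(product: str) -> list[str]:
    resp = requests.get(f"https://endoflife.date/api/{product}.json", timeout=30)
    resp.raise_for_status()
    flagged = []
    for cycle in resp.json():
        eol = cycle.get("eol")
        if eol is True or (isinstance(eol, str) and date.fromisoformat(eol) < date.today()):
            flagged.append(str(cycle.get("cycle")))
    return flagged

if __name__ == "__main__":
    print("EOL python lines:", eol_cycles("python"))
```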
Why EOL Allows “Forever Open Source Vulnerabilities”
Most vulnerability programs assume a predictable lifecycle: issues are disclosed, fixes are released, and risk declines as organizations patch and upgrade. EOL status breaks that logic. Once a release line is out of maintenance, upstream fixes stop, and a vulnerability can persist indefinitely — not simply because teams are slow to respond, but because the ecosystem no longer provides a patch path. At the same time, advisory coverage often degrades for unsupported versions, creating blind spots where EOL exposure is undercounted or missed entirely. And because abandoned code is reviewed less, fewer issues may be found or disclosed, so “no CVE” can indicate low scrutiny rather than safety.
In practice, EOL turns ordinary defects into “forever vulnerabilities”: liabilities that cannot be resolved through routine patching and instead require major upgrades, replacements, or commercial backports. AI-assisted development can amplify this effect by steering teams toward what is most common in historical code rather than what is currently supported. EOL components often appear “popular” in public corpora, making them more likely to be suggested and adopted as defaults in AI-generated manifests. Once introduced, those patterns can replicate across services through reuse, reinforcing dependence on software that has no viable long-term remediation path.
AI Reinforces EOL Risk in Predictable Ways:
- AI models recommend EOL components because training data reflects historical prevalence, not current support status.
EOL in the Wild: Log4Shell and Others
EOL is not just a theoretical lifecycle concern. It has measurable real-world impact during major incidents. Log4Shell illustrates how EOL status can prevent closure even when a fix exists in maintained branches. Real-world cases show how EOL obstructs remediation:
- 14% of Log4j artifacts affected by Log4Shell are now EOL, representing more than 619 million downloads in 2025, preventing closure even four years later.
- Widely deployed major versions of Java, Node.js, Python frameworks, and .NET libraries continue to see active download volume despite being unsupported.
- CVE coverage for these versions is often incomplete or missing, reinforcing misleading “clean” scan results, especially when advisories and scanners focus on supported release lines.
This is how “known vulnerabilities” become “persistent exposure.” Even if engineering teams upgrade where they can, long-tail EOL usage can keep a vulnerability class alive in production fleets, especially in large enterprises with diverse portfolios, legacy workloads, and inherited dependency trees.
The Backport Ecosystem
As EOL exposure becomes unavoidable, a secondary market has emerged to provide what upstream maintainers no longer can: security patches for unsupported release lines. This ecosystem is both a pragmatic mitigation path and a signal of structural fragility in open source lifecycle guarantees.
These programs can reduce risk when modernization is not immediately feasible. But they also underscore a core shift: for a meaningful share of enterprise dependencies, patchability is no longer guaranteed by the open source ecosystem itself. Organizations must plan for lifecycle continuity as a security requirement, not a best practice.
A GROWING RESPONSE ECOSYSTEM INCLUDES:
- Commercial extended-support providers that backport security fixes (and sometimes ship compatible, maintained forks).
- Smaller specialist vendors and consultancies that produce targeted patches for older release branches.
- Community-maintained forks that temporarily sustain patching.
How the Three Layers Compound Each Other
Together, these failures create structural vulnerability debt, or risk that accumulates faster than it can be identified, triaged, or patched. Traditional “find and fix” workflows, centered on CVE identifiers and remediation queues, cannot keep pace with this reality. When the data is incomplete, consumption is undisciplined, and the ecosystem is aging, security becomes a reactive discipline rather than a strategic one.
Modern vulnerability risk is not the product of a single failure point. It is systemic, emerging from the way multiple weaknesses interact across the SDLC. When viewed in isolation, each layer appears manageable. When combined, they create a feedback loop that sustains risk even in organizations with mature security programs. The result is not a backlog problem but a structural one:
- Long-term residual risk persists across software lifecycles, surviving refactors, rebuilds, and even organizational change.
- Attack windows widen as vulnerable and EOL components accumulate faster than teams can identify, prioritize, and remove them.
- Remediation pipelines fall behind dependency sprawl, generating more work than existing security and engineering capacity can absorb.
- Compliance artifacts drift from reality. SBOMs, audit reports, and scan results increasingly reflect what tools can see, not what software actually runs, especially when shadow downloads, or artifacts that are pulled into development without the use of a repository manager, bypass formal governance.
This is why vulnerability management feels increasingly ineffective, even as tooling improves. The system is optimized to find and fix individual vulnerabilities, while the risk itself is produced by how software is sourced, reused, and aged over time. When bad data feeds unsafe consumption, and unsafe consumption feeds unpatchable software, remediation alone cannot catch up. Organizations accumulate vulnerability debt, not because teams are inattentive, but because the system allows risk to enter faster than it can be retired.
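The drift between compliance artifacts and reality can be surfaced by diffing an SBOM against what the build actually resolves. The sketch below assumes a CycloneDX-style JSON SBOM (a "components" list with name and version) and an illustrative resolved set; component names and the drift categories are for illustration only.

```python
# Sketch: diff a CycloneDX-style SBOM against the components actually resolved
# at build time, to surface drift. Assumes the SBOM JSON has a "components"
# list with "name" and "version"; the resolved set here is illustrative.
import json

def sbom_drift(sbom_path: str, resolved: dict[str, str]) -> dict[str, list[str]]:
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    declared = {c["name"]: c.get("version", "") for c in sbom.get("components", [])}
    missing_from_sbom = [n for n in resolved if n not in declared]
    stale_in_sbom = [n for n in declared if n not in resolved]
    version_mismatch = [
        n for n in declared if n in resolved and declared[n] != resolved[n]
    ]
    return {
        "resolved but not declared": missing_from_sbom,   # e.g. shadow downloads
        "declared but not resolved": stale_in_sbom,
        "version drift": version_mismatch,
    }

# Illustrative usage: 'resolved' would come from the build's lockfile or resolver.
# print(sbom_drift("bom.json", {"commons-compress": "1.21", "jdom2": "2.0.6.1"}))
```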
How Vulnerability Debt Accumulates
Modernizing Vulnerability Management
The issues outlined in this chapter are not the result of insufficient effort or tooling, but the product of workflows designed for a slower, simpler software ecosystem. Addressing modern vulnerability risk requires modernization, not acceleration of legacy “find-and-fix” models.
To reduce structural vulnerability debt, organizations must correct weaknesses across all three layers of the system: data, consumption, and ecosystem. And, with increasing integration of AI into software pipelines, reducing this risk has never been more critical.
Data Layer
Key Actions:
- Enrich CVE/NVD: leverage data from OSV.dev, GitHub Security Advisories, upstream maintainers, and commercial intel.
- Add decision context: accurate version ranges, exploitability signals, and EOL status.
- Improve identification: fingerprint shadow-downloaded artifacts and feed curated data into AI systems.

Consumption Layer
Key Actions:
- Block by default: repository firewall + policy controls for known-vulnerable versions and shadow downloads.
- Standardize safe inputs: golden images, dependency templates, internal catalogs/allowed versions.
- Automate hygiene: PR bots + continuous refresh with compatibility-aware upgrades; govern build agents/AI to approved sources.

Ecosystem Layer
Key Actions:
- Treat EOL as critical: detect, prioritize, and remove unsupported components.
- Define exit paths: major upgrades, framework transitions, retirement plans.
- Reduce provenance risk: eliminate unsupported shadow binaries; use extended-support backports only as transitional controls; surface lifecycle status in SBOM/risk scoring.

AI Governance
Key Actions:
- Constrain recommendations: limit AI to approved catalogs and sources.
- Steer the model: retrain/condition on enriched, policy-aligned metadata (not popularity).
- Verify outputs: monitor AI-generated manifests for vulnerable/EOL/shadow patterns and enforce dependency-aware guardrails in workflow.
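As one concrete example of the “enrich CVE/NVD” action, a pipeline can cross-check each package version against additional advisory sources. The sketch below queries the public OSV.dev API for a Maven coordinate; the endpoint, payload shape, and response fields follow OSV’s published API and are assumptions here rather than a prescribed integration.

```python
# Sketch: enrich CVE/NVD data by cross-checking a package version against the
# public OSV.dev advisory database. Endpoint, payload, and response fields
# follow OSV's published API and are assumptions here; package naming is
# ecosystem-specific.
import requests

OSV_QUERY = "https://api.osv.dev/v1/query"

def osv_advisories(name: str, version: str, ecosystem: str = "Maven") -> list[str]:
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    resp = requests.post(OSV_QUERY, json=payload, timeout=30)
    resp.raise_for_status()
    return [v.get("id", "") for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    # Maven packages use "group:artifact" naming in OSV.
    print(osv_advisories("org.apache.commons:commons-compress", "1.21"))
```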
To meaningfully reduce vulnerability debt, organizations need to move beyond CVE-by-CVE remediation toward lifecycle-based modernization and governance. In practice, “reducing risk” increasingly means addressing structural weaknesses: improving the fidelity of vulnerability intelligence, making safe dependency intake the default, and proactively migrating away from EOL components that have no future patch path.
This shift is necessary because vulnerability risk is now systemic rather than isolated. Modern vulnerability management often fails at the system level, constrained by weak data quality, inefficient consumption patterns, and the compounding effects of aging software foundations. The data layer, in particular, is increasingly misaligned with real-world exposure: coverage gaps, inaccurate metadata, and delayed scoring distort prioritization, waste remediation effort, and obscure material risk.
At the same time, the ecosystem itself is aging in ways that create durable exposure. EOL and abandoned components transform open source vulnerabilities into long-term liabilities that cannot simply be patched away; they must be modernized out of the environment or supported through alternative maintenance models. AI increases the urgency of this modernization agenda. Without governance, AI can amplify each failure mode, making lifecycle modernization, not CVE tracking alone, the only sustainable path forward.