As we look back on 2025, AI and open source have fundamentally changed how software is built. Generative AI, automated pipelines, and ubiquitous open source have dramatically increased developer velocity and expanded what teams can deliver — while shifting risk into the everyday decisions developers make as code is written, generated, and assembled.
Organizations must acknowledge this shift and embrace an AI-powered SDLC, ensuring the right guardrails are embedded directly into the flow of development. Legacy "shift left" approaches haven't kept up. Moving security checks earlier often just moves friction earlier, without integrating security into how developers actually work.
In 2026, progress will depend on embedding security intelligence directly into the developer flow. The Model Context Protocol (MCP) enables this by giving developer tools, agentic AI systems, and security platforms a shared, machine-readable understanding of code, components, policies, and risk. This allows developers, and the AI systems supporting them, to receive guidance at the moment decisions are made, not after the fact.
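To make the idea of decision-time guidance concrete, here is a minimal sketch of the kind of machine-readable policy lookup an MCP-style tool could expose to an IDE or AI agent when a developer proposes a dependency. All names, policies, and the schema here are invented for illustration; this is not the MCP SDK or any Sonatype API.

```python
# Hypothetical policy data: the kind of shared, machine-readable rules a
# security platform could publish for tools and agents to consume.
POLICY = {
    "left-pad": {"status": "deprecated",
                 "advice": "Use String.prototype.padStart instead."},
    "requests": {"status": "allowed", "min_version": "2.31.0"},
}

def _parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. '2.31.0' -> (2, 31, 0)."""
    return tuple(int(part) for part in version.split("."))

def evaluate_component(name: str, version: str) -> dict:
    """Return policy guidance for a proposed dependency, at the moment it is chosen."""
    rule = POLICY.get(name)
    if rule is None:
        # Unknown components are flagged for human review rather than silently allowed.
        return {"decision": "review", "reason": f"No policy entry for {name}."}
    if rule["status"] == "deprecated":
        return {"decision": "block", "reason": rule["advice"]}
    minimum = rule.get("min_version")
    if minimum and _parse(version) < _parse(minimum):
        return {"decision": "block",
                "reason": f"Version {version} is below policy minimum {minimum}."}
    return {"decision": "allow", "reason": "Meets policy."}
```

Because the response is structured data rather than a report filed after the build, the same answer can drive an IDE hint for a human or a tool call for an agent, which is the "guidance in the flow of development" the model depends on.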
At Sonatype, we see MCP as foundational to a true shift-left model: one where developers move faster with confidence, security becomes a natural part of building software, and governance is enforced through intelligence rather than interruption. The challenge ahead isn't choosing between speed and safety. It's making the secure path the easiest one to take.
"As the future of long-standing, government-backed organizations like NIST, CISA, and MITRE grows uncertain, the cybersecurity industry stands at a pivotal moment. The expiration of the Cybersecurity Information Sharing Act and the potential defunding of the CVE program signal a stark shift in how the industry coordinates, communicates, and defends against adversaries. At the same time, blurred lines between state and private actors give hackers the opportunity to treat 'offensive' players as targets themselves.
As a result, 2026 is set to welcome a new era of threat intelligence, one defined not by a centralized authority but by the strength of private-public collaboration, modernized infrastructure, and sustainable investment. It's the organizations that value transparency, shared standards, and secure frameworks that will keep pace. Those who cling to legacy, centralized structures will quickly find themselves outmatched, outperformed, and out of luck." — Brian Fox, CTO and Co-Founder
"Compromising existing software packages proved highly effective in 2025, and that success is already shaping attacker behavior. Developers who maintain popular open source packages will be targeted more aggressively. This problem will only worsen as AI tooling makes highly authentic phishing campaigns trivial to produce. At the same time, the rise of AI-generated code will further strain already overwhelmed maintainers, increasing the difficulty of distinguishing legitimate contributions from malicious ones. How the community copes with this relentless growth in volume will be a defining factor in how this year plays out.
High-profile attacks like Shai-Hulud are likely to become blueprints for what comes next. Because open source malware naturally targets developers, we should expect a surge in threats designed to infect a developer's local environment, compromise trusted packages, and automatically republish malicious versions. With AI tooling now ubiquitous, future threats may even leverage a host's local AI environment to intelligently adapt their structure to the specific package being republished, significantly complicating static detection." — Garrett Calpouzos, Principal Security Researcher
"We've had a massive influx of both malware and regulation in recent years, spurring regulated firms toward a marriage of automation and oversight under a myriad of names: automated governance, GRC engineering, policy-as-code, compliance-as-code, and more. With the recent changes to the OSCAL ecosystem and the emergence of OpenSSF's Gemara project last year, 2026 will see increased adoption of machine-optimized GRC documents. As tooling matures and gains traction among early adopters, we'll begin to see risk mitigations treated as simply another design requirement that informs development without slowing innovation." — Eddie Knight, Open Source Program Office Lead
The future of cybersecurity intelligence won't be anchored in legacy structures — it will be shaped by a community willing to rethink how trust, collaboration, and rapid threat assessment work in practice.
The opportunity now is to design a more resilient and adaptive model, one capable of meeting the realities of the decade ahead.